1. WeAreTheWorld

    WeAreTheWorld (New member)

    Joined: 20/10/2006
    Posts: 1,099
    Likes: 0
    18. DIGITAL IMAGE PROCESSING
    http://www.mediafire.com/?fg51dzcmntv
    PREFACE
In January 1978, I began the preface to the first edition of Digital Image Processing
    with the following statement:
    The field of image processing has grown considerably during the past decade
    with the increased utilization of imagery in myriad applications coupled with
    improvements in the size, speed, and cost effectiveness of digital computers and
    related signal processing technologies. Image processing has found a significant role
    in scientific, industrial, space, and government applications.
In January 1991, in the preface to the second edition, I stated:
Thirteen years later as I write this preface to the second edition, I find the quoted
    statement still to be valid. The 1980s have been a decade of significant growth and
    maturity in this field. At the beginning of that decade, many image processing techniques
    were of academic interest only; their execution was too slow and too costly.
    Today, thanks to algorithmic and implementation advances, image processing has
    become a vital cost-effective technology in a host of applications.
    Now, in this beginning of the twenty-first century, image processing has become
    a mature engineering discipline. But advances in the theoretical basis of image processing
continue. Some of the reasons for this third edition of the book are to correct
defects in the second edition, delete content of marginal interest, and add discussion
of new, important topics. Another motivating factor is the inclusion of interactive,
computer display imaging examples to illustrate image processing concepts. Finally,
this third edition includes computer programming exercises to bolster its theoretical
content. These exercises can be implemented using the Programmer's Imaging Kernel
    System (PIKS) application program interface (API). PIKS is an International
    Standards Organization (ISO) standard library of image processing operators and
    associated utilities. The PIKS Core version is included on a CD affixed to the back
    cover of this book.
The book is intended to be an "industrial strength" introduction to digital image
    processing to be used as a text for an electrical engineering or computer science
    course in the subject. Also, it can be used as a reference manual for scientists who
    are engaged in image processing research, developers of image processing hardware
    and software systems, and practicing engineers and scientists who use image processing
    as a tool in their applications. Mathematical derivations are provided for
    most algorithms. The reader is assumed to have a basic background in linear system
    theory, vector space algebra, and random processes. Proficiency in C language programming
    is necessary for execution of the image processing programming exercises
    using PIKS.
    The book is divided into six parts. The first three parts cover the basic technologies
that are needed to support image processing applications. Part 1 contains three
    chapters concerned with the characterization of continuous images. Topics include
    the mathematical representation of continuous images, the psychophysical properties
    of human vision, and photometry and colorimetry. In Part 2, image sampling
    and quantization techniques are explored along with the mathematical representation
    of discrete images. Part 3 discusses two-dimensional signal processing techniques,
    including general linear operators and unitary transforms such as the
Fourier, Hadamard, and Karhunen–Loeve transforms. The final chapter in Part 3
    analyzes and compares linear processing techniques implemented by direct convolution
    and Fourier domain filtering.
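The equivalence that the final Part 3 chapter analyzes can be demonstrated in a few lines. This is an illustrative sketch (plain NumPy, not the PIKS API the book uses), comparing direct circular convolution with Fourier-domain filtering; the signal and kernel are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)          # a 1-D "image" row
h = rng.standard_normal(64)          # filter kernel, same length

# Direct circular convolution: y[n] = sum_k x[k] * h[(n - k) mod N]
N = len(x)
y_direct = np.array([sum(x[k] * h[(n - k) % N] for k in range(N))
                     for n in range(N)])

# Fourier-domain filtering: multiply the DFTs, then invert
y_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

print(np.allclose(y_direct, y_fft))  # the two routes agree
```

For long kernels the Fourier route is much cheaper (O(N log N) versus O(N^2)), which is the practical point of the comparison.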
    The next two parts of the book cover the two principal application areas of image
    processing. Part 4 presents a discussion of image enhancement and restoration techniques,
    including restoration models, point and spatial restoration, and geometrical
image modification. Part 5, entitled "Image Analysis," concentrates on the extraction
    of information from an image. Specific topics include morphological image
    processing, edge detection, image feature extraction, image segmentation, object
    shape analysis, and object detection.
    Part 6 discusses the software implementation of image processing applications.
    This part describes the PIKS API and explains its use as a means of implementing
    image processing algorithms. Image processing programming exercises are included
    in Part 6.
This third edition represents a major revision of the second edition. In addition to
Part 6, new topics include an expanded description of color spaces, the Hartley and
    Daubechies transforms, wavelet filtering, watershed and snake image segmentation,
    and Mellin transform matched filtering. Many of the photographic examples in the
    book are supplemented by executable programs for which readers can adjust algorithm
    parameters and even substitute their own source images.
    Although readers should find this book reasonably comprehensive, many important
    topics allied to the field of digital image processing have been omitted to limit
    the size and cost of the book. Among the most prominent omissions are the topics of
    pattern recognition, image reconstruction from projections, image understanding,
    image coding, scientific visualization, and computer graphics. References to some
    of these topics are provided in the bibliography.
    WILLIAM K. PRATT
    Los Altos, California
    August 2000
2. WeAreTheWorld
You're all downloading away but nobody says a word. Anyone who needs something, feel free to post a request.
(A quiet aside: if anyone is feeling generous, please vote 5*, that's all!)
3. badinh

    badinh (New member)

    Joined: 02/04/2004
    Posts: 806
    Likes: 0
Please keep uploading; I've just finished downloading all the books you posted.
Thanks a lot.
Voted 5*.
4. WeAreTheWorld
Post a LIST, please; uploading books nobody needs wastes a lot of time.
5. WeAreTheWorld
Forgot to ask: is this material still usable?
6. WeAreTheWorld
    2. Real-time digital signal processing
    http://www.mediafire.com/?5idd4glmegj
7. WeAreTheWorld
3. Independent Component Analysis
    Preface
    Independent component analysis (ICA) is a statistical and computational technique
    for revealing hidden factors that underlie sets of random variables, measurements, or
    signals. ICA defines a generative model for the observed multivariate data, which is
    typically given as a large database of samples. In the model, the data variables are
    assumed to be linear or nonlinear mixtures of some unknown latent variables, and
    the mixing system is also unknown. The latent variables are assumed nongaussian
    and mutually independent, and they are called the independent components of the
    observed data. These independent components, also called sources or factors, can be
    found by ICA.
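The generative model described above can be sketched in a few lines; the sources, mixing matrix, and sample count here are illustrative choices of ours, not from the book:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples = 1000

# Two nongaussian, mutually independent latent sources
# (one uniform, one Laplacian) -- the "independent components"
s = np.vstack([rng.uniform(-1, 1, n_samples),
               rng.laplace(0, 1, n_samples)])

A = np.array([[1.0, 0.5],       # unknown mixing matrix
              [0.3, 1.0]])
x = A @ s                       # observed multivariate data: linear mixtures

print(x.shape)  # (2, 1000)
```

ICA's task is the inverse problem: given only `x`, recover both `A` and `s` (up to permutation, sign, and scale).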
    ICA can be seen as an extension to principal component analysis and factor
    analysis. ICA is a much more powerful technique, however, capable of finding the
    underlying factors or sources when these classic methods fail completely.
    The data analyzed by ICA could originate from many different kinds of application
    fields, including digital images and document databases, as well as economic
    indicators and psychometric measurements. In many cases, the measurements are
    given as a set of parallel signals or time series; the term blind source separation is used
    to characterize this problem. Typical examples are mixtures of simultaneous speech
    signals that have been picked up by several microphones, brain waves recorded by
    multiple sensors, interfering radio signals arriving at a mobile phone, or parallel time
    series obtained from some industrial process.
The technique of ICA is a relatively new invention. It was first introduced
in the early 1980s in the context of neural network modeling. In the mid-1990s,
some highly successful new algorithms were introduced by several research groups,
together with impressive demonstrations on problems like the cocktail-party effect,
    where the individual speech waveforms are found from their mixtures. ICA became
    one of the exciting new topics, both in the field of neural networks, especially unsupervised
    learning, and more generally in advanced statistics and signal processing.
    Reported real-world applications of ICA on biomedical signal processing, audio signal
    separation, telecommunications, fault diagnosis, feature extraction, financial time
    series analysis, and data mining began to appear.
    Many articles on ICA were published during the past 20 years in a large number
    of journals and conference proceedings in the fields of signal processing, artificial
    neural networks, statistics, information theory, and various application fields. Several
    special sessions and workshops on ICA have been arranged recently [70, 348], and
some edited collections of articles [315, 173, 150] as well as some monographs on
    ICA, blind source separation, and related subjects [105, 267, 149] have appeared.
    However, while highly useful for their intended readership, these existing texts typically
    concentrate on some selected aspects of the ICA methods only. In the brief
    scientific papers and book chapters, mathematical and statistical preliminaries are
    usually not included, which makes it very hard for a wider audience to gain full
    understanding of this fairly technical topic.
A comprehensive and detailed textbook covering the mathematical background
and principles, algorithmic solutions, and practical applications of the present
state of the art of ICA has been missing. The present book is intended to fill
that gap, serving as a fundamental introduction to ICA.
    It is expected that the readership will be from a variety of disciplines, such
    as statistics, signal processing, neural networks, applied mathematics, neural and
cognitive sciences, information theory, artificial intelligence, and engineering.
Researchers, students, and practitioners alike will be able to use the book. We have made
every effort to make this book self-contained, so that a reader with a basic background
    in college calculus, matrix algebra, probability theory, and statistics will be able to
    read it. This book is also suitable for a graduate level university course on ICA,
    which is facilitated by the exercise problems and computer assignments given in
    many chapters.
    Scope and contents of this book
    This book provides a comprehensive introduction to ICA as a statistical and computational
    technique. The emphasis is on the fundamental mathematical principles and
    basic algorithms. Much of the material is based on the original research conducted
in the authors' own research group, which is naturally reflected in the weighting of
    the different topics. We give a wide coverage especially to those algorithms that are
    scalable to large problems, that is, work even with a large number of observed variables
    and data points. These will be increasingly used in the near future when ICA
    is extensively applied in practical real-world problems instead of the toy problems
or small pilot studies that have been predominant until recently. Correspondingly,
somewhat less emphasis is given to more specialized signal processing methods involving
convolutive mixtures, delays, and other blind source separation techniques than ICA.
    As ICA is a fast growing research area, it is impossible to include every reported
    development in a textbook. We have tried to cover the central contributions by other
    workers in the field in their appropriate context and present an extensive bibliography
    for further reference. We apologize for any omissions of important contributions that
    we may have overlooked.
    For easier reading, the book is divided into four parts.
- Part I gives the mathematical preliminaries. It introduces the general mathematical
    concepts needed in the rest of the book. We start with a crash course
    on probability theory in Chapter 2. The reader is assumed to be familiar with
    most of the basic material in this chapter, but also some concepts more specific
    to ICA are introduced, such as higher-order cumulants and multivariate
    probability theory. Next, Chapter 3 discusses essential concepts in optimization
    theory and gradient methods, which are needed when developing ICA
    algorithms. Estimation theory is reviewed in Chapter 4. A complementary
    theoretical framework for ICA is information theory, covered in Chapter 5.
    Part I is concluded by Chapter 6, which discusses methods related to principal
    component analysis, factor analysis, and decorrelation.
    More confident readers may prefer to skip some or all of the introductory
    chapters in Part I and continue directly to the principles of ICA in Part II.
- In Part II, the basic ICA model is covered and solved. This is the linear
instantaneous noise-free mixing model that is classic in ICA, and forms the core
of the ICA theory. The model is introduced and the question of identifiability of
the mixing matrix is treated in Chapter 7. The following chapters treat different
methods of estimating the model. A central principle is nongaussianity, whose
    relation to ICA is first discussed in Chapter 8. Next, the principles of maximum
    likelihood (Chapter 9) and minimum mutual information (Chapter 10) are
    reviewed, and connections between these three fundamental principles are
    shown. Material that is less suitable for an introductory course is covered
    in Chapter 11, which discusses the algebraic approach using higher-order
    cumulant tensors, and Chapter 12, which reviews the early work on ICA based
    on nonlinear decorrelations, as well as the nonlinear principal component
    approach. Practical algorithms for computing the independent components
    and the mixing matrix are discussed in connection with each principle. Next,
    some practical considerations, mainly related to preprocessing and dimension
reduction of the data, are discussed in Chapter 13, including hints to practitioners
on how to really apply ICA to their own problem. An overview and comparison
of the various ICA methods is presented in Chapter 14, which thus summarizes
    Part II.
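As a rough illustration of the Part II material, estimation by maximizing nongaussianity can be sketched with a one-unit fixed-point iteration and deflation. This sketch is ours, not code from the book; the sources, mixing matrix, and tanh nonlinearity are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
# Two nongaussian independent sources (bimodal sub-gaussian, Laplacian)
s = np.vstack([np.sign(rng.standard_normal(n)) * rng.uniform(0.5, 1.5, n),
               rng.laplace(size=n)])
A = np.array([[2.0, 1.0], [1.0, 1.5]])   # unknown mixing matrix
x = A @ s                                 # observed mixtures

# Preprocessing (cf. Chapter 13): center, then whiten so cov(z) = I
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(x @ x.T / n)
z = (E @ np.diag(d ** -0.5) @ E.T) @ x

# Fixed-point iteration maximizing nongaussianity (nonlinearity g = tanh),
# with deflation to extract one component after the other
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wx = w @ z
        w_new = (z * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx) ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # decorrelate from rows already found
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1) < 1e-9
        w = w_new
        if converged:
            break
    W[i] = w

s_est = W @ z   # recovered components, up to order, sign, and scale
```

Each recovered row of `s_est` should correlate strongly with exactly one original source, which is how success is usually checked in simulations.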
- In Part III, different extensions of the basic ICA model are given. This part is by
    its nature more speculative than Part II, since most of the extensions have been
    introduced very recently, and many open problems remain. In an introductory
    course on ICA, only selected chapters from this part may be covered. First,
    in Chapter 15, we treat the problem of introducing explicit observational noise
    in the ICA model. Then the situation where there are more independent
    components than observed mixtures is treated in Chapter 16. In Chapter 17,
    the model is widely generalized to the case where the mixing process can be of
    a very general nonlinear form. Chapter 18 discusses methods that estimate a
    linear mixing model similar to that of ICA, but with quite different assumptions:
    the components are not nongaussian but have some time dependencies instead.
    Chapter 19 discusses the case where the mixing system includes convolutions.
    Further extensions, in particular models where the components are no longer
    required to be exactly independent, are given in Chapter 20.
- Part IV treats some applications of ICA methods. Feature extraction (Chapter
    21) is relevant to both image processing and vision research. Brain imaging
    applications (Chapter 22) concentrate on measurements of the electrical and
    magnetic activity of the human brain. Telecommunications applications are
    treated in Chapter 23. Some econometric and audio signal processing applications,
    together with pointers to miscellaneous other applications, are treated in
    Chapter 24.
    Throughout the book, we have marked with an asterisk some sections that are
    rather involved and can be skipped in an introductory course.
    Several of the algorithms presented in this book are available as public domain
    software through the World Wide Web, both on our own Web pages and those of
    other ICA researchers. Also, databases of real-world data can be found there for
testing the methods. We have made a special Web page for this book, which contains
    appropriate pointers. The address is
    www.cis.hut.fi/projects/ica/book
    The reader is advised to consult this page for further information.
This book was written in cooperation between the three authors. A. Hyvärinen
    was responsible for the chapters 5, 7, 8, 9, 10, 11, 13, 14, 15, 16, 18, 20, 21, and 22;
    J. Karhunen was responsible for the chapters 2, 4, 17, 19, and 23; while E. Oja was
    responsible for the chapters 3, 6, and 12. The Chapters 1 and 24 were written jointly
    by the authors.
    Acknowledgments
    We are grateful to the many ICA researchers whose original contributions form the
    foundations of ICA and who have made this book possible. In particular, we wish to
    express our gratitude to the Series E***or Simon Haykin, whose articles and books on
    signal processing and neural networks have been an inspiration to us over the years.
    Some parts of this text are based on close cooperation with other members of our
    research group at the Helsinki University of Technology. Chapter 21 is largely based
    on joint work with Patrik Hoyer, who also made all the experiments in that chapter.
Chapter 22 is based on experiments and material by Ricardo Vigário. Section 13.2.2
is based on joint work with Jaakko Särelä and Ricardo Vigário. The experiments in
    Section 16.2.3 were provided by Razvan Cristescu. Section 20.3 is based on joint
    work with Ella Bingham, Section 14.4 on joint work with Xavier Giannakopoulos,
    and Section 20.2.3 on joint work with Patrik Hoyer and Mika Inki. Chapter 19 is
    partly based on material provided by Kari Torkkola. Much of Chapter 17 is based
    on joint work with Harri Valpola and Petteri Pajunen, and Section 24.1 is joint work
    with Kimmo Kiviluoto and Simona Malaroiu.
    Over various phases of writing this book, several people have kindly agreed to
    read and comment on parts or all of the text. We are grateful for this to Ella Bingham,
Jean-François Cardoso, Adrian Flanagan, Mark Girolami, Antti Honkela, Jarmo
    Hurri, Petteri Pajunen, Tapani Ristaniemi, and Kari Torkkola. Leila Koivisto helped
in technical editing, while Antti Honkela, Mika Ilmoniemi, Merja Oja, and Tapani
    Raiko helped with some of the figures.
    Our original research work on ICA as well as writing this book has been mainly
conducted at the Neural Networks Research Centre of the Helsinki University of Technology,
Finland. The research has been partly financed by the project "BLISS" (European
Union) and the project "New Information Processing Principles" (Academy
of Finland), which are gratefully acknowledged. Also, A. H. wishes to thank Göte
Nyman and Jukka Häkkinen of the Department of Psychology of the University of
    Helsinki who hosted his civilian service there and made part of the writing possible.
AAPO HYVÄRINEN, JUHA KARHUNEN, ERKKI OJA
    Espoo, Finland
    March 2001
    http://www.mediafire.com/?7jl1hturmtn
8. WeAreTheWorld
    4) Tracking and Kalman filtering made easy
    PREFACE
At last a book that hopefully will take the mystery and drudgery out of the g–h,
α–β, g–h–k, α–β–γ, and Kalman filters and makes them a joy. Many books
    written in the past on this subject have been either geared to the tracking filter
    specialist or difficult to read. This book covers these filters from very simple
    physical and geometric approaches. Extensive, simple and useful design
    equations, procedures, and curves are presented. These should permit the reader
    to very quickly and simply design tracking filters and determine their
    performance with even just a pocket calculator. Many examples are presented
    to give the reader insight into the design and performance of these filters.
    Extensive homework problems and their solutions are given. These problems
    form an integral instructional part of the book through extensive numerical
    design examples and through the derivation of very key results stated without
    proof in the text, such as the derivation of the equations for the estimation of the
    accuracies of the various filters [see Note (1) on page 388]. Covered also in
    simple terms is the least-squares filtering problem and the orthonormal
    transformation procedures for doing least-squares filtering.
    The book is intended for those not familiar with tracking at all as well as for
    those familiar with certain areas who could benefit from the physical insight
    derived from learning how the various filters are related, and for those who are
    specialists in one area of filtering but not familiar with other areas covered. For
    example, the book covers in extremely simple physical and geometric terms the
Gram–Schmidt, Givens, and Householder orthonormal transformation procedures
    for doing the filtering and least-square estimation problem. How these
    procedures reduce sensitivity to computer round-off errors is presented. A
simple explanation of both the classical and modified Gram–Schmidt procedures
    is given. Why the latter is less sensitive to round-off errors is explained in
    physical terms. For the first time the discrete-time orthogonal Legendre
    polynomial (DOLP) procedure is related to the voltage-processing procedures.
    Important real-world issues such as how to cope with clutter returns,
    elimination of redundant target detections (observation-merging or clustering),
editing for inconsistent data, track-start and track-drop rules, and data
    association (e.g., the nearest-neighbor approach and track before detection)
    are covered in clear terms. The problem of tracking with the very commonly
    used chirp waveform (a linear-frequency-modulated waveform) is explained
    simply with useful design curves given. Also explained is the important
    moving-target detector (MTD) technique for canceling clutter.
    The Appendix gives a comparison of the Kalman filter (1960) with the
Swerling filter (1959). This Appendix is written by Peter Swerling. It is time for
him to receive due credit for his contribution to the "Kalman–Swerling" filter.
    The book is intended for home study by the practicing engineer as well as for
    use in a course on the subject. The author has successfully taught such a course
    using the notes that led to this book. The book is also intended as a design
    reference book on tracking and estimation due to its extensive design curves,
    tables, and useful equations.
    It is hoped that engineers, scientists, and mathematicians from a broad range
of disciplines will find the book very useful. In addition to covering and relating
the g–h, α–β, g–h–k, α–β–γ, and Kalman filters, and the voltage-processing
methods for filtering and least-squares estimation, the use of the voltage-processing
methods for sidelobe canceling and adaptive-array processing is
explained and shown to be the same mathematically as the tracking and
estimation problems. The massively parallel systolic array sidelobe canceler
    processor is explained in simple terms. Those engineers, scientists, and
    mathematicians who come from a mathematical background should get a good
    feel for how the least-squares estimation techniques apply to practical systems
    like radars. Explained to them are matched filtering, chirp waveforms, methods
    for dealing with clutter, the issue of data association, and the MTD clutter
    rejection technique. Those with an understanding from the radar point of view
should find the explanation of the usually very mathematical Gram–Schmidt,
    Givens, and Householder voltage-processing (also called square-root) techniques
    very easy to understand. Introduced to them are the important concepts of
ill-conditioning and computational accuracy issues. The classical Gram–Schmidt
and modified Gram–Schmidt procedures are covered also, as well as
    why one gives much more accurate results. Hopefully those engineers,
    scientists, and mathematicians who like to read things for their beauty will
    find it in the results and relationships given here. The book is primarily intended
    to be light reading and to be enjoyed. It is a book for those who need or want to
    learn about filtering and estimation but prefer not to plow through difficult
    esoteric material and who would rather enjoy the experience. We could have
called it "The Joy of Filtering."
The first part of the text develops the g–h, g–h–k, α–β, α–β–γ, and
Kalman filters. Chapter 1 starts with a very easy heuristic development of g–h
filters for a simple constant-velocity target in "lineland" (one-dimensional
space, in contrast to the more complicated two-dimensional "flatland").
Section 1.2.5 gives the g–h filter, which minimizes the transient error resulting
from a step change in the target velocity. This is the well-known Benedict–
Bordner filter. Section 1.2.6 develops the g–h filter from a completely different,
    common-sense, physical point of view, that of least-squares fitting a straight
    line to a set of range measurements. This leads to the critically damped (also
    called discounted least-squares and fading-memory) filter. Next, several
    example designs are given. The author believes that the best way to learn a
    subject is through examples, and so numerous examples are given in Section
    1.2.7 and in the homework problems at the end of the book.
Section 1.2.9 gives the conditions (on g and h) for a g–h filter to be stable
(these conditions are derived in Problem 1.2.9-1). How to initiate tracking with
a g–h filter is covered in Section 1.2.10. A filter (the g–h–k filter) for tracking a
target having a constant acceleration is covered in Section 1.3. Coordinate
    selection is covered in Section 1.5.
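A minimal sketch of the g–h tracking loop developed in Chapter 1, with illustrative gains (g = 0.4, h = 0.1) and a made-up constant-velocity target; this is our sketch, not code from the book:

```python
import numpy as np

def gh_filter(zs, x0, v0, g, h, T=1.0):
    """Track a constant-velocity target from noisy position measurements zs."""
    x, v = x0, v0
    est = []
    for z in zs:
        x_pred = x + T * v          # predict position one scan ahead
        r = z - x_pred              # measurement residual (innovation)
        x = x_pred + g * r          # position update, weighted by g
        v = v + (h / T) * r         # velocity update, weighted by h
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(3)
truth = 5.0 + 2.0 * np.arange(50)            # constant-velocity trajectory
zs = truth + rng.normal(0, 1.0, 50)          # noisy range measurements
est = gh_filter(zs, x0=zs[0], v0=0.0, g=0.4, h=0.1)
```

After the transient, the estimates hug the true trajectory with noticeably less scatter than the raw measurements; the choice of g and h trades transient response against noise smoothing, which is exactly what the design curves in the book quantify.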
The Kalman filter is introduced in Chapter 2 and related to the Benedict–
Bordner filter, whose equations are derived from the Kalman filter in Problem
    2.4-1. Reasons for using the Kalman filter are discussed in Section 2.2, while
    Section 2.3 gives a physical feel for how the Kalman filter works in an optimum
    way on the data to give us a best estimate. The Kalman filter is put in matrix
    form in Section 2.4, not to impress, but because in this form the Kalman filter
applies way beyond lineland to multidimensional space.
    Section 2.6 gives a very simple derivation of the Kalman filter. It requires
    differentiation of a matrix equation. But even if you have never done
    differentiation of a matrix equation, you will be able to follow this derivation.
    In fact, you will learn how to do matrix differentiation in the process! If
    you had this derivation back in 1958 and told the world, it would be your
    name filter instead of the Kalman filter. You would have gotten the IEEE
    Medal of Honor and $20,000 tax-free and the $340,000 Kyoto Prize,
    equivalent to the Nobel Prize but also given to engineers. You would be world
    famous.
In Section 2.9 the Singer g–h–k Kalman filter is explained and derived.
Extremely useful g–h–k filter design curves are presented in Section 2.10
together with an example in the text and many more in Problems 2.10-1 through
2.10-17. The issue of the selection of the type of g–h filter is covered in
    Section 2.11.
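The matrix form of Chapter 2 can be sketched for the same constant-velocity problem; the process-noise model and tuning values below are illustrative assumptions of ours, not the book's design examples:

```python
import numpy as np

def kalman_cv(zs, q, r, T=1.0):
    """Kalman filter for state [position, velocity] with scalar position
    measurements. With the gains held constant it reduces to a g-h filter;
    here the gains adapt as the covariance P evolves."""
    F = np.array([[1.0, T], [0.0, 1.0]])          # state transition
    H = np.array([[1.0, 0.0]])                    # we measure position only
    Q = q * np.array([[T**3 / 3, T**2 / 2],       # process noise (white accel.)
                      [T**2 / 2, T]])
    R = np.array([[r]])                           # measurement noise variance
    x = np.array([[zs[0]], [0.0]])
    P = np.eye(2) * 100.0                         # large initial uncertainty
    est = []
    for z in zs[1:]:
        x = F @ x                                 # predict state
        P = F @ P @ F.T + Q                       # predict covariance
        S = H @ P @ H.T + R                       # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)     # measurement update
        P = (np.eye(2) - K @ H) @ P
        est.append(x[0, 0])
    return np.array(est)

rng = np.random.default_rng(10)
truth = 1.0 + 2.0 * np.arange(60)
zs = truth + rng.normal(0, 1.0, 60)
est = kalman_cv(zs, q=0.01, r=1.0)
```

Unlike the fixed-gain g–h filter, the gain starts large (trusting the measurements while uncertainty is high) and settles as the filter converges, which is the "optimum way on the data" behavior Section 2.3 describes.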
    Chapter 3 covers the real-world problem of tracking in clutter. The use of the
    track-before-detect retrospective detector is described (Section 3.1.1). Also
    covered is the important MTD clutter suppression technique (Section 3.1.2.1).
    Issues of eliminating redundant detections by observation merging or clustering
are covered (Section 3.1.2.2), as well as techniques for editing out inconsistent
    data (Section 3.1.3), combining clutter suppression with track initiation
    (Section 3.1.4), track-start and track-drop rules (Section 3.2), data association
    (Section 3.3), and track-while-scan systems (Section 3.4).
    In Section 3.5 a tutorial is given on matched filtering and the very commonly
    used chirp waveform. This is followed by a discussion of the range bias error
    problem associated with using this waveform and how this bias can be used to
advantage by choosing a chirp waveform that predicts the future: a fortune-telling
radar.
    The second part of the book covers least-squares filtering, its power and
    voltage-processing approaches. Also, the solution of the least-squares filtering
problem via the use of the DOLP technique is covered and related to voltage-processing
approaches. Another simple derivation of the Kalman filter is
presented and additional properties of the Kalman filter given. Finally, how to
handle nonlinear measurement equations and nonlinear equations of motion is
discussed (the extended Kalman filter).
    Chapter 4 starts with a simple formulation of the least-squares estimation
    problem and gives its power method solution, which is derived both by simple
    differentiation (Section 4.1) and by simple geometry considerations (Section
4.2). This is followed by a very simple explanation of the Gram–Schmidt
    voltage-processing (square-root) method for solving the least-squares problem
    (Section 4.3). The voltage-processing approach has the advantage of being
    much less sensitive to computer round-off errors, with about half as many bits
    being required to achieve the same accuracy. The voltage-processing approach
    has the advantage of not requiring a matrix inverse, as does the power method.
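The two solution routes can be contrasted in a short sketch, using a library QR factorization as a stand-in for the Gram–Schmidt orthonormalization; the matrix sizes and noise level are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((100, 3))        # measurement (design) matrix
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(100)

# Power method: solve the normal equations A^T A x = A^T b,
# which requires inverting (or solving with) A^T A
x_power = np.linalg.solve(A.T @ A, A.T @ b)

# Voltage method: orthonormalize the columns of A (QR factorization,
# i.e. Gram-Schmidt) and back-substitute through triangular R;
# A^T A is never formed, which roughly halves the precision required
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

print(np.allclose(x_power, x_qr))
```

On this well-conditioned problem both routes agree; the voltage-processing advantage shows up when A is ill-conditioned, because forming A^T A squares the condition number.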
    In Section 4.4, it is shown that the mathematics for the solution of the
    tracking least-squares problem is identical to that for the radar and
    communications sidelobe canceling and adaptive nulling problems. Furthermore,
    it is shown how the Gram?"Schmidt voltage-processing approach can be
    used for the sidelobe canceling and adaptive nulling problem.
    Often the accuracy of the measurements of a tracker varies from one time to
    another. For this case, in fitting a trajectory to the measurements, one would like
    to make the trajectory fit closer to the accurate data. The minimum-variance
    least-squares estimate procedure presented in Section 4.5 does this. The more
    accurate the measurement, the closer the curve fit is to the measurement.
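    The minimum-variance idea can be sketched in a few lines of Python (a toy
    problem of my own, with made-up noise levels): each residual is weighted by
    the inverse of its measurement variance, so the fitted trajectory hugs the
    accurate points:

    ```python
    import numpy as np

    # Illustrative setup: two groups of measurements with different accuracies.
    rng = np.random.default_rng(1)
    t = np.arange(8.0)
    A = np.column_stack([np.ones_like(t), t])
    x_true = np.array([1.0, 3.0])
    sigma = np.where(t < 4, 1.0, 0.1)        # later measurements are more accurate
    y = A @ x_true + sigma * rng.standard_normal(t.size)

    # Minimum-variance estimate: weight each residual by 1/sigma^2, so the
    # fit follows the accurate measurements more closely.
    W = np.diag(1.0 / sigma**2)
    x_mv = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

    # Equivalent whitened form: scale each row by 1/sigma, then do ordinary
    # least squares on the whitened system.
    x_white, *_ = np.linalg.lstsq(A / sigma[:, None], y / sigma, rcond=None)
    print(np.allclose(x_mv, x_white))  # prints True
    ```

    The whitened form shows why the weighted problem is still an ordinary
    least-squares problem, so the voltage-processing machinery applies to it too.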
    The fixed-memory polynomial filter is covered in Chapter 5. In Section 5.3
    the DOLP approach is applied to the tracking and least-squares problem for
    the important cases where the target trajectory or data points (of which there
    are a fixed number L + 1) are approximated by a polynomial fit of some
    degree m. This method also has the advantage of not requiring a matrix
    inversion (as does the power method of Section 4.1). Also, its solution is
    much less sensitive to computer round-off errors, half as many bits being
    required by the computer.
    The convenient and useful representation of the polynomial fit of degree m in
    terms of the target equation motion derivatives (first m derivatives) is given in
    Section 5.4. A useful general solution to the DOLP least-squares estimate for a
    polynomial fit that is easily solved on a computer is given in Section 5.5.
    Sections 5.6 through 5.10 present the variance and bias errors for the least-squares
    solution and discuss how to balance these errors. The important
    method of trend removal to lower the variance and bias errors is discussed in
    Section 5.11.
    In Chapter 5, the least-squares solution is based on the assumption of a fixed
    number L + 1 of measurements. In this case, when a new measurement is made,
    the oldest measurement is dropped in order to keep the number of measurements
    on which the trajectory estimate is based equal to the fixed number L + 1. In
    Chapter 6 we consider the case in which, when a new measurement is made, we no longer
    throw away the oldest data. Such a filter is called a growing-memory filter.
    Specifically, an mth-degree polynomial is fitted to the data set, which now
    grows with time, that is, L increases with time. This filter is shown to lead to the
    easy-to-use recursive growing-memory g-h filter used for track initiation in
    Section 1.2.10. The recursive g-h-k (m = 2) and g-h-k-l (m = 3) versions of
    this filter are also presented. The issues of stability, track initiation, root-mean-square
    (rms) error, and bias errors are discussed.
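    A minimal Python sketch of the degree m = 1 growing-memory g-h recursion
    follows. The gain formulas below are the standard expanding-memory ones for a
    first-degree fit; the constant-velocity track is a toy example of my own, not
    data from the book:

    ```python
    # Growing-memory (expanding-memory) g-h filter for a degree m = 1
    # polynomial fit: the gains shrink as the data window grows, so every
    # measurement ever made keeps contributing to the estimate.
    def growing_memory_gh(measurements, T=1.0):
        x, v = measurements[0], 0.0          # crude track initiation
        for n, y in enumerate(measurements[1:], start=1):
            g = 2 * (2 * n + 1) / ((n + 1) * (n + 2))
            h = 6 / ((n + 1) * (n + 2))
            x_pred = x + T * v               # predict ahead one scan
            resid = y - x_pred
            v = v + (h / T) * resid
            x = x_pred + g * resid
        return x, v

    # Noise-free constant-velocity target: position 2 + 3t.
    zs = [2.0 + 3.0 * t for t in range(10)]
    x_hat, v_hat = growing_memory_gh(zs)
    print(x_hat, v_hat)  # 29.0 3.0 -- the filter locks on exactly
    ```

    Note that at n = 1 the gains are g = h = 1, so the crude initialization is
    discarded; on noise-free linear data the estimate is exact thereafter.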
    In Chapter 7 the least-squares polynomial fit to the data is given for the case
    where the error of the fit is allowed to grow as the data get older. In effect, we pay
    less and less attention to the data the older they are. This type of filter is called a
    fading-memory filter or discounted least-squares filter. This filter is shown to
    lead to the useful recursive fading-memory g-h filter of Section 1.2.6 when the
    polynomial being fitted is of degree m = 1. Recursive versions of this filter that
    apply to the case when the polynomial being fitted has degree m = 2, 3, 4 are
    also given. The issues of stability, rms error, track initiation, and equivalence to
    the growing-memory filters are also covered.
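    For comparison, here is a sketch of the fading-memory (discounted
    least-squares) g-h filter for m = 1, using the critically damped gain pair
    g = 1 - θ², h = (1 - θ)². The discount factor θ and the toy track below are
    illustrative choices of mine, not values from the book:

    ```python
    # Fading-memory (discounted least-squares) g-h filter, degree m = 1.
    # The discount factor theta in (0, 1) controls how fast old data fade:
    # gains g = 1 - theta**2, h = (1 - theta)**2 (critically damped pair).
    def fading_memory_gh(measurements, theta=0.8, T=1.0):
        g = 1 - theta**2
        h = (1 - theta)**2
        x, v = measurements[0], 0.0          # crude track initiation
        for y in measurements[1:]:
            x_pred = x + T * v
            resid = y - x_pred
            v = v + (h / T) * resid
            x = x_pred + g * resid
        return x, v

    # Noise-free constant-velocity target: position 1 + 2t, 50 scans.
    zs = [1.0 + 2.0 * t for t in range(50)]
    x_hat, v_hat = fading_memory_gh(zs)
    print(x_hat, v_hat)  # settles near the true values 99 and 2
    ```

    Unlike the growing-memory filter, the gains here are constant; the transient
    from the crude initialization dies out at a rate set by θ.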
    In Chapter 8 the polynomial description of the target dynamics is given in
    terms of a linear vector differential equation. This equation is shown to be very
    useful for obtaining the transition matrix for the target dynamics by either
    numerical integration or a power series in terms of the matrix coefficient of the
    differential equation.
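    For the constant-velocity model, the transition matrix can be built from a
    power series in the matrix coefficient of the differential equation, as
    described above. The short Python sketch below (an illustrative example of
    mine) truncates the series, which is exact here because the coefficient
    matrix is nilpotent:

    ```python
    import numpy as np

    # Constant-velocity target: d/dt [x, v] = A [x, v] with A = [[0,1],[0,0]].
    # The transition matrix over time T is Phi = exp(A*T), computed here as a
    # power series in the matrix coefficient A.
    def transition_matrix(A, T, terms=10):
        Phi = np.eye(A.shape[0])
        term = np.eye(A.shape[0])
        for k in range(1, terms):
            term = term @ (A * T) / k       # (A*T)^k / k!
            Phi = Phi + term
        return Phi

    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    Phi = transition_matrix(A, T=2.0)
    print(Phi)  # [[1, 2], [0, 1]]: position advances by v*T
    ```

    Since A² = 0 for this model, the series stops after the linear term and the
    familiar [[1, T], [0, 1]] transition matrix drops out.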
    In Chapter 9 the Bayes filter is derived (Problem 9.4-1) and in turn from it
    the Kalman filter is again derived (Problem 9.3-1). In Chapters 10 through 14
    the voltage least-squares algorithms are revisited. The issues of sensitivity to
    computer round-off error in obtaining the inverse of a matrix are elaborated in
    Section 10.1. Section 10.2 explains physically why the voltage least-squares
    algorithm (square-root processing) reduces the sensitivity to computer round-off
    errors. Chapter 11 describes the Givens orthonormal transformation voltage
    algorithm. The massively parallel systolic array implementation of the Givens
    algorithm is detailed in Section 11.3. This implementation makes use of the
    CORDIC algorithm used in the Hewlett-Packard hand calculators for
    trigonometric computations.
    The Householder orthonormal transformation voltage algorithm is described
    in Chapter 12. The Gram-Schmidt orthonormal transformation voltage
    algorithm is revisited in Chapter 13, with classical and modified versions
    explained in simple terms. These different voltage least-squares algorithms are
    compared with one another in Section 14.1 and with QR decomposition in Section 14.2. A recursive
    version is developed in Section 14.3. Section 14.4 relates these voltage-
    processing orthonormal transformation methods to the DOLP approach used in
    Section 5.3 for obtaining a polynomial fit to data. The two methods are shown
    to be essentially identical. The square-root Kalman filter, which is less sensitive
    to round-off errors, is discussed in Section 14.5.
    Up until now the deterministic part of the target model was assumed to be
    time invariant. For example, if a polynomial fit of degree m is used for the
    target dynamics, the coefficients of this polynomial fit are constant with time.
    Chapter 15 treats the case of time-varying target dynamics.
    The Kalman and Bayes filters developed up until now depend on the
    observation scheme being linear. This is not always the situation. For example,
    if we are measuring the target range R and azimuth angle θ but keep track of the
    target using the east-north x, y coordinates of the target with a Kalman filter,
    then errors in the measurement of R and θ are not linearly related to the
    resulting errors in x and y because

        x = R cos θ        (1)

    and

        y = R sin θ        (2)

    where θ is the target angle measured relative to the x axis. Section 16.2 shows
    how to simply handle this situation. Basically what is done is to linearize
    Eqs. (1) and (2) by using the first terms of a Taylor expansion of the inverse
    equations to (1) and (2), which are

        R = √(x² + y²)        (3)

        θ = tan⁻¹(y/x)        (4)
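    The linearization of Eqs. (3) and (4) can be sketched numerically (an
    illustrative Python check of my own): form the Jacobian of (R, θ) with
    respect to (x, y) from the first-order Taylor terms and compare it with a
    finite-difference approximation:

    ```python
    import numpy as np

    # First-order Taylor (Jacobian) terms of R = sqrt(x^2 + y^2),
    # theta = atan2(y, x) about a nominal point -- the linearization used
    # by the extended Kalman filter.
    def measurement_jacobian(x, y):
        R = np.hypot(x, y)
        return np.array([[x / R,     y / R],       # dR/dx,     dR/dy
                         [-y / R**2, x / R**2]])   # dtheta/dx, dtheta/dy

    def h(x, y):
        return np.array([np.hypot(x, y), np.arctan2(y, x)])

    x0, y0 = 3.0, 4.0
    H = measurement_jacobian(x0, y0)

    # Check against small finite-difference perturbations.
    eps = 1e-6
    num = np.column_stack([(h(x0 + eps, y0) - h(x0, y0)) / eps,
                           (h(x0, y0 + eps) - h(x0, y0)) / eps])
    print(np.allclose(H, num, atol=1e-4))  # prints True
    ```

    The agreement confirms that keeping only the first Taylor terms gives the
    locally linear measurement model the Kalman filter needs.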
    Similarly, the equations of motion have to be linear to apply the Kalman and
    Bayes filters. Section 16.3 describes how a nonlinear equation of motion can be
    linearized, again by using the first term of a Taylor expansion of the nonlinear
    equations of motion. The important example of linearization of the nonlinear
    observation equations obtained when observing a target in spherical coordinates
    (R, θ, φ) while tracking it in rectangular (x, y, z) coordinates is given. The
    example of the linearization of the nonlinear target dynamics equations
    obtained when tracking a projectile in the atmosphere is detailed. Atmospheric
    drag on the projectile is factored in.
    In Chapter 17 the technique for linearizing the nonlinear observation
    equations and target dynamics equations in order to apply the recursive Kalman
    and Bayes filters is detailed. The application of these linearizations to a
    nonlinear problem in order to handle the Kalman filter is called the extended
    Kalman filter. It is also the filter Swerling originally developed (without the
    target process noise). The Chapter 16 application of the tracking of a ballistic
    projectile through the atmosphere is again used as an example.
    The form of the Kalman filter given in Kalman's original paper is different
    from the forms given up until now. In Chapter 18 the form given until now is
    related to the form given by Kalman. In addition, some of the fundamental
    results given in Kalman's original paper are summarized here.
    ELI BROOKNER
    Sudbury, MA
    January 1998
    http://www.mediafire.com/?54dedttcme1
  9. hanoian142

    hanoian142 New member

    Joined:
    06/10/2003
    Posts:
    167
    Likes received:
    0
    Could you please share books number 6 and 80 :)
    6) Fibre optic communication systems
    80) Fundamentals of photonics
    I'll rate you 5* as a thank-you in advance
  10. WeAreTheWorld

    WeAreTheWorld New member

    Joined:
    20/10/2006
    Posts:
    1.099
    Likes received:
    0
    Here is book 6:
    http://www.mediafire.com/?5vblowkovnh
    6) Fibre optic communication systems
    Preface
    Since the publication of the first edition of this book in 1992, the state of the art of
    fiber-optic communication systems has advanced dramatically despite the relatively
    short period of only 10 years between the first and third editions. For example, the
    highest capacity of commercial fiber-optic links available in 1992 was only 2.5 Gb/s.
    A mere 4 years later, the wavelength-division-multiplexed (WDM) systems with the
    total capacity of 40 Gb/s became available commercially. By 2001, the capacity of
    commercial WDM systems exceeded 1.6 Tb/s, and the prospect of lightwave systems
    operating at 3.2 Tb/s or more was in sight. During the last 2 years, the capacity
    of transoceanic lightwave systems installed worldwide has exploded. Moreover, several
    other undersea networks were in the construction phase in December 2001. A
    global network covering 250,000 km with a capacity of 2.56 Tb/s (64 WDM channels
    at 10 Gb/s over 4 fiber pairs) is scheduled to be operational in 2002. Several conference
    papers presented in 2001 have demonstrated that lightwave systems operating at a bit
    rate of more than 10 Tb/s are within reach. Just a few years ago it was unimaginable
    that lightwave systems would approach the capacity of even 1 Tb/s by 2001.
    The second edition of this book appeared in 1997. It has been well received by
    the scientific community involved with lightwave technology. Because of the rapid advances
    that have occurred over the last 5 years, the publisher and I deemed it necessary
    to bring out the third edition if the book were to continue to provide a comprehensive
    and up-to-date account of fiber-optic communication systems. The result is in your
    hands. The primary objective of the book remains the same. Specifically, it should be
    able to serve both as a textbook and a reference monograph. For this reason, the emphasis
    is on the physical understanding, but the engineering aspects are also discussed
    throughout the text.
    Because of the large amount of material that needed to be added to provide comprehensive
    coverage, the book size has increased considerably compared with the first
    edition. Although all chapters have been updated, the major changes have occurred in
    Chapters 6-9. I have taken this opportunity to rearrange the material such that it is better
    suited for a two-semester course on optical communications. Chapters 1-5 provide
    the basic foundation while Chapters 6-10 cover the issues related to the design of advanced
    lightwave systems. More specifically, after the introduction of the elementary
    concepts in Chapter 1, Chapters 2-4 are devoted to the three primary components of a
    fiber-optic communication system: optical fibers, optical transmitters, and optical receivers.
    Chapter 5 then focuses on the system design issues. Chapters 6 and 7 are devoted to
    the advanced techniques used for the management of fiber losses and chromatic dispersion,
    respectively. Chapter 8 focuses on the use of wavelength- and time-division
    multiplexing techniques for optical networks. Code-division multiplexing is also a part
    of this chapter. The use of optical solitons for fiber-optic systems is discussed in Chapter
    9. Coherent lightwave systems are now covered in the last chapter. More than 30%
    of the material in Chapters 6-9 is new because of the rapid development of the WDM
    technology over the last 5 years. The contents of the book reflect the state of the art of
    lightwave transmission systems in 2001.
    The primary role of this book is as a graduate-level textbook in the field of optical
    communications. An attempt is made to include as much recent material as possible
    so that students are exposed to the recent advances in this exciting field. The book can
    also serve as a reference text for researchers already engaged in or wishing to enter
    the field of optical fiber communications. The reference list at the end of each chapter
    is more elaborate than what is common for a typical textbook. The listing of recent
    research papers should be useful for researchers using this book as a reference. At
    the same time, students can benefit from it if they are assigned problems requiring
    reading of the original research papers. A set of problems is included at the end of
    each chapter to help both the teacher and the student. Although written primarily for
    graduate students, the book can also be used for an undergraduate course at the senior
    level with an appropriate selection of topics. Parts of the book can be used for several
    other related courses. For example, Chapter 2 can be used for a course on optical
    waveguides, and Chapter 3 can be useful for a course on optoelectronics.
    Many universities in the United States and elsewhere offer a course on optical communications
    as a part of their curriculum in electrical engineering, physics, or optics. I
    have taught such a course since 1989 to the graduate students of the Institute of Optics,
    and this book indeed grew out of my lecture notes. I am aware that it is used as a textbook
    by many instructors worldwide, a fact that gives me immense satisfaction. I am
    acutely aware of a problem that is a side effect of an enlarged revised edition. How can
    a teacher fit all this material in a one-semester course on optical communications? I
    have to struggle with the same question. In fact, it is impossible to cover the entire book
    in one semester. The best solution is to offer a two-semester course covering Chapters
    1 through 5 during the first semester, leaving the remainder for the second semester.
    However, not many universities may have the luxury of offering a two-semester course
    on optical communications. The book can be used for a one-semester course provided
    that the instructor makes a selection of topics. For example, Chapter 3 can be skipped
    if the students have taken a laser course previously. If only parts of Chapters 6 through
    10 are covered to provide students a glimpse of the recent advances, the material can
    fit in a single one-semester course offered either at the senior level for undergraduates
    or to graduate students.
    This e***ion of the book features a compact disk (CD) on the back cover provided
    by the Optiwave Corporation. The CD contains a state-of-the-art software package
    suitable for designing modern lightwave systems. It also contains additional problems
    for each chapter that can be solved by using the software package. Appendix E provides
    more details about the software and the problems. It is my hope that the CD will help
    to train the students and will prepare them better for an industrial job.
    A large number of persons have contributed to this book either directly or indirectly.
    It is impossible to mention all of them by name. I thank my graduate students and the
    students who took my course on optical communication systems and helped improve
    my class notes through their questions and comments. Thanks are due to many instructors
    who not only have adopted this book as a textbook for their courses but have also
    pointed out the misprints in previous editions, and thus have helped me in improving
    the book. I am grateful to my colleagues at the Institute of Optics for numerous discussions
    and for providing a cordial and productive atmosphere. I appreciated the help
    of Karen Rolfe, who typed the first edition of this book and made numerous revisions
    with a smile. Last, but not least, I thank my wife, Anne, and my daughters, Sipra,
    Caroline, and Claire, for understanding why I needed to spend many weekends on the
    book instead of spending time with them.
    Govind P. Agrawal
    Rochester, NY
    December 2001
    Book 80 is too big; please wait another day or two.
