Adaptive Filters


The coefficients of the adaptive filter are updated with the help of the least mean square (LMS) algorithm. First, the wavelet transform based adaptive filter (WTAF) is described and analyzed for its Wiener optimal solution. Then the performance of the WTAF is studied with the help of learning curves for three different convergence factors: (1) a constant convergence factor, (2) a time-varying convergence factor, and (3) an exponentially weighted convergence factor.
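For concreteness, here is a minimal time-domain LMS sketch in Python. It is our own illustration; the filter length M and the constant convergence factor mu are assumed names, and the WTAF applies the same update to wavelet-transformed regression vectors:

```python
import numpy as np

def lms(d, u, M=8, mu=0.01):
    """Time-domain LMS filter: adapt M coefficients so that w @ u_k tracks d[k].
    d, u: NumPy arrays of desired and input samples."""
    w = np.zeros(M)
    e = np.zeros(len(u))
    for k in range(M - 1, len(u)):
        u_k = u[k - M + 1:k + 1][::-1]   # regression vector, newest sample first
        e[k] = d[k] - w @ u_k            # a priori error
        w += mu * e[k] * u_k             # coefficient update with convergence factor mu
    return w, e
```

The time-varying and exponentially weighted variants amount to replacing the scalar mu by a sequence mu_k or, in the wavelet domain, by one weight per scale.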


The exponentially weighted convergence factor is proposed to introduce scale-based variation into the weight update equation. It is shown for two different sets of data that the rate of convergence increases significantly for all three WTAF structures compared to that of the time-domain LMS algorithm.


The high convergence rates of the WTAF give us reason to expect that it will perform well in tracking rapid changes in a signal. However, for physical reasons, the error signal must be real-valued, and therefore a complex-valued LMS algorithm is driven only by a real-valued error fraction. In [21], it was shown that this variant indeed behaves robustly, while an alternative variant employing a complex-valued error and a real-valued regressor does not.

Both variants nevertheless show identical MSE behavior.
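A sketch of the robust variant in Python may help; this is our own minimal illustration, not code from [21], and the names M and mu are assumptions:

```python
import numpy as np

def lms_real_error(d, u, M=8, mu=0.01):
    """Complex-valued LMS driven by the real-valued error fraction only."""
    w = np.zeros(M, dtype=complex)
    for k in range(M - 1, len(u)):
        u_k = u[k - M + 1:k + 1][::-1]       # complex regression vector
        e = d[k] - w @ u_k                   # complex a priori error
        w += mu * e.real * np.conj(u_k)      # only Re{e} enters the update
    return w
```

The non-robust alternative mentioned above would instead keep the complex-valued error and use only the real part of the regressor in the update.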


After so much disturbing news on potential instability, it is now time to take a closer look into the stability of adaptive filters, as we need to understand the various notions of stability. Depending on the application, a minimal remaining error energy may be desired (signal adaptation), but also the correct knowledge of the parameters w may be desired (system adaptation). Due to this procedure, MSE-stability always includes some form of convergence. If an additive stationary noise process v_k is assumed, the algorithm converges into a nonzero steady state.
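To make the two notions concrete, here is a minimal sketch in common system-identification notation (the symbols are ours, chosen to match w and v_k above):

$$ d_k = u_k^{H} w + v_k, \qquad e_k = d_k - u_k^{H}\hat{w}_k, $$

where signal adaptation aims at a minimal error energy $E\{|e_k|^2\}$, whereas system adaptation aims at a minimal parameter error $E\{\|w - \hat{w}_k\|^2\}$. With stationary noise v_k, the error energy cannot fall below the noise floor, which is the nonzero steady state mentioned above.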

In the context of adaptive filters, it was introduced by Kailath, Sayed, and Hassibi [29]. Further work over the next 10 years [25, 30–33] showed that more and more adaptive filters exhibit this property. The driving sequence u_k only influences the algorithmic mapping from input to output.

(Figure: gradient-type method described as a feedback structure with an allpass (lossless) forward path and a lossy feedback path.)

As the stability result depends on the small-gain theorem [27, 28], the resulting step-size bound is conservative.
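The underlying argument, in hedged form: if two operators with l2-gains $\gamma_1$ and $\gamma_2$ are connected in feedback, the small-gain theorem guarantees l2-stability of the loop whenever

$$ \gamma_1\,\gamma_2 < 1 . $$

Here the lossless forward path contributes $\gamma_1 \le 1$, so the step-size bound follows from forcing the gain of the lossy feedback path, which depends on the step-size, below one. Requiring this product condition to hold for all admissible inputs is what makes the bound conservative.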

While for the classic LMS algorithm the observation coincides very sharply with the predicted bounds, for many other algorithms the bound obtained is indeed conservative. For gradient-type algorithms, it was concluded that the worst case occurs if the noise sequence exactly compensates the undistorted error. If the latter is also required, the driving signal vectors u_k need to be persistently exciting. If the input sequence is a Cauchy sequence, so is the output. Convergence in this context means that a range of step-size parameters (or alternative design parameters) exists for which, even under worst-case sequences, no divergence occurs.
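Persistent excitation is commonly formalized as follows (our notation; the exact condition in the source may differ): there exist a window length K and constants $0 < \alpha \le \beta < \infty$ such that

$$ \alpha I \;\le\; \sum_{i=k}^{k+K-1} u_i u_i^{H} \;\le\; \beta I \qquad \text{for all } k, $$

i.e., over every window of K steps the regressors excite all directions of the parameter space.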

We can conclude that for bounded random sequences, l2-stability leads to MSE-stability, but not vice versa. The robustness framework was even able to handle algorithms as different as the Gauss-Newton-type Algorithm 6, whose most famous special case is the recursive least squares (RLS) algorithm, but also single-layer neural network adaptations.
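For reference, a compact sketch of the RLS special case mentioned above; this is the standard textbook recursion in our own Python notation, with the forgetting factor lam and the initialization delta as assumed names:

```python
import numpy as np

def rls(d, U, lam=0.99, delta=100.0):
    """Recursive least squares: rows of U are regressors, d the targets."""
    N, M = U.shape
    w = np.zeros(M)
    P = delta * np.eye(M)            # estimate of the inverse correlation matrix
    for k in range(N):
        u = U[k]
        Pu = P @ u
        g = Pu / (lam + u @ Pu)      # gain vector
        w = w + g * (d[k] - w @ u)   # update with the a priori error
        P = (P - np.outer(g, Pu)) / lam
    return w
```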

Up to this point, the occurrence of a linear filter in the error path may have been regarded as a mere curiosity among the many variations of adaptive-filter algorithms and applications, an exemplary exception requiring different treatment, while the majority of adaptive-filter algorithms work accurately according to an MSE-based theory. The robustness description developed here allows stability conditions to be defined for all those cases very accurately.

Back to our historical walk. In the 1990s, adaptive filters for neural networks and, in particular, fast versions of LS techniques, so-called Fast-RLS algorithms, were in focus. To include them in practical applications, their LS nature was often sacrificed and time-variant step-sizes were introduced. With such step-sizes, however, their nature was closer to that of stochastic, gradient-type algorithms. One of these RLS derivatives is the affine projection (AP) algorithm [38], which speeds up convergence compared to its simpler gradient counterpart by taking P past regression directions into account.
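A minimal AP sketch in Python illustrates the reuse of P past regression directions. It is our own illustration under assumed names (the regularization eps in particular), with real-valued signals for brevity:

```python
import numpy as np

def affine_projection(d, u, M=8, P=4, mu=0.5, eps=1e-6):
    """AP algorithm: project the update onto the span of the P most recent
    regression vectors. d, u: NumPy arrays of desired and input samples."""
    w = np.zeros(M)
    reg = lambda k: u[k - M + 1:k + 1][::-1]           # regressor at time k
    for k in range(M + P - 2, len(u)):
        U_k = np.array([reg(k - p) for p in range(P)]) # P x M regressor matrix
        e = d[k - np.arange(P)] - U_k @ w              # P a priori errors
        # regularized solve instead of an explicit matrix inverse
        w = w + mu * U_k.T @ np.linalg.solve(U_k @ U_k.T + eps * np.eye(P), e)
    return w
```

For P = 1 this reduces to the normalized LMS update, which is the simpler gradient counterpart referred to above.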

A fast version of this [39, 40] is the basis for millions of copies of such algorithms running today in electric echo cancellation devices to reduce the echoes of long-distance telephone cables. Unlike the original algorithm, they use a sophisticated step-size control to prevent unstable behavior in double-talk situations [41], that is, when both talkers are active. The resulting algorithm is called the pseudo affine projection (PAP) algorithm (see Algorithm 8), as with a moderate step-size the original property is lost.

However, this is not the only algorithm whose stability problems remained undiscovered for a long time. A well-known adaptive algorithm for zero-forcing (ZF) equalization is the gradient algorithm by Lucky [43]; see Algorithm 9.
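In hedged form, the characteristic feature of such ZF adaptation can be sketched as follows (our notation, not the exact formulation of [43]; Lucky's original algorithm additionally applies sign nonlinearities): the equalizer output is formed from the received vector $r_k$, while the update correlates the error with the vector of decided symbols $\hat{\mathbf{a}}_k$,

$$ z_k = w_k^{T} r_k, \qquad e_k = \hat{a}_k - z_k, \qquad w_{k+1} = w_k + \mu\, e_k\, \hat{\mathbf{a}}_k . $$

The vector multiplying the error in the update thus differs from the vector forming the output, a structure whose stability implications are taken up again below.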


In the well-known textbook by Proakis [44] we can read: "That is, it possesses a global minimum and no relative minima." The argumentation sounded very convincing until the algorithmic behavior was analyzed thoroughly in [45], and it was found that there indeed exist channel conditions and data sequences that cause the algorithm to diverge, even for the smallest step-sizes. See also [46] for alternative non-robust ZF equalizer algorithms.

Such examples may corroborate the suspicion that they are all related to a linear filter of some form in the error path and thus depend on an SPR condition. In the meantime, however, they have been correctly analyzed by the now existing robustness techniques [42, 45]. Moreover, other problems can cause stability trouble when the driving signal lacks persistent excitation. Once we consider algorithms with matrix inverses, such as RLS algorithms, it is well understood that with a lack of persistent excitation a null space opens up in the solution, offering the algorithm a wide space in which to diverge.

Also in applications such as stereo hands-free telephones [47], null spaces can occur as part of the solution and cause adaptive filters to diverge. In such cases, regularization and leakage factors are often applied to force the null spaces out of the obtained estimates.
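One common form of such a remedy is a leaky update (a generic sketch in our notation, not necessarily the exact scheme used in [47]):

$$ w_{k+1} = (1 - \mu\alpha)\, w_k + \mu\, u_k\, e_k^{*}, \qquad \alpha > 0, $$

where the leakage factor $\alpha$ continuously shrinks coefficient components that the excitation does not support, closing the null space at the price of a small bias.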


But the existence of null spaces is not necessarily a reason for a lack of robustness. It may thus come as a surprise that there exists an adaptive algorithm for blind channel estimation [48, 49] that is indeed robust [46, 50], although it is known that most blind methods are non-robust [51]; see Algorithm 10 for details. It uses the classical two-channel setup as depicted in Fig. This result, however, does not mean that all blind algorithms are robust; some more comments on this topic are provided in Section 8.

While it was now possible to show robustness conditions for many known adaptive filters, for some of them the problem remained unsolved, as they cannot be brought into the feedback structure depicted in Fig. Typically, these problematic adaptive filters employ the following general update form:
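The equation itself is missing from our copy; a plausible reconstruction, consistent with the vectors x_k and u_k referred to below (the exact form in the source may differ), is

$$ \hat{w}_{k+1} = \hat{w}_k + \mu\, x_k\, e_k^{*}, \qquad e_k = d_k - u_k^{H}\hat{w}_k, $$

with $x_k \ne u_k$ in general, i.e., the vector applied in the update differs from the vector forming the error.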


This condition is, similar to the small-gain theorem that was applied before, a conservative condition. However, due to the linear operators involved, it is now simpler to analyze converse conditions, i.e., conditions under which divergence actually occurs. If observation noise is added again, compared to (1), conditions of a similar form are obtained. The bounds obtained this way, however, appear to be tighter than or equivalent to the previous ones based on robustness. A good first example is the LMS algorithm. Due to the conservatism of the small-gain theorem, we could not answer whether its bound is tight. The stability bound of the LMS algorithm is in fact tight, as both methods deliver the same bound.
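The core of the SVD-based argument can be sketched as follows for the noise-free case (our notation, matching the matrices B_k used further below): the parameter-error vector $\tilde{w}_k = w - \hat{w}_k$ propagates through a linear mapping,

$$ \tilde{w}_{k+1} = B_k\, \tilde{w}_k, \qquad B_k = I - \mu\, x_k u_k^{H}, $$

so that $\sigma_{\max}(B_k) \le 1$ for all k guarantees a non-increasing parameter-error norm, while a singular value above one marks the direction in which worst-case growth is possible.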

While the SVD-based method provides an identical bound in this case, for many other algorithms larger bounds could be identified. Note, however, that a singular value larger than one, and thus the loss of robustness, does not necessarily mean that the system must behave in an unstable manner. In order to cause instability, the driving signal must ensure that the condition is violated at every update step (or at the majority of update steps), not just once.

As additional constraints are sometimes imposed on the adaptive filter, this potential worst-case sequence may not exist, and the algorithm may behave in a robust manner although one singular value is larger than one.


Limitations on worst-case sequences typically arise from additional constraints due to the filter application and structure. A linear combiner with input u_k from an arbitrary alphabet is very likely to admit a worst-case sequence leading to divergence, while an adaptive filter with FIR structure allows only one degree of freedom per iteration, as all other elements of the update vector are already given. If the driving sequence is further restricted to a limited alphabet, say binary phase shift keying (BPSK), it can very well happen that a singular value larger than one exists but the excitation can never act in the direction of its corresponding vector.

This brings us to the question of systematically finding worst-case sequences. For the above-mentioned example, this is equivalent to requiring the violating condition at every step. Indeed, for arbitrary vectors x_k, u_k, it is relatively straightforward to find such sequences and prove divergence. One algorithm affected by this is the proportionate normalized LMS (PNLMS) algorithm. Originally derived by Duttweiler [53], it can be viewed as a time-variant counterpart of the algorithm by Makino [54]; both variants are shown in Algorithm. During the next 10 years, the algorithm became very popular, as a clever control of the diagonal step-size matrix can significantly speed up convergence [55].

This can easily be shown, as the product of consecutive matrices B_k is equivalent to Eq. The asymmetric form of matrix B_k can thus be made symmetric, and standard theory can be applied. Duttweiler replaced L by a time-variant diagonal matrix L_k, for which such a symmetry correction in the style of (5) no longer works.
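For a fixed diagonal matrix L, the symmetry correction works by a similarity transform; a sketch in our notation, assuming a proportionate update of the form $w_{k+1} = w_k + \mu\, L\, u_k e_k^{*}$:

$$ B_k = I - \mu\, L\, u_k u_k^{H}, \qquad L^{-1/2} B_k L^{1/2} = I - \mu\, (L^{1/2}u_k)(L^{1/2}u_k)^{H}, $$

which is Hermitian. Over a product of consecutive B_k, the factors $L^{1/2}$ and $L^{-1/2}$ telescope, so standard theory applies. With a time-variant L_k, the factors $L_k^{-1/2}$ and $L_{k-1}^{1/2}$ no longer cancel, which is why the correction fails.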

He showed his algorithm to be mean-square convergent. First attempts at showing robustness, however, turned out to require rather limiting additional conditions on L_k [56]. In [57], it is finally shown that the PNLMS algorithm can indeed become non-robust even if the positive definite entries of L_k fluctuate only slightly.


We are further interested in understanding the stability of adaptive filters with asymmetric update forms. To this end, we apply the SVD-based method to so-called linearly-coupled adaptive filters, where two adaptive filters use linear combinations of their error terms, as sketched below. Such a coupling may be undesired and caused by the implementation, or desired in order to achieve particular convergence properties [58].
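A hedged sketch of such a coupling (our notation; the coefficients $a_{ij}$ are illustrative): each of the two filters is updated with a linear combination of both a priori errors,

$$ w^{(1)}_{k+1} = w^{(1)}_k + \mu_1\, u_k \left(a_{11} e^{(1)}_k + a_{12} e^{(2)}_k\right)^{*}, \qquad w^{(2)}_{k+1} = w^{(2)}_k + \mu_2\, x_k \left(a_{21} e^{(1)}_k + a_{22} e^{(2)}_k\right)^{*} . $$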

Figure 4 depicts the setup. This structure turns out to be the vehicle for analyzing cascaded and partitioned adaptive algorithms and is thus of high interest. In a partitioned algorithm, the input vector is split into sections that run with different individual step-sizes.

We split the entire parameter-error vector into two parts, say g and h, and correspondingly we use two partitions, say u_k and x_k, as regression vectors. Even if the two step-sizes are not identical, the update error is still linearly dependent for both partitions, causing one singular value to be larger than one and thus violating robustness.

Only the weaker MSE-stability remains. In the following, we demonstrate this behavior on a simple example in which we first run the PNLMS algorithm with worst-case sequences rather than random sequences. We choose a diagonal matrix. If we further add the error terms, we recognize that, while for a fixed matrix the system mismatch has some potential to grow initially, it cannot keep growing and runs into a steady state.
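The following toy exploration in Python mirrors that experiment in spirit; it is entirely our own construction, not the setup from the source. A proportionate-type normalized update runs with a diagonal matrix that either stays fixed or alternates between two slightly different values, and at each step we greedily pick the bipolar regressor that maximizes the parameter-error norm:

```python
import numpy as np
from itertools import product

M, mu, steps = 2, 1.0, 50
bipolar = [np.array(s) for s in product([-1.0, 1.0], repeat=M)]

def run(L_of_k, w0):
    """Greedy worst-case search: at each step choose the bipolar regressor u
    that maximizes ||(I - mu * L u u^T / (u^T L u)) w~|| and track the norm."""
    w, norms = w0.copy(), []
    for k in range(steps):
        L = L_of_k(k)
        w = max(((np.eye(M) - mu * np.outer(L @ u, u) / (u @ L @ u)) @ w
                 for u in bipolar), key=np.linalg.norm)
        norms.append(np.linalg.norm(w))
    return norms

w0 = np.array([1.0, -3.0]) / np.sqrt(10.0)          # initial parameter error
fixed = run(lambda k: np.diag([1.5, 0.5]), w0)      # fixed step-size matrix
fluct = run(lambda k: np.diag([1.5, 0.5]) if k % 2 else np.diag([0.5, 1.5]), w0)
print("fixed L :", " ".join(f"{n:.3f}" for n in fixed[:8]))
print("fluct. L:", " ".join(f"{n:.3f}" for n in fluct[:8]))
```

Because the regressor alphabet is finite, the per-step maximization is an exhaustive search, which is exactly the situation described next.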

Note that for bipolar driving signals as well as FIR filter structures, the worst-case sequences are relatively simple to find, since only a finite space has to be searched exhaustively, as we recognize in Fig.

Adaptive filters are classified into two main groups: linear and nonlinear. Linear adaptive filters compute an estimate of a desired response by using a linear combination of the available set of observables applied to the input of the filter. Otherwise, the adaptive filter is said to be nonlinear.