IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS—II: EXPRESS BRIEFS, VOL. 68, NO. 6, JUNE 2021 2237
Steady-State Mean-Square Error Analysis for
Non-Negative Least Lncosh Algorithm
Zeyang Sun, Yingsong Li, Senior Member, IEEE, Yibing Li, Tao Jiang, Member, IEEE, and Wei Sun
Abstract—The lncosh cost function is the natural logarithm of a hyperbolic cosine function, and it has drawn growing attention due to its robustness to impulsive noise. In this brief, a nonnegative adaptive algorithm based on the lncosh function, named NNLlncosh, is proposed; it is derived by incorporating a nonnegativity constraint into the lncosh cost function to solve nonnegativity-constrained optimization problems under impulsive system noise. The steady-state excess mean-square error (EMSE) of the newly constructed NNLlncosh algorithm is derived. Computer simulation results validate the theoretical analysis and verify the excellent behavior of the NNLlncosh algorithm under various non-Gaussian system noises.
Index Terms—Adaptive filters, nonnegativity constraints,
lncosh cost function, impulsive noise, steady-state performance.
I. INTRODUCTION
Constrained adaptive filtering algorithms have been used in a wide range of applications such as remote sensing [1]–[3], image processing [4], [5], antenna array processing [6], [7], and channel estimation [8]. Their unique error-correcting ability helps constrained adaptive filters prevent the accumulation of errors, thereby avoiding physically absurd and unexplainable results [9]. The constraints are usually deterministic and constructed from prior knowledge of the system at hand. Among the extensively used constraints, the nonnegativity constraint is one frequently encountered in the real world [10]; for instance, intensities [11], chemical concentrations [12], and material abundance fractions [13], [14] all need to satisfy a nonnegativity constraint.
As a famous nonnegativity-constrained adaptive filtering algorithm, the nonnegative least mean square (NNLMS) algorithm was proposed in [15] and has become a powerful tool tailored for online identification of systems under a nonnegativity constraint. Although the NNLMS shows excellent performance, it suffers from limited convergence performance owing to unbalanced convergence rates. Thus, a great many modified NNLMS algorithms have been developed [16], [17]. For example, the inversely proportional
Manuscript received December 15, 2020; accepted December 28, 2020.
Date of publication January 1, 2021; date of current version May 27, 2021.
This work was partially supported in part by the National Key Research and
Development Program of China (2016YFE0111100), in part by the Science
and Technology innovative Talents Foundation of Harbin (2016RAXXJ044),
and in part by the Key Research and Development Program of Heilongjiang
(GX17A016). This brief was recommended by Associate Editor G. J. Dolecek.
(Corresponding author: Yingsong Li.)
Zeyang Sun, Yingsong Li, Yibing Li, and Tao Jiang are with the College of
Information and Communication Engineering, Harbin Engineering University,
Harbin 150001, China (e-mail: liyingsong@ieee.org).
Wei Sun is with the Mobile Interconnection HW Department, ZTE Research
and Development Center (Xi’an), Xi’an 518057, China.
Color versions of one or more figures in this article are available at
https://doi.org/10.1109/TCSII.2020.3048287.
Digital Object Identifier 10.1109/TCSII.2020.3048287
NNLMS (IP-NNLMS) algorithm was developed in [16] to address the NNLMS's limitations. Besides, the exponential NNLMS (Exp-NNLMS) algorithm developed in [17] can alleviate the imbalance of convergence speeds.
To date, the aforementioned LMS-based nonnegative adaptive filtering algorithms have been constructed on the basis of the mean-square-error (MSE) criterion because of its simple computation and its optimality under Gaussian assumptions [15]–[18]. Nevertheless, MSE-based adaptive filtering algorithms may perform poorly in non-Gaussian environments where the signals are acquired in the presence of heavy-tailed impulsive background noise [19], [20]. To improve the convergence performance of MSE-based algorithms, some alternative criteria beyond MSE have been considered [21]–[23]. In recent years, a new cost function, the lncosh cost, has been successfully applied to non-Gaussian signal processing [22], [24], [25]. When the parameter λ of the lncosh function tends to infinity, the lncosh function is equivalent to the mean absolute error (MAE), which can be used for combating impulsive noise. If λ is small, the lncosh function behaves like the MSE. Thus, the lncosh function is a mixture of MSE and MAE that is robust against impulsive noise.
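This limiting behavior is easy to check numerically. The sketch below (plain Python; `lam` stands for λ, and the evaluation points are arbitrary) compares the lncosh cost $\frac{1}{\lambda}\ln\cosh(\lambda e)$ against its small-error quadratic approximation $\lambda e^2/2$ (MSE-like) and its large-error linear approximation $|e| - \ln 2/\lambda$ (MAE-like):

```python
import math

def lncosh_cost(e, lam):
    """(1/lam) * ln(cosh(lam * e)), computed in an overflow-safe form:
    ln(cosh(x)) = |x| + ln(1 + exp(-2|x|)) - ln(2)."""
    x = abs(lam * e)
    return (x + math.log1p(math.exp(-2 * x)) - math.log(2)) / lam

lam = 1.0

# Small lam*|e|: quadratic (MSE-like) regime, cost ~ lam * e^2 / 2
e = 0.01
print(abs(lncosh_cost(e, lam) - lam * e * e / 2))   # negligible gap

# Large lam*|e|: linear (MAE-like) regime, cost ~ |e| - ln(2)/lam
e = 50.0
print(abs(lncosh_cost(e, lam) - (abs(e) - math.log(2) / lam)))  # negligible gap
```

The bounded slope of the linear regime is what limits the influence of large (impulsive) errors on the adaptation.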
In this brief, a nonnegative least lncosh (NNLlncosh) algorithm is proposed for nonnegativity-constrained system identification problems, especially in the presence of impulsive noise. The proposed NNLlncosh algorithm is derived, and an accurate steady-state excess MSE (EMSE) expression for it is presented in detail using a universal analysis model. Moreover, this framework can be widely used for the steady-state EMSE analysis of nonnegativity-constrained adaptive filters whose cost functions involve error nonlinearities. Simulation results illustrate the accuracy of the theoretical results and show that the proposed algorithm is very robust in the presence of impulsive noise, which proves the effectiveness and demonstrates the superior performance of the proposed NNLlncosh algorithm.
II. THE NONNEGATIVE LEAST LNCOSH ALGORITHM
A. Signal Model and the Lncosh Cost Function
Consider a universal adaptive estimation system in which an ideal output $d(n)$ is obtained via an unknown time-varying system, $d(n) = \mathbf{h}^T\mathbf{x}(n) + z(n)$, where $\mathbf{h} \in \mathbb{R}^{M\times 1}$ denotes the optimal parameter vector of the unknown system, $\mathbf{x}(n) = [x(n), x(n-1), \dots, x(n-M+1)]^T$ denotes an $M$-dimensional input vector, and $z(n)$ accounts for modeling errors and background noise. The estimation error is $e(n) = d(n) - \hat{\mathbf{h}}^T(n)\mathbf{x}(n)$, where $\hat{\mathbf{h}}^T(n)\mathbf{x}(n)$ is the observed filter output and $\hat{\mathbf{h}}(n) = [\hat{h}_1(n), \hat{h}_2(n), \dots, \hat{h}_M(n)]^T$ is the filter's weight vector. Various cost functions based on the error $e(n)$ have been designed to find the optimal estimate of the unknown system. In this brief, we consider the lncosh cost function.
1549-7747 © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.
It is given by
$$J\big(\hat{\mathbf{h}}(n)\big) = E\left\{\frac{1}{\lambda}\ln\big[\cosh\big(\lambda e(n)\big)\big]\right\},\tag{1}$$
where $\lambda > 0$, $\ln(\cdot)$ represents the natural logarithm, $E\{\cdot\}$ denotes the mathematical expectation, and $\cosh(\cdot)$ denotes the hyperbolic cosine function, given by $\cosh(x) = \frac{\exp(x)+\exp(-x)}{2}$. For practical feasibility, the expectation term in (1) is replaced by the instantaneous error [15], [22]. The gradient of $J(\hat{\mathbf{h}}(n))$ with respect to $\hat{\mathbf{h}}(n)$ is then
$$\begin{aligned}
\nabla_{\hat{\mathbf{h}}} J\big(\hat{\mathbf{h}}(n)\big) &= \frac{\partial}{\partial\hat{\mathbf{h}}(n)}\left\{\frac{1}{\lambda}\ln\big[\cosh\big(\lambda e(n)\big)\big]\right\}\\
&= \frac{1}{\lambda}\cdot\frac{1}{\cosh\big(\lambda e(n)\big)}\cdot\frac{\partial\cosh\big(\lambda e(n)\big)}{\partial\big(\lambda e(n)\big)}\cdot\frac{\partial\big(\lambda e(n)\big)}{\partial\hat{\mathbf{h}}(n)}\\
&= \frac{1}{\lambda}\cdot\frac{\sinh\big(\lambda e(n)\big)}{\cosh\big(\lambda e(n)\big)}\cdot\lambda\big(-\mathbf{x}(n)\big)\\
&= -\tanh\big(\lambda e(n)\big)\,\mathbf{x}(n).
\end{aligned}\tag{2}$$
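The scalar part of (2) can be sanity-checked with a finite-difference sketch (plain Python; λ and the test points are arbitrary): the derivative of $\frac{1}{\lambda}\ln\cosh(\lambda e)$ with respect to $e$ should equal $\tanh(\lambda e)$, with the factor $-\mathbf{x}(n)$ in (2) then coming from $\partial e(n)/\partial\hat{\mathbf{h}}(n)$.

```python
import math

lam = 2.0

def cost(e):
    """Instantaneous lncosh cost (1/lam) * ln(cosh(lam * e))."""
    return math.log(math.cosh(lam * e)) / lam

h = 1e-6
for e in (-1.3, -0.2, 0.0, 0.7, 2.5):
    numeric = (cost(e + h) - cost(e - h)) / (2 * h)   # central difference
    analytic = math.tanh(lam * e)                      # closed form from (2)
    assert abs(numeric - analytic) < 1e-6
print("gradient check passed")
```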
B. The Nonnegative Least Lncosh Algorithm
In [14], Chen et al. point out that nonnegativity is a desired constraint frequently encountered in the signal processing field. The nonnegativity constraint is often imposed on the weight vector of the system to avoid uninterpretable results. Based on optimization theory, the system identification problem under a nonnegativity constraint is formalized as
$$\mathbf{h}^{0} = \arg\min_{\hat{\mathbf{h}}(n)} J\big(\hat{\mathbf{h}}(n)\big) \quad \text{subject to}\quad \hat{h}_j(n) \ge 0,\; j \in \{1,2,\dots,M\},\tag{3}$$
where $J(\hat{\mathbf{h}}(n))$ is a continuously differentiable and strictly convex cost function, and $\mathbf{h}^{0}$ is the ideal solution of the optimization problem under the nonnegativity constraints.
To solve the optimization problem in (3), we consider the Lagrangian function given by
$$L\big(\boldsymbol{\lambda},\hat{\mathbf{h}}(n)\big) = J\big(\hat{\mathbf{h}}(n)\big) - \sum_{j=1}^{M}\lambda_j\,\hat{h}_j(n),\tag{4}$$
where $\boldsymbol{\lambda}$ is the nonnegative Lagrange multiplier vector. Furthermore, the optimal solutions of $\hat{\mathbf{h}}(n)$ and $\boldsymbol{\lambda}$, namely $\mathbf{h}^{0}$ and $\boldsymbol{\lambda}^{0}$, must satisfy the following Karush-Kuhn-Tucker (KKT) conditions [15]:
$$\nabla_{\hat{\mathbf{h}}} L\big(\boldsymbol{\lambda}^{0},\hat{\mathbf{h}}^{0}\big) = \mathbf{0},\tag{5a}$$
$$\lambda_j^{0}\,\hat{h}_j^{0} = 0,\quad j = 1,\dots,M,\tag{5b}$$
where the gradient operator $\nabla_{\hat{\mathbf{h}}}$ is computed with respect to $\hat{\mathbf{h}}(n)$.
Implementing the gradient calculation on (4) and combining (5a) and (5b) leads to
$$\hat{h}_j^{0}\Big[-\nabla_{\hat{\mathbf{h}}} J\big(\hat{\mathbf{h}}^{0}\big)\Big]_j = 0,\quad j = 1,\dots,M,\tag{6}$$
where the extra minus sign corresponds to taking a gradient descent on $J(\hat{\mathbf{h}}(n))$. Solving (6) can be regarded as solving $g(y) = 0$, where $g(y)$ is defined as the left-hand side of (6). Then, $g(y) = 0$ can be solved by considering the fixed-point problem $y = y + g(y)$ under some conditions on the function $g(\cdot)$. Based on the fixed-point iteration method proposed in [15], we obtain
$$\hat{h}_j(n+1) = \hat{h}_j(n) + \mu\, q_j\big(\hat{\mathbf{h}}(n)\big)\,\hat{h}_j(n)\Big[-\nabla_{\hat{\mathbf{h}}} J\big(\hat{\mathbf{h}}(n)\big)\Big]_j,\tag{7}$$
with step size $\mu > 0$, where $q_j(\hat{\mathbf{h}}(n)) > 0$ is a reweighting function of the update of vector $\hat{\mathbf{h}}(n)$.
Substituting (2) into (7), the NNLlncosh algorithm is obtained as
$$\hat{\mathbf{h}}(n+1) = \hat{\mathbf{h}}(n) + \mu f\big[e(n)\big]\,\mathbf{D}_x(n)\,\hat{\mathbf{h}}(n),\tag{8}$$
where $f[e(n)] = \tanh[\lambda e(n)]$ is a nonlinear function of $e(n)$, $\mathbf{D}_x(n)$ is a diagonal matrix whose entries are composed of $\mathbf{x}(n)$, and $q_j(\hat{\mathbf{h}}(n)) = 1$ is chosen for all $j$.
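A minimal system-identification sketch of update (8) in plain Python (the unknown system, input statistics, λ, and μ below are illustrative choices, not values from this brief):

```python
import math
import random

random.seed(0)
M, lam, mu = 4, 1.0, 0.01
h_true = [0.8, 0.5, 0.2, 0.0]     # unknown nonnegative system
h_hat = [0.25] * M                # nonnegative initialization
x = [0.0] * M                     # regressor x(n)

for n in range(20000):
    x = [random.gauss(0.0, 1.0)] + x[:-1]             # shift in a new input sample
    d = sum(hj * xj for hj, xj in zip(h_true, x)) + random.gauss(0.0, 0.05)
    e = d - sum(wj * xj for wj, xj in zip(h_hat, x))  # estimation error e(n)
    f = math.tanh(lam * e)                            # f[e(n)] in (8)
    # (8): h_hat(n+1) = h_hat(n) + mu * f[e(n)] * D_x(n) * h_hat(n)
    h_hat = [wj + mu * f * xj * wj for wj, xj in zip(h_hat, x)]

print([round(w, 2) for w in h_hat])   # approaches h_true; the zero entry decays toward 0
```

The multiplicative form of (8) keeps the weights nonnegative for sufficiently small step sizes, and an entry whose true value is zero decays toward zero rather than crossing it.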
III. STEADY-STATE EMSE OF THE NNLLNCOSH ALGORITHM
Here, the steady-state theoretical analysis of the NNLlncosh algorithm is presented using the Taylor series expansion method, which is extensively adopted for the steady-state analysis of adaptive filtering algorithms with error nonlinearities [9], [17], [22].
For the convergence analysis, we write the weight error vector as
$$\mathbf{c}(n) = \hat{\mathbf{h}}(n) - \mathbf{h}.\tag{9}$$
Assume the filter is stable and the step size $\mu$ is sufficiently small to guarantee that the algorithm converges, and define the mean weight estimate at steady state as $E[\hat{\mathbf{h}}(\infty)]$. Then, equation (9) can be rewritten as
$$\mathbf{c}(n) = \underbrace{\hat{\mathbf{h}}(n) - E\big[\hat{\mathbf{h}}(\infty)\big]}_{\tilde{\mathbf{c}}(n)} + \underbrace{E\big[\hat{\mathbf{h}}(\infty)\big] - \mathbf{h}}_{E[\mathbf{c}(\infty)]},\tag{10}$$
where $\tilde{\mathbf{c}}(n)$ denotes the weight error vector about the mean of the converged weights, and $E[\mathbf{c}(\infty)]$ denotes the mean weight error, which is named the asymptotic bias [26].
By using (10), the error signal $e(n)$ becomes
$$e(n) = z(n) - \tilde{\mathbf{c}}^{T}(n)\mathbf{x}(n) - E\big[\mathbf{c}^{T}(\infty)\big]\mathbf{x}(n) = \tilde{z}(n) - \tilde{\mathbf{c}}^{T}(n)\mathbf{x}(n) = \tilde{z}(n) - e_a(n),\tag{11}$$
where $\tilde{z}(n) = z(n) - E[\mathbf{c}^{T}(\infty)]\mathbf{x}(n)$ and $e_a(n) = \tilde{\mathbf{c}}^{T}(n)\mathbf{x}(n)$ are defined for better understanding of the subsequent derivation.
To obtain the theoretical expression of the EMSE, the following assumptions are used, which have been widely adopted for convergence analysis in the adaptive filtering framework [9], [22], [26], [27].
A1: The measurement noise $z(n)$ is zero-mean with variance $\sigma_z^2$, and it is independent of any other signal.
A2: $\mathbf{x}(n)$ is an independent, identically distributed zero-mean Gaussian process.
A3: $e_a(n)$ is a zero-mean Gaussian variable, and it is independent of $\tilde{z}(n)$.
A4: The filter is long enough that $\|\mathbf{x}(n)\|^2_{\mathbf{D}_{\hat{\mathbf{h}}}(n)}$ is asymptotically uncorrelated with $f^2[e(n)]$ at steady state.
The EMSE is a popular measure of steady-state behavior, defined as
$$\varsigma_{\mathrm{EMSE}}(n) = E\big[e^2(n)\big] - \sigma_z^2.\tag{12}$$
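In simulations, (12) is typically estimated by time-averaging $e^2(n) - \sigma_z^2$ over the steady-state portion of the error sequence. A sketch of that measurement on synthetic data (plain Python; the variances are arbitrary, and `e = z + e_a` with independent terms stands in for a converged filter's error):

```python
import random

random.seed(1)

def estimate_emse(errors, noise_var, discard=0):
    """Average e^2(n) - sigma_z^2 over the steady-state tail, per (12)."""
    tail = errors[discard:]
    return sum(e * e for e in tail) / len(tail) - noise_var

# Synthetic check: with e(n) = z(n) + e_a(n) independent, the estimate
# should recover E[e_a^2] (0.01 here).
sigma_z2 = 0.04
errs = [random.gauss(0.0, sigma_z2 ** 0.5) + random.gauss(0.0, 0.1)
        for _ in range(200_000)]
print(estimate_emse(errs, sigma_z2))   # close to 0.01
```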
Using A1 and substituting (11) into (12) yields
$$\varsigma_{\mathrm{EMSE}}(n) = \underbrace{\mathrm{trace}\Big\{\mathbf{R}_x E\big[\mathbf{c}(\infty)\big] E\big[\mathbf{c}^{T}(\infty)\big]\Big\}}_{\bar{\varsigma}} + \underbrace{E\Big[\big(\mathbf{x}^{T}(n)\tilde{\mathbf{c}}(n)\big)^{2}\Big]}_{\tilde{\varsigma}(n)} + 2E\Big[E^{T}\big[\mathbf{c}(\infty)\big]\mathbf{R}_x\tilde{\mathbf{c}}(n)\Big],\tag{13}$$
where $\mathbf{R}_x = E[\mathbf{x}(n)\mathbf{x}^{T}(n)]$ denotes the positive-definite covariance matrix of $\mathbf{x}(n)$, and $\tilde{\varsigma}(n) = E[e_a^2(n)]$.
Note that the term $\bar{\varsigma}$ in (13) is deterministic, and the third expectation converges to zero as $n \to \infty$. Thus, the steady-state EMSE satisfies $S = \lim_{n\to\infty}\varsigma_{\mathrm{EMSE}}(n) = \lim_{n\to\infty}\tilde{\varsigma}(n) + \bar{\varsigma}$, which means that obtaining the steady-state EMSE reduces to evaluating $\tilde{\varsigma}(n)$.
For easier analysis of $S$, the following definitions, occasionally applied in analyzing nonnegativity-constrained optimization problems, are adopted [26]. First, the weight indices are divided into two sets. The set $H^{+}$ includes all tap indices whose coefficient expectations are positive at steady state:
$$H^{+} = \big\{\, j : E\big[\hat{h}_j(\infty)\big] > 0 \,\big\}.\tag{14}$$
The other set $H^{0}$ contains the weight indices that converge in the mean to zero:
$$H^{0} = \big\{\, j : E\big[\hat{h}_j(\infty)\big] = 0 \,\big\}.\tag{15}$$
Define a diagonal matrix $\bar{\mathbf{D}}^{-1}_{\hat{\mathbf{h}}}(n)$ with diagonal elements
$$\Big[\bar{\mathbf{D}}^{-1}_{\hat{\mathbf{h}}}(n)\Big]_{jj} = \begin{cases} \dfrac{1}{\hat{h}_j(n)}, & j \in H^{+} \\ 0, & j \in H^{0}, \end{cases}\tag{16}$$
and a diagonal matrix $\bar{\mathbf{I}}$ with entries
$$\bar{I}_{jj} = \begin{cases} 1, & j \in H^{+} \\ 0, & j \in H^{0}. \end{cases}\tag{17}$$
Utilizing these definitions, the steady-state behavior of the NNLlncosh algorithm is discussed. Subtracting $E[\hat{\mathbf{h}}(\infty)]$ from both sides of (8), we get
$$\tilde{\mathbf{c}}(n+1) = \tilde{\mathbf{c}}(n) + \mu\tanh\big(\lambda e(n)\big)\,\mathbf{D}_x(n)\,\hat{\mathbf{h}}(n).\tag{18}$$
Then, considering the weighted square norm (a direct result of the energy conservation relation [9], [26], [28]), we have
$$E\Big[\big\|\tilde{\mathbf{c}}(n+1)\big\|^2_{\bar{\mathbf{D}}^{-1}_{\hat{\mathbf{h}}}(n)}\Big] = E\Big[\big\|\tilde{\mathbf{c}}(n)\big\|^2_{\bar{\mathbf{D}}^{-1}_{\hat{\mathbf{h}}}(n)}\Big] + 2\mu E\Big[\tilde{\mathbf{c}}^{T}(n)\bar{\mathbf{I}}\,\mathbf{x}(n) f\big[e(n)\big]\Big] + \mu^2 E\Big[\mathbf{x}^{T}(n)\bar{\mathbf{I}}\,\mathbf{D}_{\hat{\mathbf{h}}}(n)\,\mathbf{x}(n) f^2\big[e(n)\big]\Big],\tag{19}$$
where the weighted square norm $\|\mathbf{x}\|^2_{\bar{\mathbf{D}}^{-1}_{\hat{\mathbf{h}}}(n)} = \mathbf{x}^{T}\bar{\mathbf{D}}^{-1}_{\hat{\mathbf{h}}}(n)\,\mathbf{x}$ is often used in the convergence analysis of nonnegative algorithms.
At steady state, the averaged power of $\tilde{\mathbf{c}}(n)$ satisfies
$$\lim_{n\to\infty} E\Big[\big\|\tilde{\mathbf{c}}(n+1)\big\|^2_{\bar{\mathbf{D}}^{-1}_{\hat{\mathbf{h}}}(n)}\Big] = \lim_{n\to\infty} E\Big[\big\|\tilde{\mathbf{c}}(n)\big\|^2_{\bar{\mathbf{D}}^{-1}_{\hat{\mathbf{h}}}(n)}\Big].\tag{20}$$
Then, equation (19) at steady state becomes
$$2\mu\lim_{n\to\infty} E\Big[\tilde{\mathbf{c}}^{T}(n)\bar{\mathbf{I}}\,\mathbf{x}(n) f\big[e(n)\big]\Big] = -\mu^2\lim_{n\to\infty} E\Big[\mathbf{x}^{T}(n)\bar{\mathbf{I}}\,\mathbf{D}_{\hat{\mathbf{h}}}(n)\,\mathbf{x}(n) f^2\big[e(n)\big]\Big].\tag{21}$$
Herein, a Taylor series expansion of $f$ is used to simplify the calculation of the nonlinear term $f[e(n)]$. Expanding in $e_a(n)$ around $\tilde{z}(n)$ yields
$$f(e) = f\big(\tilde{z}(n) - e_a(n)\big) = f\big(\tilde{z}(n)\big) - f'\big(\tilde{z}(n)\big)\, e_a(n) + \frac{1}{2} f''\big(\tilde{z}(n)\big)\, e_a^2(n) + o\big(e_a^2(n)\big),\tag{22}$$
where $o(e_a^2(n))$ stands for higher-order terms. The higher-order terms can be neglected for small $\lambda$, and hence the expectation term on the left-hand side (LHS) of (21) can be approximated as
$$\lim_{n\to\infty} E\Big[\tilde{\mathbf{c}}^{T}(n)\bar{\mathbf{I}}\,\mathbf{x}(n) f\big[e(n)\big]\Big] = \lim_{n\to\infty} E\Big[f\big(\tilde{z}(n)\big)\,\tilde{\mathbf{c}}^{T}(n)\bar{\mathbf{I}}\,\mathbf{x}(n) - f'\big(\tilde{z}(n)\big)\,\tilde{\mathbf{c}}^{T}(n)\bar{\mathbf{I}}\,\mathbf{x}(n)\, e_a(n) + \frac{1}{2} f''\big(\tilde{z}(n)\big)\,\tilde{\mathbf{c}}^{T}(n)\bar{\mathbf{I}}\,\mathbf{x}(n)\, e_a^2(n)\Big].\tag{23}$$
Considering that $\lim_{n\to\infty} E\{\tilde{\mathbf{c}}^{T}(n)\bar{\mathbf{I}}\,\mathbf{x}(n)\} = \lim_{n\to\infty} E\{e_a(n)\}$, and then using A1 and A3, we get
$$\begin{aligned}
\lim_{n\to\infty} E\Big[\tilde{\mathbf{c}}^{T}(n)\bar{\mathbf{I}}\,\mathbf{x}(n) f\big[e(n)\big]\Big] &\approx E\big[f\big(\tilde{z}(n)\big)\big]\lim_{n\to\infty} E\big[e_a(n)\big] - E\big[f'\big(\tilde{z}(n)\big)\big]\lim_{n\to\infty} E\big[e_a^2(n)\big] + \frac{1}{2} E\big[f''\big(\tilde{z}(n)\big)\big]\lim_{n\to\infty} E\big[e_a^3(n)\big]\\
&\approx -E\big[f'\big(\tilde{z}(n)\big)\big]\,\tilde{\varsigma}(\infty).\tag{24}
\end{aligned}$$
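For $f(e) = \tanh(\lambda e)$, the derivatives used above are $f'(e) = \lambda\big(1-\tanh^2(\lambda e)\big)$ and $f''(e) = -2\lambda^2\tanh(\lambda e)\big(1-\tanh^2(\lambda e)\big)$; the second-order expansion (22) can be checked numerically (plain Python; the λ, $\tilde z$, and $e_a$ values are arbitrary):

```python
import math

lam, z = 1.5, 0.4                        # arbitrary lambda and z~(n)
t = math.tanh(lam * z)
f_val = t                                # f(z~)
fp = lam * (1 - t * t)                   # f'(z~)
fpp = -2 * lam * lam * t * (1 - t * t)   # f''(z~)

ea = 1e-2                                # small e_a(n)
exact = math.tanh(lam * (z - ea))        # f(z~ - e_a)
taylor = f_val - fp * ea + 0.5 * fpp * ea * ea
assert abs(exact - taylor) < 1e-5        # remainder is O(e_a^3)
print("Taylor check passed")
```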
Using A4, the expectation on the right-hand side of (21) can be rewritten as
$$\lim_{n\to\infty} E\Big[\mathbf{x}^{T}(n)\bar{\mathbf{I}}\,\mathbf{D}_{\hat{\mathbf{h}}}(n)\,\mathbf{x}(n) f^2\big[e(n)\big]\Big] \approx \lim_{n\to\infty} E\Big[\mathbf{x}^{T}(n)\bar{\mathbf{I}}\,\mathbf{D}_{\hat{\mathbf{h}}}(n)\,\mathbf{x}(n)\Big]\, E\Big[f^2\big[e(n)\big]\Big].\tag{25}$$
Following the same procedure as in (22) and using A3 again, the Taylor expansion of $f^2[e(n)]$ gives
$$\begin{aligned}
E\Big[f^2\big[e(n)\big]\Big] &\approx E\Big[f^2\big(\tilde{z}(n)\big) - 2 f\big(\tilde{z}(n)\big) f'\big(\tilde{z}(n)\big)\, e_a(n) + \Big(\big(f'\big(\tilde{z}(n)\big)\big)^2 + f\big(\tilde{z}(n)\big) f''\big(\tilde{z}(n)\big)\Big) e_a^2(n)\Big]\\
&\approx E\Big[f^2\big(\tilde{z}(n)\big)\Big] + E\Big[\big(f'\big(\tilde{z}(n)\big)\big)^2 + f\big(\tilde{z}(n)\big) f''\big(\tilde{z}(n)\big)\Big]\, E\big[e_a^2(n)\big].\tag{26}
\end{aligned}$$
Using (26) in (25), one can derive that
$$\lim_{n\to\infty} E\Big[\mathbf{x}^{T}(n)\bar{\mathbf{I}}\,\mathbf{D}_{\hat{\mathbf{h}}}(n)\,\mathbf{x}(n) f^2\big[e(n)\big]\Big] \approx \mathrm{Tr}\Big(E\big[\mathbf{D}_{\hat{\mathbf{h}}}(\infty)\big]\mathbf{R}_x\Big)\Big\{E\Big[f^2\big(\tilde{z}(n)\big)\Big] + E\Big[\big(f'\big(\tilde{z}(n)\big)\big)^2 + f\big(\tilde{z}(n)\big) f''\big(\tilde{z}(n)\big)\Big]\,\varsigma_{\mathrm{EMSE}}(\infty)\Big\}.\tag{27}$$