
Integration-Enhanced Zhang Neural Network for Real-Time-Varying Matrix Inversion in the Presence of Various Kinds of Noises

Long Jin, Yunong Zhang, Member, IEEE, and Shuai Li, Member, IEEE

Abstract—Matrix inversion often arises in the fields of science and engineering. Many models for matrix inversion usually assume that the solving process is free of noises or that the denoising has been conducted before the computation. However, time is precious for the real-time-varying matrix inversion in practice, and any preprocessing for noise reduction may consume extra time, possibly violating the requirement of real-time computation. Therefore, a new model for time-varying matrix inversion that is able to handle the noises simultaneously is urgently needed. In this paper, an integration-enhanced Zhang neural network (IEZNN) model is first proposed and investigated for real-time-varying matrix inversion. Then, the conventional ZNN model and the gradient neural network model are presented and employed for comparison. In addition, theoretical analyses show that the proposed IEZNN model has the global exponential convergence property. Moreover, in the presence of various kinds of noises, the proposed IEZNN model is proven to have an improved performance. That is, the proposed IEZNN model converges to the theoretical solution of the time-varying matrix inversion problem no matter how large the matrix-form constant noise is, and the residual errors of the proposed IEZNN model can be arbitrarily small for time-varying noises and random noises. Finally, three illustrative simulation examples, including an application to the inverse kinematic motion planning of a robot manipulator, are provided and analyzed to substantiate the efficacy and superiority of the proposed IEZNN model for real-time-varying matrix inversion.

Index Terms—Integration-enhanced Zhang neural network (IEZNN), random noise, real-time-varying matrix inversion, residual error, theoretical analysis.

I. INTRODUCTION

VIEWED AS an essential step of many solutions, online matrix inversion often arises in mathematics and control theory, and finds its applications in communications [1], machine learning [2], and robotics [3], [4].

Manuscript received March 19, 2015; revised November 2, 2015; accepted November 2, 2015. This work was supported in part by the National Natural Science Foundation of China under Grant 61473323 and Grant 61401385, in part by the Foundation of Key Laboratory of Autonomous Systems and Networked Control through the Ministry of Education, China, under Grant 2013A07, in part by the Science and Technology Program, Guangzhou, China, under Grant 2014J4100057, in part by the Hong Kong Research Grants Council Early Career Scheme under Grant 25214015, and in part by the Departmental General Research Fund, The Hong Kong Polytechnic University, Hong Kong, under Grant G.61.37.UA7L.

L. Jin and Y. Zhang are with the School of Information Science and Technology, Sun Yat-sen University (SYSU), Guangzhou 510006, China, also with the SYSU–Carnegie Mellon University (CMU) Shunde International Joint Research Institute, Shunde 528300, China, and also with the Key Laboratory of Autonomous Systems and Networked Control, Ministry of Education, Guangzhou 510640, China.

S. Li is with the Department of Computing, The Hong Kong Polytechnic University, Hong Kong.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TNNLS.2015.2497715

Because of its fundamental roles, much effort has been devoted to the fast and high-accuracy solution of the matrix inversion problem, and subsequently a great number of models have been proposed and investigated for matrix inversion. Generally speaking, the techniques for matrix inversion can be classified into two categories: 1) recursive (or iterative) methods and 2) direct methods [5]–[10]. On one hand, recursive methods, such as the gradient-based or Gauss–Seidel methods, start with a given initial value and recursively update the estimate to improve the approximate solution until an approximation to the theoretical solution is obtained with a desired accuracy [11], [12]. On the other hand, direct methods (e.g., Gaussian elimination and Cholesky decomposition) typically compute the solution in a finite sequence of operations and are able to generate exact solutions in the absence of rounding errors [6]. However, for problems with a large number of variables, the solving of nonlinear equations (including the matrix inversion problem) often has no choice but to resort to recursive methods, because direct methods are prohibitively expensive (or even impossible) in this case. In view of the fact that the minimal arithmetic operations of many serial-processing methods for matrix inversion are proportional to the cube of the matrix dimension, i.e., O(n^3) operations [3], [4], various parallel-processing computational models have been presented and investigated to speed up the processing.

In addition to the high-speed parallel-distributed processing property, neural networks can be readily implemented by hardware and, thus, have been applied widely in various fields [13]–[25]. Especially, a large number of recurrent neural network (RNN) models have been presented and investigated as powerful alternatives for online scientific problem solving (including matrix inversion) [14]–[25]. A kind of explicit dynamic RNN model based on gradient descent was proposed in [14] for the inversion of a static matrix, which is proven to be asymptotically stable and capable of computing a large-scale nonsingular inverse matrix in real time. By converting the generalized inverse problem into a matrix norm optimization problem, an RNN model for computing the Drazin inverse of a real matrix in real time was proposed in [15]. Following the network structure exploited in [14] and [15], another RNN model was proposed in [16], which is composed of a number of independent subnetworks corresponding to the columns of the Drazin inverse. Liu and Wang [17] presented a one-layer RNN model for solving nonsmooth optimization problems, of which the global convergence conditions were derived by means of the Lyapunov method and nonsmooth analysis.


TABLE I
COMPARISONS ON RESIDUAL ERROR AMONG DIFFERENT MODELS FOR REAL-TIME-VARYING MATRIX INVERSION

Shen and Wang [20] analyzed the robustness of an RNN model with time delays and additive noise, for which the upper bounds of the noise and time delays that preserve global exponential stability were derived. A class of memristor-based RNN models was presented in [22], and the predictable assumptions on the boundedness and Lipschitz continuity of activation functions were formulated. Besides, an RNN model was explored in [23] to explain and generate the winner-take-all competition, which features a simple expression and extends the case with a Euclidean norm term for neural interaction to the more general p-norm cases. For solving a problem, the RNN is often exploited by defining an ordinary differential equation (ODE), so that the solution of the objective problem corresponds to the stable equilibrium point of the dynamical ODE system [24]. Therefore, the solution of the RNN forms a continuous trajectory that starts from the initial point and ends at the solution of the original problem. A gradient-based RNN (GNN) model was presented and investigated in [10] for online time-varying matrix inversion with detailed theoretical analysis, which only approximately approaches the theoretical inverse of the time-varying matrix, instead of converging exactly. An implicit RNN model together with its electronic realization was proposed and explored in [25] for online static matrix inversion, which achieves superior convergence performance in comparison with the GNN model. It is worth noting that, different from solving a static problem where we are only interested in the final state, the solution of a time-varying problem requires the result at each time instant for real-time processing purposes. As a novel type of RNN specifically designed for solving time-varying problems, the conventional Zhang neural network (CZNN) is able to perfectly track the time-varying solution by exploiting the time derivative of time-varying parameters [4], [26], [27]. In implementations of RNNs, we usually assume that the process is free of all kinds of noises or external errors. However, there always exist some realization errors in hardware implementation, which can be deemed as constant noises. For example, the model-implementation error appears most frequently in the hardware realization. Moreover, the environmental interference as well as other external errors can be viewed as random noises. Simply speaking, all those noises have significant impacts on the accuracy of the RNN for matrix inversion, and in some cases, these noises cause failure of the solving task. In addition, it is preferable to integrate denoising with problem solving for real-time processing because of the nature of the problem to be solved. That is, time is precious for time-varying problem solving in practice, and any preprocessing for noise reduction may consume extra time, possibly violating the requirement of real-time computation. Therefore, it is worth investigating a modified model for time-varying matrix inversion that is inherently more tolerant to noises and able to handle them simultaneously.

The remainder of this paper is organized into four sections. The problem formulation and the integration-enhanced ZNN (IEZNN) model are presented in Section II. For comparison as well as for connection, the CZNN model and the GNN model are also introduced and investigated to solve the same problem in this section. Section III presents theoretical analyses to show that the proposed IEZNN model has a globally exponential convergence property. In addition, in the presence of various kinds of noises, the proposed IEZNN model is proven to be inherently tolerant to the noises. That is, the proposed IEZNN model converges to the theoretical solution of the time-varying matrix inversion problem no matter how large the matrix-form constant noise is, and the residual errors of the proposed IEZNN model can be arbitrarily small for time-varying noises and random noises. Section IV provides three illustrative simulation examples to substantiate the efficacy and superiority of the proposed IEZNN model for real-time-varying matrix inversion. Section V concludes this paper with final remarks. Before ending this section, comparisons among the GNN model, the CZNN model, and the IEZNN model for real-time-varying matrix inversion are listed in Table I for readers to see the advantages of the IEZNN model over other existing solutions, and the main contributions of this paper are pointed out below.

1) A novel IEZNN design formula is proposed and investigated for the first time in this paper, which is quite different from and superior to Zhang et al.'s previous design formula [3], [4].

2) Based on the novel design formula, a new IEZNN model is proposed for real-time-varying matrix inversion in the presence of various kinds of noises. Note that there is no neural network model published on real-time-varying matrix inversion with the capability of noise suppression.

3) Theoretical analyses and results are presented in this paper, which guarantee that the proposed IEZNN model globally converges to the exact real-time inverse of the time-varying matrix in an exponential manner. In addition, the proposed IEZNN model is proven to be inherently tolerant to various kinds of noises.

4) Computer simulation results are illustrated to substantiate the efficacy and superiority of the proposed IEZNN model for real-time-varying matrix inversion in the presence of various kinds of noises.


II. PROBLEM FORMULATION AND IEZNN SOLUTION

In order to lay a basis for further investigation, the problem formulation and the design procedure of the IEZNN model are presented in this section.

A. Problem Formulation

In this paper, we are concerned with the time-varying matrix inversion problem in the form of

A(t)X(t) - I = 0 \in \mathbb{R}^{n\times n}   (1)

where A(t) ∈ R^{n×n} is a smoothly time-varying coefficient matrix, I ∈ R^{n×n} is the identity matrix, and X(t) ∈ R^{n×n} is the time-varying unknown matrix to be obtained. The goal is to solve (1) for X(t) in real time and in an error-free manner (or, say, in a near-zero-error manner in practice) in spite of various kinds of noises. To lay a basis for further discussion, we make the assumptions that A(t) is nonsingular at any time instant t ∈ [0, +∞) and that A(t) together with its time derivative is uniformly bounded.
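As a side remark (a standard matrix-calculus identity, not stated explicitly in the paper): for a nonsingular, smoothly time-varying A(t), differentiating A(t)A^{-1}(t) = I gives

\frac{d}{dt}A^{-1}(t) = -A^{-1}(t)\,\dot{A}(t)\,A^{-1}(t)

which indicates that the exact time-varying inverse cannot be tracked without using the time-derivative information \dot{A}(t); this is precisely the information exploited by the ZNN-type models below.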

In the literature, gradient-based approaches and other traditional methods have been developed to compute the inverse of a static matrix, and they have been further explored in the time-varying case with considerable lagging errors [3], [10]. To eliminate the lagging errors and solve the time-varying matrix inversion in real time, the CZNN has been presented and employed with globally exponential convergence [3], [4]. However, considering the facts that the time derivative of the time-varying parameter A(t) plays an important and indispensable role in the CZNN and that various kinds of noises (e.g., the interference noise, the model-implementation error, and the random noises comprised of the circumstance noise and the measurement error) are inevitable, the CZNN may not work well in real situations with noises. To solve the time-varying matrix inversion problem in real time and accurately in spite of noises, an IEZNN design formula as well as its associated model is proposed in Section II-B.

B. IEZNN Model

To monitor the time-varying matrix-inversion process, we define the following matrix-valued indefinite error function:

E(t) = A(t)X(t) - I.

The time derivative of the error function should be made such that each entry e_{ij}(t), i, j = 1, ..., n, of E(t) converges to zero with the integration of e_{ij}(t) also being zero. Specifically, we can define the following evolution for E(t):

\dot{E}(t) = -\gamma E(t) - \lambda \int_0^t E(\tau)\,d\tau   (2)

where γ > 0 and λ > 0 are the design parameters to scale the convergence rate of the IEZNN. The novel IEZNN design formula (2) leads to the following implicit neural network model:

A(t)\dot{X}(t) = -\dot{A}(t)X(t) - \gamma\bigl(A(t)X(t) - I\bigr) - \lambda \int_0^t \bigl(A(\tau)X(\tau) - I\bigr)\,d\tau   (3)

where X(t), starting from an initial state X(0), is the state matrix corresponding to the theoretical solution of (1).
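For completeness, the step from design formula (2) to model (3) is only the substitution of the error function and its time derivative, E(t) = A(t)X(t) - I and \dot{E}(t) = \dot{A}(t)X(t) + A(t)\dot{X}(t), into (2):

\dot{A}(t)X(t) + A(t)\dot{X}(t) = -\gamma\bigl(A(t)X(t) - I\bigr) - \lambda \int_0^t \bigl(A(\tau)X(\tau) - I\bigr)\,d\tau

which, after moving \dot{A}(t)X(t) to the right-hand side, is exactly model (3).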

For comparison, the CZNN model for time-varying matrix inversion is directly given as [3]

A(t)\dot{X}(t) = -\dot{A}(t)X(t) - \gamma\bigl(A(t)X(t) - I\bigr).   (4)

In addition, the GNN model solving the time-varying matrix inversion problem is given as [10]

\dot{X}(t) = -\gamma A^{T}(t)A(t)X(t) + \gamma A^{T}(t).   (5)
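For readers who prefer a code-level view, the following is a minimal sketch (ours, not from the paper) of the right-hand side of GNN model (5) in Python/NumPy; the function name gnn_rhs and its calling convention are illustrative assumptions. Note that it uses no derivative information \dot{A}(t), which is the structural reason for its lagging error on time-varying problems.

import numpy as np

def gnn_rhs(t, x, A, gamma=10.0):
    """Vectorized right-hand side of GNN model (5): dX/dt = -gamma * A(t)^T (A(t) X - I)."""
    n = int(round(np.sqrt(x.size)))
    X = x.reshape(n, n)
    return (-gamma * A(t).T @ (A(t) @ X - np.eye(n))).ravel()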

Remark 1: Both IEZNN model (3) and CZNN model (4) are given in implicit dynamics, each of which can be reformulated as an interconnected system with local neural dynamics and can be solved efficiently. For simulations run in MATLAB, the Kronecker product and the vectorization techniques [26], [28], [29] can be exploited to transform the matrix-form IEZNN model (3) into vector-form differential equations. In addition, via the MATLAB routine ode45, IEZNN model (3) can be treated as an initial-value ODE problem with a mass matrix, so that it can be simulated and computed more easily and effectively. Besides, it is worth noting that, with simple operations, the implicit systems can be transformed into explicit systems, if necessary; a simulation sketch in this spirit is given below.
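As a concrete illustration of Remark 1, the following sketch simulates IEZNN model (3) numerically. It is an assumption-laden example rather than the authors' code: SciPy's solve_ivp is used as a stand-in for MATLAB's ode45, the integral term is handled by augmenting the state with the running integral Y(t) = \int_0^t (A(\tau)X(\tau) - I)\,d\tau, and the rotation-type matrix of example (12) in Section IV is borrowed as the test coefficient matrix.

import numpy as np
from scipy.integrate import solve_ivp

n, gamma, lam = 2, 10.0, 10.0
I = np.eye(n)

def A(t):      # time-varying coefficient matrix, same form as example (12)
    return np.array([[np.sin(t),  np.cos(t)],
                     [-np.cos(t), np.sin(t)]])

def A_dot(t):  # its time derivative
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def iezn_rhs(t, z):
    X = z[:n*n].reshape(n, n)          # neural state X(t)
    Y = z[n*n:].reshape(n, n)          # running integral of the error E(t)
    E = A(t) @ X - I
    # implicit dynamics (3): A(t) dX/dt = -Adot(t) X - gamma*E - lambda*Y
    X_dot = np.linalg.solve(A(t), -A_dot(t) @ X - gamma * E - lam * Y)
    return np.concatenate([X_dot.ravel(), E.ravel()])

X0 = np.random.uniform(-2.0, 2.0, (n, n))          # random initial state in [-2, 2]^{2x2}
z0 = np.concatenate([X0.ravel(), np.zeros(n*n)])
sol = solve_ivp(iezn_rhs, (0.0, 10.0), z0, rtol=1e-8, atol=1e-8)

X_end = sol.y[:n*n, -1].reshape(n, n)
print("residual ||A(t)X(t) - I||_F at t = 10:", np.linalg.norm(A(10.0) @ X_end - I))

Solving for \dot{X}(t) with np.linalg.solve plays the role of the mass matrix in the ode45 formulation; it is a simulation device only and does not require knowing A^{-1}(t) in closed form.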

To lay a basis for further investigation on the robustness of the proposed IEZNN model (3) under the pollution of unknown noises, we have the following noise-polluted equation:

A(t)\dot{X}(t) = -\dot{A}(t)X(t) - \gamma\bigl(A(t)X(t) - I\bigr) - \lambda \int_0^t \bigl(A(\tau)X(\tau) - I\bigr)\,d\tau + W(t)   (6)

where W(t) ∈ R^{n×n} denotes the matrix-form noises, such as the constant implementation error, the time-varying bias error, the fast-varying noises, the random noises, or their superposition. Note that any preprocessing for noise reduction may consume extra time, possibly violating the requirement of real-time computation. The proposed IEZNN model is able to suppress various kinds of noises and compute the time-varying matrix inverse simultaneously. The corresponding theoretical analyses and results are presented in Section III.
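Continuing the hypothetical sketch given after Remark 1 (and reusing its A, A_dot, gamma, lam), the noise-polluted dynamics (6) only add a matrix-form term W(t) to the right-hand side; the constant noise below (each element equal to 10) mirrors the setting later used in Fig. 3.

def iezn_noisy_rhs(t, z, W=lambda t: 10.0 * np.ones((2, 2))):
    X, Y = z[:4].reshape(2, 2), z[4:].reshape(2, 2)
    E = A(t) @ X - np.eye(2)
    # noise-polluted dynamics (6): A(t) dX/dt = -Adot X - gamma*E - lambda*Y + W(t)
    X_dot = np.linalg.solve(A(t), -A_dot(t) @ X - gamma * E - lam * Y + W(t))
    return np.concatenate([X_dot.ravel(), E.ravel()])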

III. THEORETICAL ANALYSES AND RESULTS

For the CZNN model designed for real-time-varying matrix inversion, it has been proven that it converges to the theoretical solution globally and exponentially [3]. In this section, we prove that the proposed IEZNN model (3) also globally and exponentially converges to the theoretical solution. In addition, in the presence of unknown matrix-form constant noise, the proposed IEZNN model (3) is proven to be convergent to the theoretical solution. Moreover, for the time-varying matrix noises and random noises, the steady-state residual error lim_{t→∞} ||E(t)||_F of the proposed IEZNN model (3) can be arbitrarily small for a large enough γ with a suitable λ.

A. Convergence of IEZNN

In this section, two theorems are presented to investigate the convergence performance of IEZNN model (3) without noise.


Theorem 1: IEZNN model (3) globally converges to the theoretical solution of time-varying matrix inversion problem (1), or, say, the theoretical time-varying matrix inverse of (1).

Proof: \dot{E}(t) = -\gamma E(t) - \lambda \int_0^t E(\tau)\,d\tau is a compact matrix form of the following set of n^2 equations:

\dot{e}_{ij}(t) = -\gamma e_{ij}(t) - \lambda \int_0^t e_{ij}(\tau)\,d\tau, \quad \forall i, j \in \{1, \ldots, n\}.   (7)

Define a Lyapunov function candidate [30], [31] for the ij-th subsystem (7) as

v_{ij}(t) = e_{ij}^2(t) + \lambda \left( \int_0^t e_{ij}(\tau)\,d\tau \right)^{2}

which guarantees the positive definiteness of the Lyapunov function candidate v_{ij}(t); i.e., v_{ij}(t) > 0 for any e_{ij}(t) ≠ 0 or ∫_0^t e_{ij}(τ)dτ ≠ 0, and v_{ij}(t) = 0 only for e_{ij}(t) = ∫_0^t e_{ij}(τ)dτ = 0. The time derivative of the Lyapunov function candidate can be obtained as

\frac{d v_{ij}(t)}{dt} = 2 e_{ij}(t)\dot{e}_{ij}(t) + 2\lambda e_{ij}(t)\int_0^t e_{ij}(\tau)\,d\tau = -2\gamma e_{ij}^2(t) \le 0.

Based on Lyapunov theory, it can be concluded that e_{ij}(t) of the ij-th subsystem (7) globally converges to zero. In addition, it can be generalized and concluded that e_{ij}(t) globally converges to zero for any i, j ∈ {1, ..., n}. Therefore, it can be summarized and generalized that E(t) globally converges to zero. That is, IEZNN model (3) globally converges to the theoretical solution of time-varying matrix inversion problem (1). The proof on global convergence is thus completed.

It is worth investigating here the convergence speed of the proposed IEZNN model (3), and thus we have the following theorem.

Theorem 2: IEZNN model (3) exponentially converges to the theoretical solution of time-varying matrix inversion problem (1).

Proof: Let \varepsilon(t) = \int_0^t E(\tau)\,d\tau, and let e_{ij}(t), \varepsilon_{ij}(t), \dot{\varepsilon}_{ij}(t), and \ddot{\varepsilon}_{ij}(t) be the ij-th elements of E(t), \varepsilon(t), \dot{\varepsilon}(t), and \ddot{\varepsilon}(t), respectively. The ij-th subsystem of the second-order dynamical system \dot{E}(t) = -\gamma E(t) - \lambda \int_0^t E(\tau)\,d\tau can be rewritten as

\ddot{\varepsilon}_{ij}(t) = -\gamma \dot{\varepsilon}_{ij}(t) - \lambda \varepsilon_{ij}(t).   (8)

Let \theta_1 = (-\gamma + (\gamma^2 - 4\lambda)^{1/2})/2 and \theta_2 = (-\gamma - (\gamma^2 - 4\lambda)^{1/2})/2. In view of the initial values \varepsilon_{ij}(0) = 0 and \dot{\varepsilon}_{ij}(0) = e_{ij}(0), the analytical solution to (8) falls into one of the following three situations.

1) For \theta_1 \ne \theta_2 with \theta_1 and \theta_2 being real numbers, i.e., \gamma^2 > 4\lambda, we obtain

\varepsilon_{ij}(t) = \frac{e_{ij}(0)\bigl(\exp(\theta_1 t) - \exp(\theta_2 t)\bigr)}{\sqrt{\gamma^2 - 4\lambda}}

and further obtain

e_{ij}(t) = \frac{e_{ij}(0)\bigl(\theta_1\exp(\theta_1 t) - \theta_2\exp(\theta_2 t)\bigr)}{\sqrt{\gamma^2 - 4\lambda}}.

In addition, the matrix-form error can be generalized as

E(t) = \frac{E(0)\bigl(\theta_1\exp(\theta_1 t) - \theta_2\exp(\theta_2 t)\bigr)}{\sqrt{\gamma^2 - 4\lambda}}.

2) For \theta_1 = \theta_2, i.e., \gamma^2 = 4\lambda, we obtain

\varepsilon_{ij}(t) = e_{ij}(0)\, t \exp(\theta_1 t)

and further obtain

e_{ij}(t) = e_{ij}(0)\exp(\theta_1 t) + e_{ij}(0)\,\theta_1 t \exp(\theta_1 t).

In addition, the matrix-form error can be generalized as

E(t) = E(0)\exp(\theta_1 t) + E(0)\,\theta_1 t \exp(\theta_1 t).

3) For \theta_1 = \alpha + i\beta and \theta_2 = \alpha - i\beta being conjugate complex numbers, i.e., \gamma^2 < 4\lambda, we obtain

\varepsilon_{ij}(t) = e_{ij}(0)\sin(\beta t)\exp(\alpha t)/\beta

and further obtain

e_{ij}(t) = e_{ij}(0)\exp(\alpha t)\bigl(\alpha \sin(\beta t)/\beta + \cos(\beta t)\bigr).

In addition, the matrix-form error can be generalized as

E(t) = E(0)\exp(\alpha t)\bigl(\alpha \sin(\beta t)/\beta + \cos(\beta t)\bigr).

Summarizing the previous analysis of the three situations and according to [32, Proof of Theorem 1], we come to the conclusion that, starting from any initial condition, IEZNN model (3) exponentially converges to the theoretical solution of time-varying matrix inversion problem (1). The proof is thus completed.

Remark 2: As shown in Theorem 2, the residual error of IEZNN model (3) for time-varying matrix inversion (1) is exponentially convergent to zero. In general, a fast convergence rate requires a sufficiently large γ and a suitable λ. In fact, within the time period of 4/γ s for the situation of γ² < 4λ, |e_{ij}(t)| [being the absolute value of the ij-th element of E(t)] would be less than 1.85% of |γ e_{ij}(0)|, ∀i, j ∈ {1, 2, ..., n}. That is to say, when γ = 400, |e_{ij}(t)| is less than 7.4 × |e_{ij}(0)| within 0.01 s and less than 1.7 × 10^{-15} × |e_{ij}(0)| within 0.1 s. Note that floating-point numbers have limited precision in computers; e.g., the minimum precision of a floating-point number (eps) in the MATLAB environment is of order 10^{-16} (i.e., 2^{-52}). Thus, this result is appropriate in practical applications. With the residual error being less than 1.7 × 10^{-15} × |e_{ij}(0)| within 0.1 s, the time-varying matrix inversion problem (1) can be viewed as being solved pointwise in time t.
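Viewed in standard second-order-system terms (our paraphrase, following the paper's notation), the three situations in the above proof are simply the over-damped, critically damped, and under-damped regimes of subsystem (8), whose characteristic equation and roots are

s^2 + \gamma s + \lambda = 0, \qquad \theta_{1,2} = \frac{-\gamma \pm \sqrt{\gamma^2 - 4\lambda}}{2}.

Since γ > 0 and λ > 0, the real parts of θ_1 and θ_2 are always negative, which is another way of seeing the exponential decay of E(t) in all three cases.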

Remark 3: In terms of time-varying matrix inversion problem (1), IEZNN model (3) is an equivalent expansion of IEZNN design formula (2). For a better understanding of the proposed IEZNN design formula (2), the role of each term in IEZNN design formula (2) can be interpreted from the viewpoint of control, with its realization represented as the control system shown in Fig. 1. From Fig. 1, we can find that the IEZNN model (3) can be deemed as a nonlinear proportional–integral–derivative controller with -γ(A(t)X(t) - I) as the proportional part, -λ∫_0^t (A(τ)X(τ) - I)dτ as the integral part, and -\dot{A}(t)X(t) as the derivative part.

Fig. 1. Realization of the IEZNN model (3) for solving time-varying matrix inversion problem (1), represented as a control system. As the objective of the control, the plant represents the problem to be solved, and the controller represents the IEZNN model (3).

For such a control system, it has been proven in Theorems 1 and 2 that E(t) globally and exponentially converges to zero, which means that IEZNN design formula (2) has the property of global exponential stability.

B. IEZNN in the Presence of Noises

In this section, three theorems are presented to investigate the performance of IEZNN model (3) in the presence of various kinds of noises.

1) Constant Noise: For the constant noise, we have the following theorem.

Theorem 3: The noise-polluted IEZNN model (6) converges to the theoretical solution of (1) globally, no matter how large the unknown matrix-form constant noise W(t) = W ∈ R^{n×n} is.

Proof: Applying the Laplace transformation [33] to the ij-th subsystem of the noise-polluted IEZNN model (6) leads to

s e_{ij}(s) - e_{ij}(0) = -\gamma e_{ij}(s) - \frac{\lambda}{s} e_{ij}(s) + w_{ij}(s).   (9)

That is,

e_{ij}(s) = \frac{s\bigl(e_{ij}(0) + w_{ij}(s)\bigr)}{s^2 + s\gamma + \lambda}.   (10)

Evidently, the transfer function is s/(s² + sγ + λ), with its poles being s_1 = (-γ + (γ² - 4λ)^{1/2})/2 and s_2 = (-γ - (γ² - 4λ)^{1/2})/2. For γ > 0 and λ > 0, one can conclude that these two poles are located in the left half-plane, which implies the stability of this system. For such a stable system, the final value theorem applies. Notice that w_{ij}(s) = w_{ij}/s, as w_{ij}(t) = w_{ij} amounts to a step signal for the constant matrix W. Applying the final value theorem [33] to (10), we have

\lim_{t\to\infty} e_{ij}(t) = \lim_{s\to 0} s\, e_{ij}(s) = \lim_{s\to 0} \frac{s^2\bigl(e_{ij}(0) + w_{ij}/s\bigr)}{s^2 + s\gamma + \lambda} = 0.

Therefore, it can be concluded that lim_{t→∞} ||E(t)||_F = 0. The proof is thus completed.
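The final-value computation above can be checked symbolically; the short SymPy snippet below (ours, for verification only) reproduces the zero limit for a constant (step) noise whose Laplace transform is w/s.

import sympy as sp

s, gamma, lam, e0, w = sp.symbols('s gamma lambda e0 w', positive=True)
e_s = s * (e0 + w / s) / (s**2 + gamma * s + lam)   # eq. (10) with w_ij(s) = w/s
print(sp.limit(s * e_s, s, 0))                      # prints 0, matching Theorem 3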

2) Linear Noise: However, in practical applications, the noises can be linear, e.g., a time-varying matrix with each element being w_{ij} t, where w_{ij} is a constant. To further demonstrate the superiority of the proposed IEZNN model (3), we have the following theorem.

Theorem 4: Consider the noise-polluted IEZNN model (6) with the unknown matrix-form linear time-varying noise W(t) = Wt ∈ R^{n×n}. The noise-polluted IEZNN model (6) converges toward the theoretical solution of (1), with the upper bound of the steady-state residual error lim_{t→∞} ||E(t)||_F being ||W||_F/λ. Furthermore, the steady-state residual error lim_{t→∞} ||E(t)||_F decreases to zero as λ tends to positive infinity.

Proof: Applying the Laplace transformation [33] to the ij-th subsystem of the noise-polluted IEZNN model (6) leads to

s e_{ij}(s) - e_{ij}(0) = -\gamma e_{ij}(s) - \frac{\lambda}{s} e_{ij}(s) + \frac{w_{ij}}{s^2}

where w_{ij}/s^2 is the Laplace transformation of w_{ij} t. Applying the final value theorem to the above equation, we have

\lim_{t\to\infty} e_{ij}(t) = \lim_{s\to 0} s\, e_{ij}(s) = \lim_{s\to 0} \frac{s^2\bigl(e_{ij}(0) + w_{ij}/s^2\bigr)}{s^2 + s\gamma + \lambda} = \frac{w_{ij}}{\lambda}.

Therefore, it can be readily concluded that

\lim_{t\to\infty} \|E(t)\|_F = \frac{\|W\|_F}{\lambda}.

In addition, we have lim_{t→∞} ||E(t)||_F → 0 for λ → ∞. The proof is thus completed.

Note that, for the CZNN model (4), both theoretical and simulation results show that the residual error goes to infinity in the presence of the unknown matrix-form linear time-varying noise. Thus, the CZNN model (4) cannot handle such a kind of noise, which demonstrates the superiority of the proposed IEZNN model (3).
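An analogous symbolic check (again ours, not from the paper) for the linear noise of Theorem 4, whose Laplace transform is w/s^2, yields the nonzero final value w/λ:

import sympy as sp

s, gamma, lam, e0, w = sp.symbols('s gamma lambda e0 w', positive=True)
e_s = s * (e0 + w / s**2) / (s**2 + gamma * s + lam)
print(sp.limit(s * e_s, s, 0))                      # prints w/lambda, matching Theorem 4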

3) Bounded Random Noise: It is worth investigating here the performance of the proposed IEZNN model (6) in the presence of nonlinear time-varying noises or even random noises. For nonlinear fast time-varying noises, a low-pass filter can be applied to tackle the error compensation problem. However, as stated in Section I, the preprocessing for noise reduction may consume extra time and possibly violates the requirement of real-time computation. Note that the nonlinear time-varying noise can be deemed as a random noise in the time-varying matrix inversion process, and we have the following theorem on the performance of the proposed IEZNN model (6) in the presence of unknown matrix-form random noise.

Theorem 5: The residual error ||E(t)||_F of the noise-polluted IEZNN model (6) is bounded for the bounded unknown matrix-form random noise W(t) = σ(t) ∈ R^{n×n}. In addition, the steady-state residual error lim_{t→∞} ||E(t)||_F of the noise-polluted IEZNN model (6) is bounded by 2n sup_{0≤τ≤t} |σ_{ij}(τ)|/(γ² - 4λ)^{1/2} for γ² > 4λ, or by 4nλ sup_{0≤τ≤t} |σ_{ij}(τ)|/(γ(4λ - γ²)^{1/2}) for γ² < 4λ, with σ_{ij}(t) denoting the ij-th element of σ(t). That is, the upper bound of lim_{t→∞} ||E(t)||_F is approximately in inverse proportion to γ, and the steady-state residual error lim_{t→∞} ||E(t)||_F can be arbitrarily small for a large enough γ with a suitable λ.

Proof: Rewrite the system polluted by the bounded unknown matrix-form random noise as

\dot{E}(t) = -\gamma E(t) - \lambda \int_0^t E(\tau)\,d\tau + \sigma(t)

of which the ij-th (∀i, j ∈ {1, ..., n}) subsystem can be written as

\dot{e}_{ij}(t) = -\gamma e_{ij}(t) - \lambda \int_0^t e_{ij}(\tau)\,d\tau + \sigma_{ij}(t).   (11)

According to the values of γ and λ, the analyses can be divided into the following three situations.

1) For γ² > 4λ, the solution to subsystem (11) can be obtained as

e_{ij}(t) = \frac{e_{ij}(0)\bigl(\theta_1\exp(\theta_1 t) - \theta_2\exp(\theta_2 t)\bigr)}{\theta_1 - \theta_2} + \frac{1}{\theta_1 - \theta_2}\int_0^t \bigl(\theta_1\exp(\theta_1(t-\tau)) - \theta_2\exp(\theta_2(t-\tau))\bigr)\,\sigma_{ij}(\tau)\,d\tau

where θ_1 and θ_2 are defined the same as in Theorem 2, i.e., θ_{1,2} = (-γ ± (γ² - 4λ)^{1/2})/2. From the triangle inequality, we have

|e_{ij}(t)| \le \frac{\bigl|e_{ij}(0)\bigl(\theta_1\exp(\theta_1 t) - \theta_2\exp(\theta_2 t)\bigr)\bigr|}{\theta_1 - \theta_2} + \frac{\int_0^t |\theta_1\exp(\theta_1(t-\tau))|\,|\sigma_{ij}(\tau)|\,d\tau}{\theta_1 - \theta_2} + \frac{\int_0^t |\theta_2\exp(\theta_2(t-\tau))|\,|\sigma_{ij}(\tau)|\,d\tau}{\theta_1 - \theta_2}.

We further have

|e_{ij}(t)| \le \frac{\bigl|e_{ij}(0)\bigl(\theta_1\exp(\theta_1 t) - \theta_2\exp(\theta_2 t)\bigr)\bigr|}{\theta_1 - \theta_2} + \frac{2}{\theta_1 - \theta_2}\max_{0\le\tau\le t}|\sigma_{ij}(\tau)| = \frac{\bigl|e_{ij}(0)\bigl(\theta_1\exp(\theta_1 t) - \theta_2\exp(\theta_2 t)\bigr)\bigr|}{\theta_1 - \theta_2} + \frac{2}{\sqrt{\gamma^2 - 4\lambda}}\max_{0\le\tau\le t}|\sigma_{ij}(\tau)|.

Finally, we have

\lim_{t\to\infty}\|E(t)\|_F \le \frac{2n}{\sqrt{\gamma^2 - 4\lambda}}\sup_{0\le\tau\le t}|\sigma_{ij}(\tau)|.

2) For γ² = 4λ, the solution to subsystem (11) can be obtained as

e_{ij}(t) = e_{ij}(0)\, t\,\theta_1\exp(\theta_1 t) + e_{ij}(0)\exp(\theta_1 t) + \int_0^t (t-\tau)\,\theta_1\exp(\theta_1(t-\tau))\,\sigma_{ij}(\tau)\,d\tau + \int_0^t \exp(\theta_1(t-\tau))\,\sigma_{ij}(\tau)\,d\tau

where θ_1 is defined the same as in Theorem 2, i.e., θ_1 = (-γ + (γ² - 4λ)^{1/2})/2 = -γ/2. According to [32, Proof of Theorem 1], there exist μ > 0 and ν > 0 such that

|\theta_1|\, t \exp(\theta_1 t) \le \mu \exp(-\nu t).

Thus, based on the above inequality as well as the triangle inequality, we have

|e_{ij}(t)| \le \bigl|e_{ij}(0)\bigl(\theta_1 t \exp(\theta_1 t) + \exp(\theta_1 t)\bigr)\bigr| + \int_0^t |\mu\exp(-\nu(t-\tau))|\,|\sigma_{ij}(\tau)|\,d\tau + \int_0^t |\exp(\theta_1(t-\tau))|\,|\sigma_{ij}(\tau)|\,d\tau.

We further have

|e_{ij}(t)| \le \bigl|e_{ij}(0)\bigl(\theta_1 t \exp(\theta_1 t) + \exp(\theta_1 t)\bigr)\bigr| + \left(\frac{\mu}{\nu} - \frac{1}{\theta_1}\right)\max_{0\le\tau\le t}|\sigma_{ij}(\tau)|.

Finally, we have

\lim_{t\to\infty}\|E(t)\|_F \le \left(\frac{\mu}{\nu} - \frac{1}{\theta_1}\right) n \sup_{0\le\tau\le t}|\sigma_{ij}(\tau)|.

3) For γ² < 4λ, the solution to subsystem (11) can be obtained as

e_{ij}(t) = e_{ij}(0)\exp(\alpha t)\bigl(\alpha\sin(\beta t)/\beta + \cos(\beta t)\bigr) + \int_0^t \bigl(\alpha\sin(\beta(t-\tau))\exp(\alpha(t-\tau))/\beta + \cos(\beta(t-\tau))\exp(\alpha(t-\tau))\bigr)\,\sigma_{ij}(\tau)\,d\tau

where α and β are defined the same as in Theorem 2, i.e., α = -γ/2 and β = (4λ - γ²)^{1/2}/2.


Fig. 2. Simulation results of the IEZNN model (3) with six randomly generated initial states X_0 ∈ [-2, 2]^{2×2} for solving the time-varying inverse of matrix (12). (a) Neural states of the IEZNN model (3) with γ = 10 and λ = 10, i.e., θ_{1,2} = -5 ± √15, where the red dashed-dotted curves correspond to the theoretical solution and the blue solid curves correspond to the neural-network solutions. (b) Residual errors ||E(t)||_F = ||A(t)X(t) - I||_F of the IEZNN model (3) with γ = 10 and λ = 10. (c) Residual errors ||E(t)||_F of the IEZNN model (3) with γ = 100 and λ = 100, i.e., θ_{1,2} = -50 ± 20√6.

Thus, based on the triangle inequality, we can similarly have

|e_{ij}(t)| \le \bigl|e_{ij}(0)\exp(\alpha t)\bigl(\alpha\sin(\beta t)/\beta + \cos(\beta t)\bigr)\bigr| + \frac{\alpha^2 + \beta^2}{-\alpha\beta}\max_{0\le\tau\le t}|\sigma_{ij}(\tau)| = \bigl|e_{ij}(0)\exp(\alpha t)\bigl(\alpha\sin(\beta t)/\beta + \cos(\beta t)\bigr)\bigr| + \frac{4\lambda}{\gamma\sqrt{4\lambda - \gamma^2}}\max_{0\le\tau\le t}|\sigma_{ij}(\tau)|.

Finally, we have

\lim_{t\to\infty}\|E(t)\|_F \le \frac{4n\lambda}{\gamma\sqrt{4\lambda - \gamma^2}}\sup_{0\le\tau\le t}|\sigma_{ij}(\tau)|.

It is thus summarized and generalized from the above analysis of the three situations that, in the presence of the bounded unknown matrix-form random noise σ(t), the residual error ||E(t)||_F of the noise-polluted IEZNN model (6) is bounded. In addition, the steady-state residual error lim_{t→∞} ||E(t)||_F of the noise-polluted IEZNN model (6) is bounded by 2n sup_{0≤τ≤t} |σ_{ij}(τ)|/(γ² - 4λ)^{1/2} for γ² > 4λ, or by 4nλ sup_{0≤τ≤t} |σ_{ij}(τ)|/(γ(4λ - γ²)^{1/2}) for γ² < 4λ. That is, the upper bound of lim_{t→∞} ||E(t)||_F is approximately in inverse proportion to γ, and the steady-state residual error lim_{t→∞} ||E(t)||_F can be arbitrarily small for a large enough γ with a suitable λ. The proof is thus completed.

Remark 4: Any noise may be decomposed into the following three parts: 1) the constant part; 2) the linear time-varying part; and 3) the rest. Note that IEZNN design formula (2) is a linear system, and for such a system the principle of superposition applies. That is, the output steady-state residual error of IEZNN model (3) corresponding to the constant part is zero. Therefore, for a nonzero-mean random noise, a less conservative (tighter) output error bound can be obtained by separately considering the constant part and the rest.
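As a quick numerical sanity check of Theorem 5 (ours; a crude Euler discretization of the scalar subsystem (11) with illustrative parameters, not a simulation from the paper), the steady-state error under a bounded random noise stays well below the stated bound 2 sup|σ|/√(γ² - 4λ) for the scalar case n = 1:

import numpy as np

rng = np.random.default_rng(0)
gamma, lam, dt, T = 10.0, 10.0, 1e-4, 10.0          # gamma^2 > 4*lambda
e, integ = 1.0, 0.0                                  # scalar error and its running integral
for _ in range(int(T / dt)):
    sigma = rng.uniform(-0.5, 0.5)                   # bounded random noise, |sigma| <= 0.5
    e_dot = -gamma * e - lam * integ + sigma         # subsystem (11)
    integ += e * dt
    e += e_dot * dt
bound = 2 * 0.5 / np.sqrt(gamma**2 - 4 * lam)
print(abs(e), "<=", bound)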

IV. ILLUSTRATIVE EXAMPLES

In Section III, the IEZNN model (3) together with its convergence and robustness analyses has been presented. In this section, computer simulations based on two time-varying matrices are provided to verify the efficacy and superiority of the proposed IEZNN model (3) for real-time-varying matrix inversion in the presence of various kinds of noises.

A. Example 1

In this example, the following time-varying matrix A(t) is considered for illustration and comparison, which is the same matrix as in [3]:

A(t) = \begin{bmatrix} \sin(t) & \cos(t) \\ -\cos(t) & \sin(t) \end{bmatrix} \in \mathbb{R}^{2\times 2}.   (12)

For checking the correctness of the proposed IEZNN model (3), the theoretical time-varying inverse of matrix (12) is directly given as

A^{-1}(t) = \begin{bmatrix} \sin(t) & -\cos(t) \\ \cos(t) & \sin(t) \end{bmatrix} \in \mathbb{R}^{2\times 2}.   (13)

The following four situations are considered in the corresponding computer simulations for time-varying matrix inversion (12): zero noise, constant noise, linear time-varying noise, and nonlinear time-varying noise or random noise. Now, let us first investigate the situation of zero noise.

1) Zero Noise: The simulation results synthesized by the proposed IEZNN model (3) for solving the time-varying inverse of matrix (12) are shown in Fig. 2. Specifically, as shown in Fig. 2(a), starting from six randomly generated initial states X_0 ∈ [-2, 2]^{2×2}, the state matrices of the proposed IEZNN model (3), denoted by the blue solid curves, all converge rapidly and accurately to the theoretical solution [i.e., A^{-1}(t)], denoted by the red dashed-dotted curves, within a rather short time. In addition, the trajectories of the residual error ||E(t)||_F = ||A(t)X(t) - I||_F of IEZNN model (3) with γ = 10 and λ = 10 are shown in Fig. 2(b), from which we can find that the residual errors of IEZNN model (3) diminish to zero within 6 s. As proven in Theorem 2, the IEZNN model (3) has an exponential convergence property, and its convergence can be expedited by increasing γ and λ. From Fig. 2(c), we can see that the residual errors of the IEZNN model (3) with γ = 100 and λ = 100 converge to zero within approximately 0.6 s, which verifies the exponential convergence property proven in Theorem 2.


Fig. 3. Residual errors ||E(t)||_F of the IEZNN model (3), the CZNN model (4), and the GNN model (5) with a randomly generated initial state X_0 ∈ [-2, 2]^{2×2} for solving the time-varying inverse of matrix (12) in the presence of matrix-form constant noise with each element being 10. (a) With γ = 10 for the three models and λ = 10 for the IEZNN model (3). (b) With γ = 100 for the three models and λ = 100 for the IEZNN model (3).

Moreover, these simulation results demonstrate the efficacy of IEZNN model (3) for solving the time-varying matrix inversion problem in the noise-free situation. Note that investigations on the convergence performance of CZNN model (4) and GNN model (5) for time-varying matrix inversion have been conducted in [3] and [10] and are thus omitted here. In short, as shown in [3] and [10], the residual error of CZNN model (4) converges to zero exponentially and globally, while GNN model (5) generates only approximate results for the time-varying inversion of matrix (12), with much larger lagging errors.

2) Constant Noise: In the implementation of an RNN, the corresponding model-implementation error is hard to avoid and can be viewed as a constant bias noise added to the RNN model. It is worth noting that the constant bias noise degrades the performance of some models [e.g., GNN model (5)], and they sometimes fail to solve the problem under a large constant bias noise. In particular, models for time-varying matrix inversion are more fragile to constant noise due to the high requirements on real-time performance.

For illustration as well as for comparison, each element of the matrix-form constant noise is set to be 10, and the corresponding simulation results are shown in Fig. 3. Specifically, Fig. 3(a) shows the residual errors ||E(t)||_F of the IEZNN model (3), the CZNN model (4), and the GNN model (5) with γ = 10 and λ = 10. As shown in Fig. 3, starting with the randomly generated initial state, the residual error ||E(t)||_F of the IEZNN model (3) diminishes to zero within 5 s, which verifies Theorem 3. On the contrary, the residual errors of the CZNN model (4) and the GNN model (5) do not converge to zero and remain at a relatively high level. It can be observed from Fig. 3(b) that, with γ = 100 and λ = 100, the residual error ||E(t)||_F of the IEZNN model (3) rapidly diminishes to zero within 0.5 s. Besides, the residual errors of the CZNN model (4) and the GNN model (5) with γ = 100 still do not converge to zero, even though each of them is smaller than that of the CZNN model (4) and the GNN model (5) with γ = 10.

3) Linear Time-Varying Noise: In this section, we investigate linear time-varying noise. The simulation results of the IEZNN model (3), the CZNN model (4), and the GNN model (5) under matrix-form linear time-varying noise with each element being t are shown in Fig. 4.

As visualized in Fig. 4(a), starting with a randomly generated initial state, the residual error of the IEZNN model (3) with γ = λ = 10 rapidly converges toward zero and remains stable around 0.2. In contrast, the residual errors of the CZNN model (4) and the GNN model (5) with γ = 10 have an increasing trend as time evolves, and each of them is 10 times larger than that of the IEZNN model (3) at t = 10 s, which further shows the superiority of the proposed IEZNN model (3). In addition, the residual error of the IEZNN model (3) with γ = λ = 100 is shown in Fig. 4(b), which also rapidly converges toward zero and remains stable at the order of 10^{-3}. Besides, the residual errors of the CZNN model (4) and the GNN model (5) with γ = 100 also have an increasing trend as time evolves, and each of them is around 100 times larger than that of the IEZNN model (3) at t = 10 s. In summary, these results verify Theorems 4 and 5 and show the superiority of the proposed IEZNN model (3) for solving time-varying matrix inversion problem (1).

4) Bounded Random Noise: In the solving process of real-time-varying matrix inversion problem (1), noise is an external error or undesired disturbance, which misdirects the conventional model to evolve along a wrong direction. Numerous methods have been presented and investigated for denoising, such as Wiener filtering and Kalman filtering as well as their extensions [34]–[36]. However, considering the facts that many types of noises may not satisfy the requirements of these denoising methods and that any preprocessing for noise reduction may consume extra time, possibly violating the requirement of real-time computation, conventional denoising methods may not be applicable to real-time-varying matrix inversion problem (1).


Fig. 4. Residual errors ||E(t)||_F of the IEZNN model (3), the CZNN model (4), and the GNN model (5) with a randomly generated initial state X_0 ∈ [-2, 2]^{2×2} for solving the time-varying inverse of matrix (12) under matrix-form linear time-varying noise with each element being t. (a) With γ = 10 for the three models and λ = 10 for the IEZNN model (3). (b) With γ = 100 for the three models and λ = 100 for the IEZNN model (3).

Fig. 5. Residual errors ||E(t)||_F of the IEZNN model (3), the CZNN model (4), and the GNN model (5) with randomly generated initial states X_0 ∈ [-2, 2]^{2×2} for solving the time-varying inverse of matrix (12) in the presence of matrix-form random noise σ(t) ∈ [18, 22]^{2×2}. (a) With γ = 10 for the three models and λ = 10 for the IEZNN model (3). (b) With γ = 100 for the three models and λ = 100 for the IEZNN model (3).

In addition, the nonlinear time-varying noises can be deemed as random noises. Therefore, it is worth investigating the performance of IEZNN model (3) in the presence of matrix-form random noises. The simulation results of the IEZNN model (3), the CZNN model (4), and the GNN model (5) with matrix-form random noise W(t) = σ(t) ∈ [18, 22]^{2×2} are shown in Fig. 5. Note that the matrix-form random noise σ(t) ∈ [18, 22]^{2×2} can be deemed as the superposition of a matrix-form constant noise with each element being 20 and a matrix-form zero-mean random noise in [-2, 2]^{2×2}. As discussed in Remark 4, the output steady-state residual error of the IEZNN model (3) corresponding to the constant-noise part of the input is zero. Therefore, the steady-state residual error of the IEZNN model (3) for σ(t) ∈ [18, 22]^{2×2} is the same as that for σ(t) ∈ [-2, 2]^{2×2}. It can be seen from Fig. 5(a) that, starting with a randomly generated initial state, the residual error of the IEZNN model (3) with γ = λ = 10 converges within approximately 6 s and then remains at an order of 10^{-3}. In contrast, the residual errors of the CZNN model (4) and the GNN model (5) with γ = 10 do not converge to zero and remain at a relatively high level. In addition, the residual error of the IEZNN model (3) with γ = λ = 100 is shown in Fig. 5(b), which also rapidly converges toward zero and remains stable at the order of 10^{-4}. Besides, the residual errors of the CZNN model (4) and the GNN model (5) with γ = 100 do not converge to zero as time evolves, and each of them is around 1000 times larger than that of the IEZNN model (3) at t = 9.43 s.

In summary, the above simulation results, i.e., Figs. 2–5, have illustrated the efficacy and superiority of the proposed IEZNN model (3) for time-varying matrix inversion (1) with inherent tolerance to various kinds of noises.

B. Example 2

For further investigation, the proposed IEZNN model (3) is simulated for the time-varying Toeplitz matrix inversion.


Fig. 6. Residual errors ||E(t)||_F of the IEZNN model (3) with a randomly generated initial state X_0 ∈ [-2, 2]^{4×4}, γ = 10, and different values of λ for solving the time-varying inverse of matrix (14). (a) Residual errors ||E(t)||_F of the IEZNN model (3) with zero noise. (b) Residual errors ||E(t)||_F of the IEZNN model (3) in the presence of matrix-form constant noise with each element being 10. (c) Residual errors ||E(t)||_F of the IEZNN model (3) in the presence of matrix-form linear time-varying noise with each element being t. (d) Residual errors ||E(t)||_F of the IEZNN model (3) in the presence of matrix-form random noise σ(t) ∈ [-0.5, 0.5]^{4×4}.

The following time-varying Toeplitz matrix is considered:

A(t) = \begin{bmatrix} a_{11}(t) & a_{12}(t) & \cdots & a_{1n}(t) \\ a_{21}(t) & a_{22}(t) & \cdots & a_{2n}(t) \\ a_{31}(t) & a_{32}(t) & \cdots & a_{3n}(t) \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}(t) & a_{n2}(t) & \cdots & a_{nn}(t) \end{bmatrix} \in \mathbb{R}^{n\times n}   (14)

with a_{ij}(t) denoting the ij-th element of A(t). Thereinto,

a_{ij}(t) = \begin{cases} n + \sin(5t), & i = j \\ \cos(5t)/(i - j), & i > j \\ \sin(5t)/(j - i), & i < j. \end{cases}
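For concreteness, a small helper (an illustrative sketch, not from the paper) that builds the time-varying Toeplitz matrix (14) for a given size n according to the piecewise definition of a_{ij}(t) above:

import numpy as np

def toeplitz_A(t, n=4):
    A = np.empty((n, n))
    for i in range(1, n + 1):                # 1-based indices, as in the paper
        for j in range(1, n + 1):
            if i == j:
                A[i - 1, j - 1] = n + np.sin(5 * t)
            elif i > j:
                A[i - 1, j - 1] = np.cos(5 * t) / (i - j)
            else:
                A[i - 1, j - 1] = np.sin(5 * t) / (j - i)
    return A

print(toeplitz_A(0.0))   # at t = 0: diagonal entries n, lower triangle 1/(i-j), upper triangle 0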

Due to the complexity of matrix (14) (with 16 elements in this paper), the analytical theoretical inverse is difficult to obtain. Therefore, we only present the residual errors ||E(t)||_F = ||A(t)X(t) - I||_F synthesized by the IEZNN model (3) under different situations of γ² - 4λ. That is, with the value of γ being 10, the values of λ being 10, 25, and 30 correspond to the situations of γ² - 4λ > 0, γ² - 4λ = 0, and γ² - 4λ < 0, respectively. The corresponding simulation results of the IEZNN model (3) are shown in Fig. 6, with those of the CZNN model (4) and the GNN model (5) omitted; they can be found in [4].

Specifically, as shown in Fig. 6(a), all the residual errors of the IEZNN model (3) using different values of λ converge to zero rapidly in the zero-noise situation. It is worth noting that the accuracy of the routine ode45 in MATLAB is of order 10^{-6} by default [37], and thus the highest solution accuracy of the IEZNN model (3) in MATLAB is of the same order by default. That is, with the residual errors of the IEZNN model (3) in Fig. 6(a) being of order 10^{-6}, the time-varying matrix inversion problem (14) can be viewed as being solved in an accurate manner in real time. In addition, the simulation results of the IEZNN model (3) solving for the time-varying inverse of matrix (14) under the matrix-form constant noise are shown in Fig. 6(b), from which we can observe that the residual errors of the IEZNN model (3) with different parameters also converge to near zero rapidly.


Fig. 7. Motion trajectories, desired path, actual trajectory, and position-error profiles synthesized by the CZNN model in the presence of matrix-form time-varying noise with each element being t. (a) Motion trajectories. (b) Desired path and actual trajectory. (c) Corresponding position error e_p = [e_X, e_Y]^T = [(r_d - φ(q))_X, (r_d - φ(q))_Y]^T of end-effector tracking.

Fig. 8. Motion trajectories, desired path, actual trajectory, and position-error profiles synthesized by the IEZNN model in the presence of matrix-form time-varying noise with each element being t. (a) Motion trajectories. (b) Desired path and actual trajectory. (c) Corresponding position error e_p = [e_X, e_Y]^T = [(r_d - φ(q))_X, (r_d - φ(q))_Y]^T of end-effector tracking.

Moreover, the simulation results under matrix-form linear time-varying noise are visualized in Fig. 6(c), from which we can see that all the residual errors of the IEZNN model (3) with different parameters remain constant and do not have increasing trends. Besides, Fig. 6(d) shows the simulation results under matrix-form random noise, from which we can find that the residual errors remain at an order of 10^{-3}. In summary, these results verify the theorems presented in Section III and further show the superiority of the proposed IEZNN model (3) for real-time-varying matrix inversion (1) once again.

C. Example 3 (Inverse Kinematic Motion Planning)

In this part, we consider the inverse kinematic motion planning of a planar robot manipulator using the IEZNN model to demonstrate its potential in real applications. The robot used in this example is the two-link planar robot manipulator shown in [4]. For such a robot, its joint-angle vector is denoted by q = [q_1, q_2]^T ∈ R^2. Then, we have the following pointwise linear relation between the desired end-effector Cartesian velocity \dot{r}_d(t) and the joint velocity \dot{q}(t):

\dot{r}_d(t) = J(t)\dot{q}(t)   (15)

where J(t) is the Jacobian matrix defined as J(t) = ∂φ(q)/∂q, with φ(·) denoting the forward-kinematic mapping. For (15), we can obtain analytically the solution to the inverse-kinematics problem: \dot{q}(t) = J^{-1}(t)\dot{r}_d(t), where J^{-1}(t) denotes the inverse of the time-varying Jacobian matrix. Evidently, for the purpose of robot motion planning, we need to obtain J^{-1}(t) in real time. We can reexploit the IEZNN model (3) and the CZNN model (4) to solve for J^{-1}(t) online. The corresponding IEZNN model for the inverse kinematic motion planning of robot arm (15) is

\dot{q}(t) = X(t)\dot{r}_d(t)
J(t)\dot{X}(t) = -\dot{J}(t)X(t) - \gamma\bigl(J(t)X(t) - I\bigr) - \lambda \int_0^t \bigl(J(\tau)X(\tau) - I\bigr)\,d\tau.

Besides, the corresponding CZNN model for the inverse kinematic motion planning of robot arm (15) is

\dot{q}(t) = X(t)\dot{r}_d(t)
J(t)\dot{X}(t) = -\dot{J}(t)X(t) - \gamma\bigl(J(t)X(t) - I\bigr).
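To make the robotic setting concrete, the sketch below (our illustration under the usual two-link planar-arm kinematics, with the link lengths l_1 = l_2 = 1 m and initial joint state from the text; the desired velocity value is a made-up placeholder) shows the pointwise relation \dot{q} = J^{-1}(q)\dot{r}_d that the IEZNN model tracks online:

import numpy as np

l1 = l2 = 1.0

def phi(q):                        # forward kinematics of the end-effector position
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def J(q):                          # Jacobian matrix dphi/dq
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([np.pi / 3, np.pi / 4])       # initial joint state q(0) from the text
rdot_d = np.array([0.05, 0.0])             # hypothetical desired end-effector velocity (m/s)
qdot = np.linalg.solve(J(q), rdot_d)       # pointwise inverse-kinematics solution
print(qdot)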

In the simulation, the lengths of the links are set as l_1 = l_2 = 1 m, and the initial joint state is q(0) = [π/3, π/4]^T rad. In addition, the end-effector is expected to track a square path with side length 0.5 m, and the motion-task duration is 40 s. Besides, λ = γ = 100. The corresponding simulation results are shown in Figs. 7 and 8.


Specifically, it can be seen from Fig. 7(a) and (b) that the actual trajectory generated by applying the CZNN model for real-time matrix inversion cannot track the desired square path accurately in the presence of additive noises. As shown in Fig. 7(c), the maximal position tracking error generated by the CZNN model is as large as 0.15 m.

The motion trajectories synthesized by the IEZNN model are shown in Fig. 8(a). It can be seen from Fig. 8(b) that the actual trajectory generated by the IEZNN model is very close to the desired path and the given task is fulfilled well. The maximal position error visualized in Fig. 8(c) is 8 × 10^{-5} m, which is several orders of magnitude lower than the 0.15 m error of the CZNN model. This application verifies the theorems presented in Section III and further demonstrates the efficacy and superiority of the proposed IEZNN model for solving the time-varying matrix inversion problem of robot manipulators. Therefore, it is worth applying the IEZNN model to inverse kinematic motion planning in noisy environments.

V. CONCLUSION

To handle noises when solving a time-varying problem, a novel IEZNN design formula has been proposed in this paper. Then, the IEZNN model (3) has been proposed and investigated for real-time-varying matrix inversion based on the novel IEZNN design formula. For comparison, the CZNN model (4) and the GNN model (5) have been employed for the same solving task. In addition, theoretical analyses have shown that the proposed IEZNN model (3) has the globally exponential convergence property. Moreover, in the presence of various kinds of noises, the proposed IEZNN model (3) has been proven to have an improved performance. That is, the proposed IEZNN model converges to the theoretical solution of the time-varying matrix inversion problem no matter how large the matrix-form constant noise is, and it is highly tolerant to various kinds of noises. Finally, three illustrative simulation examples have been provided and analyzed to substantiate the efficacy and superiority of the proposed IEZNN model (3) for real-time-varying matrix inversion. Before ending this section as well as this paper, it is worth mentioning that this is the first time that such a model has been proposed for time-varying problem solving with superior robustness against various kinds of noises, which is a major breakthrough in ZNN research as well as in the research of time-varying problem solving.

REFERENCES

[1] Y. Wang and H. Leib, "Sphere decoding for MIMO systems with Newton iterative matrix inversion," IEEE Commun. Lett., vol. 17, no. 2, pp. 389–392, Feb. 2013.
[2] B. Gu and V. S. Sheng, "Feasibility and finite convergence analysis for accurate on-line ν-support vector machine," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 8, pp. 1304–1315, Aug. 2013.
[3] Y. Zhang and S. S. Ge, "Design and analysis of a general recurrent neural network model for time-varying matrix inversion," IEEE Trans. Neural Netw., vol. 16, no. 6, pp. 1477–1490, Nov. 2005.
[4] D. Guo and Y. Zhang, "Zhang neural network, Getz–Marsden dynamic system, and discrete-time algorithms for time-varying matrix inversion with application to robots' kinematic control," Neurocomputing, vol. 97, pp. 22–32, Nov. 2012.
[5] L. Ma, K. Dickson, J. McAllister, and J. McCanny, "QR decomposition-based matrix inversion for high performance embedded MIMO receivers," IEEE Trans. Signal Process., vol. 59, no. 4, pp. 1858–1867, Apr. 2011.
[6] J. H. Wilkinson, "Error analysis of direct methods of matrix inversion," J. ACM, vol. 8, no. 3, pp. 281–330, Jul. 1961.
[7] F. C. Chang, "Inversion of a perturbed matrix," Appl. Math. Lett., vol. 19, no. 2, pp. 169–173, Feb. 2006.
[8] W. E. Leithead and Y. Zhang, "O(N²)-operation approximation of covariance matrix inverse in Gaussian process regression based on quasi-Newton BFGS method," Commun. Statist. Simul. Comput., vol. 36, no. 2, pp. 367–380, Mar. 2007.
[9] Y. Zhang, D. Jiang, and J. Wang, "A recurrent neural network for solving Sylvester equation with time-varying coefficients," IEEE Trans. Neural Netw., vol. 13, no. 5, pp. 1053–1063, Sep. 2002.
[10] Y. Zhang, K. Chen, and H.-Z. Tan, "Performance analysis of gradient neural network exploited for online time-varying matrix inversion," IEEE Trans. Autom. Control, vol. 54, no. 8, pp. 1940–1945, Aug. 2009.
[11] Y. Chen, C. Yi, and D. Qiao, "Improved neural solution for the Lyapunov matrix equation based on gradient search," Inf. Process. Lett., vol. 113, nos. 22–24, pp. 876–881, Nov./Dec. 2013.
[12] C. Yi, Y. Chen, and Z. Lu, "Improved gradient-based neural networks for online solution of Lyapunov matrix equation," Inf. Process. Lett., vol. 111, no. 16, pp. 780–786, Aug. 2011.
[13] J. Na, X. Ren, and D. Zheng, "Adaptive control for nonlinear pure-feedback systems with high-order sliding mode observer," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 3, pp. 370–382, Mar. 2013.
[14] J. Wang, "A recurrent neural network for real-time matrix inversion," Appl. Math. Comput., vol. 55, no. 1, pp. 89–100, Apr. 1993.
[15] P. S. Stanimirović, I. S. Živković, and Y. Wei, "Recurrent neural network for computing the Drazin inverse," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 11, pp. 2830–2843, Nov. 2015.
[16] P. S. Stanimirović, I. S. Živković, and Y. Wei, "Recurrent neural network approach based on the integral representation of the Drazin inverse," Neural Comput., vol. 27, no. 10, pp. 2107–2131, Oct. 2015.
[17] Q. Liu and J. Wang, "A one-layer projection neural network for nonsmooth optimization subject to linear equalities and bound constraints," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 5, pp. 812–824, May 2013.
[18] Z. Guo, Q. Liu, and J. Wang, "A one-layer recurrent neural network for pseudoconvex optimization subject to linear equality constraints," IEEE Trans. Neural Netw., vol. 22, no. 12, pp. 1892–1900, Dec. 2011.
[19] Y. Shen and J. Wang, "Almost sure exponential stability of recurrent neural networks with Markovian switching," IEEE Trans. Neural Netw., vol. 20, no. 5, pp. 840–855, May 2009.
[20] Y. Shen and J. Wang, "Robustness analysis of global exponential stability of recurrent neural networks in the presence of time delays and random disturbances," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 1, pp. 83–96, Jan. 2012.
[21] R. Rakkiyappan, J. Cao, and G. Velmurugan, "Existence and uniform stability analysis of fractional-order complex-valued neural networks with time delays," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 1, pp. 84–97, Jan. 2015.
[22] R. Rakkiyappan, A. Chandrasekar, and J. Cao, "Passivity and passification of memristor-based recurrent neural networks with additive time-varying delays," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 9, pp. 2043–2057, Sep. 2015.
[23] S. Li, B. Liu, and Y. Li, "Selective positive–negative feedback produces the winner-take-all competition in recurrent neural networks," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 2, pp. 301–309, Feb. 2013.
[24] H. Zhang, Z. Wang, and D. Liu, "A comprehensive review of stability analysis of continuous-time recurrent neural networks," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 7, pp. 1229–1262, Jul. 2014.
[25] K. Chen, "Recurrent implicit dynamics for online matrix inversion," Appl. Math. Comput., vol. 219, no. 20, pp. 10218–10224, Jun. 2013.
[26] D. Guo and Y. Zhang, "Zhang neural network for online solution of time-varying linear matrix inequality aided with an equality conversion," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 2, pp. 370–382, Feb. 2014.
[27] L. Jin and Y. Zhang, "Discrete-time Zhang neural network for online time-varying nonlinear optimization with application to manipulator motion generation," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 7, pp. 1525–1531, Jul. 2015.
[28] J. Liang, Z. Wang, Y. Liu, and X. Liu, "Robust synchronization of an array of coupled stochastic discrete-time delayed neural networks," IEEE Trans. Neural Netw., vol. 19, no. 11, pp. 1910–1921, Nov. 2008.
[29] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis. Cambridge, U.K.: Cambridge Univ. Press, 1991.

[30] J. Lian and J. Wang, "Passivity of switched recurrent neural networks with time-varying delays," IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 2, pp. 357–366, Feb. 2015.
[31] B. Ren, S. S. Ge, K. P. Tee, and T. H. Lee, "Adaptive neural control for output feedback nonlinear systems using a barrier Lyapunov function," IEEE Trans. Neural Netw., vol. 21, no. 8, pp. 1339–1345, Aug. 2010.
[32] Z. Zhang and Y. Zhang, "Design and experimentation of acceleration-level drift-free scheme aided by two recurrent neural networks," IET Control Theory Appl., vol. 7, no. 1, pp. 25–42, Jan. 2013.
[33] A. V. Oppenheim and A. S. Willsky, Signals and Systems. Englewood Cliffs, NJ, USA: Prentice-Hall, 1997.
[34] R. G. Brown and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering. New York, NY, USA: Wiley, 1996.
[35] D. H. Dini and D. P. Mandic, "Class of widely linear complex Kalman filters," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 5, pp. 775–786, May 2012.
[36] D. V. Prokhorov, "Training recurrent neurocontrollers for robustness with derivative-free Kalman filter," IEEE Trans. Neural Netw., vol. 17, no. 6, pp. 1606–1616, Nov. 2006.
[37] J. H. Mathews and K. K. Fink, Numerical Methods Using MATLAB. Englewood Cliffs, NJ, USA: Prentice-Hall, 2004.

Long Jin received the B.S. degree in automation from Sun Yat-sen University (SYSU), Guangzhou, China, in 2011, where he is currently pursuing the Ph.D. degree in information and communication engineering with the School of Information Science and Technology.

He is also with the SYSU–CMU Shunde International Joint Research Institute, Foshan, China, for cooperative research. His current research interests include neural networks, robotics, and intelligent information processing.

Yunong Zhang (S'02–M'03) received the B.S. degree from the Huazhong University of Science and Technology, Wuhan, China, in 1996, the M.S. degree from the South China University of Technology, Guangzhou, China, in 1999, and the Ph.D. degree from the Chinese University of Hong Kong, Hong Kong, in 2003.

He had been with the National University of Ireland, Maynooth, Ireland, the University of Strathclyde, Glasgow, U.K., and the National University of Singapore, Singapore, since 2003. He joined Sun Yat-sen University (SYSU), Guangzhou, China, in 2006, where he is currently a Professor with the School of Information Science and Technology. He is also with the SYSU–CMU Shunde International Joint Research Institute, Foshan, China, for cooperative research. His current research interests include robotics, neural networks, computation, and optimization.

Shuai Li (M'14) received the B.E. degree in precision mechanical engineering from the Hefei University of Technology, Hefei, China, in 2005, the M.E. degree in automatic control engineering from the University of Science and Technology of China, Hefei, in 2008, and the Ph.D. degree in electrical and computer engineering from the Stevens Institute of Technology, Hoboken, NJ, USA, in 2014.

He is currently a Research Assistant Professor with the Department of Computing, The Hong Kong Polytechnic University, Hong Kong. His current research interests include dynamic neural networks, wireless sensor networks, robotic networks, machine learning, and other dynamic problems defined on a graph.

Prof. Li is on the Editorial Board of the International Journal of Distributed Sensor Networks.
