
This paper studies the distributed synchronization control problem for a class of stochastic dynamical systems with time-varying delays and random noise under randomly occurring control. Both the activation of the distributed adaptive controller and the update of its control gain occur randomly. Based on Lyapunov stability theory and the LaSalle invariance principle, combined with properties of the matrix Kronecker product, stochastic differential equation theory, and other related tools, and by constructing an appropriate Lyapunov functional, a criterion is obtained for the distributed synchronization in mean square of this class of stochastic complex networks.

Complex networks are ubiquitous in human society and in the natural world it inhabits. Many are developed and constructed by humans, intentionally or otherwise: interpersonal networks, road networks, railway networks, power grids, the Internet, and news dissemination networks. Others exist in the objective world without human involvement: ecological networks, biological neural networks, metabolic networks, plant and animal gene networks, and so on. Driven by the rapid development of science and technology, accelerating global integration, and massive worldwide investment in public infrastructure, the construction and understanding of complex networks have advanced at an unprecedented pace. This progress has greatly improved people's work efficiency and quality of life, for example through the popularization of mobile communication networks, social software, and GPS positioning and navigation. At the same time, it brings people various new or latent risks: infectious diseases spreading rapidly along complex networks, large-scale blackouts caused by power-grid failures, and enormous losses from computer viruses propagating over high-speed networks. Given these existing and potential applications, and in order to make complex networks serve people better while mitigating the associated network risks, the theory and applications of complex networks have become a highly active research area over the past few decades, with significant practical value and academic importance.

Synchronization [

Random phenomena are common in nature, and many practical systems are inevitably affected by them. For example, signals transmitted between nodes in complex dynamical networks are often affected by network bandwidth, the conducting medium, measurement noise, and similar factors; under the influence of such random factors, information may be randomly lost or arrive incomplete [

In summary, complex network systems are inevitably affected by random factors such as time-delays and external disturbances. Common sources of time-delay include transmission, computation, communication, and actuation; common external disturbances include environmental noise, internal random failures, and external attacks. In some cases these factors may severely degrade the stability of the system, or even drive it completely away from its intended behavior. Therefore, when studying the synchronization of complex networks, it is necessary to incorporate such random factors, including transmission delays and environmental noise, into the model, and to design a more robust controller that suppresses the negative effects of delay and noise so as to achieve better control accuracy. Although the reference [

Based on the above point of view, this article generalizes the complex network model proposed in [

Notations: Let ℝ ( ℝ + ) denote the set of real (positive real) numbers. ℝ n and ℝ n × m denote, respectively, the n-dimensional Euclidean space and the set of all n × m real matrices.

A > 0 denotes that the matrix A is symmetric and positive definite. The notation A T is the transpose of a vector or matrix A. I represents the identity matrix of appropriate dimensions. ‖ A ‖ denotes the Euclidean norm of a matrix A, and λ max ( A ) (respectively, λ min ( A ) ) denotes the maximum (respectively, minimum) eigenvalue of the matrix A. The symbol t r a c e ( A ) represents the trace of a square matrix A = ( a i j ) n × n , i.e., t r a c e ( A ) = ∑ i = 1 n a i i . ⊗ stands for the Kronecker product, and E { X } represents the expectation of the random variable X. Define a graph by G = [ V , E ] , where V = { 1 , ⋯ , N } denotes the vertex set and E = { e ( i , j ) } denotes the edge set. N ( i ) denotes the neighborhood of vertex i, in the sense N ( i ) = { j ∈ V : e ( i , j ) ∈ E } . In this paper, graph G is assumed to be undirected [ e ( i , j ) ∈ E implies e ( j , i ) ∈ E ] and simple (without self-loops or multiple edges). Let L = [ l i j ] i , j = 1 N be the Laplacian matrix of graph G , defined as follows: for any pair i ≠ j , l i j = l j i = − 1 if e ( i , j ) ∈ E and l i j = l j i = 0 otherwise, while l i i = − ∑ j = 1 , j ≠ i N l i j equals the degree of vertex i ( i = 1 , 2 , ⋯ , N ) . Let ( Ω , F , P ) be a complete probability space, where Ω represents a sample space, F is a σ -algebra, and P is a probability measure.

In this paper, we consider the following model of a complex network stochastic system with time-varying delays, which can be expressed as:

d x i ( t ) = { A x i ( t ) + f ( t , x i ( t ) , x i ( t − τ ( t ) ) ) + c ∑ j ∈ N ( i ) Γ [ x j ( t − τ ( t ) ) − x i ( t − τ ( t ) ) ] } d t + σ ( t , x i ( t ) , x i ( t − τ ( t ) ) ) d w ( t ) , i = 1 , 2 , ⋯ , N (1)

where x i ( t ) = [ x i 1 ( t ) , x i 2 ( t ) , ⋯ , x i n ( t ) ] T ∈ ℝ n ( i = 1 , 2 , ⋯ , N ) is the state vector of the ith vertex, A ∈ ℝ n × n is a constant matrix, f ( t , x i ( t ) , x i ( t − τ ( t ) ) ) : ℝ × ℝ n × ℝ n → ℝ n is a continuous nonlinear vector-valued function. The positive constant c > 0 is the coupling strength of the network. The inner coupling matrix Γ = d i a g { η 1 , η 2 , ⋯ , η n } > 0 is a constant diagonal matrix. τ ( t ) is a time-varying delay. Furthermore, σ ( t , x i ( t ) , x i ( t − τ ( t ) ) ) : ℝ × ℝ n × ℝ n → ℝ n is the noise intensity function vector, and w ( t ) is a scalar Brownian motion defined on ( Ω , F , P ) satisfying E { d w ( t ) } = 0 and E { [ d w ( t ) ] 2 } = d t .
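As a rough illustration of how trajectories of model (1) can be generated, the Euler–Maruyama sketch below simulates a small network with a delay buffer. All numerical values, the ring topology, and the particular choices of f and σ are hypothetical, chosen only to satisfy the later Assumptions 1 and 2; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-vertex ring network with 2-D node states.
N, n = 4, 2
A = np.array([[0.0, 1.0], [-1.0, 0.0]])           # constant matrix A
Gamma = np.eye(n)                                  # inner coupling matrix Γ > 0
c = 0.5                                            # coupling strength c
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}

dt, steps = 1e-3, 1000
tau_max = 0.1                                      # upper bound on τ(t)
buf = int(tau_max / dt)                            # delay buffer length in steps

def f(t, x, xd):                                   # illustrative nonlinearity (Assumption 1)
    return 0.1 * np.tanh(x) + 0.05 * np.tanh(xd)

def sigma(t, x, xd):                               # illustrative noise intensity (Assumption 2)
    return 0.1 * x + 0.05 * xd

history = [rng.standard_normal((N, n))] * (buf + 1)    # constant initial history

for k in range(steps):
    t = k * dt
    d = int(round((tau_max * (1 + np.sin(t)) / 2) / dt))   # τ(t) in steps
    x, xd = history[-1], history[-1 - d]
    dw = rng.standard_normal() * np.sqrt(dt)       # scalar Brownian increment
    new = np.empty_like(x)
    for i in range(N):
        coup = sum(Gamma @ (xd[j] - xd[i]) for j in neighbors[i])
        drift = A @ x[i] + f(t, x[i], xd[i]) + c * coup
        new[i] = x[i] + drift * dt + sigma(t, x[i], xd[i]) * dw
    history.append(new)
    if len(history) > buf + 1:
        history.pop(0)

print(history[-1].shape)
```

The delay is handled by looking back d steps into the state history; a finer integrator would be needed for quantitative studies.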

According to the definition and properties of the Laplacian matrix L = [ l i j ] i , j = 1 N above, it is easy to see that Formula (1) can be rewritten as

d x i ( t ) = { A x i ( t ) + f ( t , x i ( t ) , x i ( t − τ ( t ) ) ) − c ∑ j = 1 N l i j Γ x j ( t − τ ( t ) ) } d t + σ ( t , x i ( t ) , x i ( t − τ ( t ) ) ) d w ( t ) , i = 1 , 2 , ⋯ , N (2)

Additionally, from the Gershgorin disk theorem, all the eigenvalues of the Laplacian matrix L corresponding to graph G satisfy 0 = λ 1 ( L ) ≤ λ 2 ( L ) ≤ ⋯ ≤ λ N ( L ) . Furthermore, G is connected if and only if λ 2 ( L ) > 0 , i.e., L is irreducible.
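These spectral properties are easy to check numerically; the path graph below is one illustrative example (any connected undirected graph behaves the same way).

```python
import numpy as np

# Laplacian of an undirected path graph on N = 4 vertices (a connected graph).
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)

eig = np.sort(np.linalg.eigvalsh(L))
print(np.isclose(eig[0], 0.0))   # λ1(L) = 0 always (L times the all-ones vector is 0)
print(eig[1] > 0)                # λ2(L) > 0 since this graph is connected
```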

In order to achieve synchronization of the stochastic complex network in (1) or (2), a controller is added to each vertex:

d x i ( t ) = { A x i ( t ) + f ( t , x i ( t ) , x i ( t − τ ( t ) ) ) + c ∑ j ∈ N ( i ) Γ [ x j ( t − τ ( t ) ) − x i ( t − τ ( t ) ) ] + u i ( t ) } d t + σ ( t , x i ( t ) , x i ( t − τ ( t ) ) ) d w ( t ) , i = 1 , 2 , ⋯ , N (3)

where u i ( t ) is a distributed adaptive controller.

For the ith vertex, u i ( t ) is designed as

u i ( t ) = ρ ( t ) ∑ j ∈ N ( i ) ε i ( t ) Γ ( x j ( t ) − x i ( t ) ) , i = 1 , 2 , ⋯ , N (4)

where ε i ( t ) is the control strength of vertex i.

In (4), ρ ( t ) is a Bernoulli stochastic variable that describes the following random events for (3):

{ Event 1 : ( 3 ) experiences ( 4 ) Event 2 : ( 3 ) does not experience ( 4 ) (5)

Let ρ ( t ) be defined by

ρ ( t ) = { 1 , if Event 1 occurs 0 , if Event 2 occurs (6)

where the probability of event { ρ ( t ) = 1 } is Pr { ρ ( t ) = 1 } = ρ ∈ [ 0 , 1 ] , so the expectation of random variable ρ ( t ) is E { ρ ( t ) } = ρ .

The distributed controller (4) in this paper takes stochastic disturbances into account, using a Bernoulli stochastic variable to model them. The distributed controller u i ( t ) is activated in a probabilistic manner and uses feedback information from neighboring vertices. Unlike a conventional adaptive controller, u i ( t ) is not always implemented, which allows control failure to be modeled stochastically. In short, randomly occurring distributed control effectively uses the information of neighboring vertices while capturing real-world disturbances.
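A minimal sketch of controller (4) is given below; the function name, argument layout, and sampling of ρ ( t ) inside the call are illustrative assumptions, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(1)

def u(i, x, eps_i, Gamma, neighbors, rho_prob):
    """Randomly occurring distributed controller (4), illustrative sketch.

    The Bernoulli variable rho(t) is sampled on each call: with probability
    rho_prob the controller is active; otherwise it returns zero, modeling
    a control failure at this instant.
    """
    if rng.random() >= rho_prob:          # rho(t) = 0: controller not applied
        return np.zeros_like(x[i])
    return eps_i * sum(Gamma @ (x[j] - x[i]) for j in neighbors[i])
```

For a two-vertex graph with Γ = I, an active controller simply returns ε i ( x j − x i ) , the scaled disagreement with the neighbor.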

ε i ( t ) in (4) is updated according to the following randomly occurring distributed updating law:

d ε i ( t ) = ξ ( t ) α [ ∑ j ∈ N ( i ) ( x j ( t ) − x i ( t ) ) ] T Γ [ ∑ j ∈ N ( i ) ( x j ( t ) − x i ( t ) ) ] d t , i = 1 , 2 , ⋯ , N (7)

where α > 0 and ξ ( t ) is a Bernoulli stochastic variable representing the following random events for (7):

{ Event 3 : ε i ( t ) experiences ( 7 ) Event 4 : ε i ( t ) does not experience ( 7 ) (8)

Similarly, let ξ ( t ) be defined by

ξ ( t ) = { 1 , if Event 3 occurs 0 , if Event 4 occurs (9)

where the probability of event { ξ ( t ) = 1 } is Pr { ξ ( t ) = 1 } = ξ ∈ [ 0 , 1 ] , so the expectation of stochastic variable ξ ( t ) is E { ξ ( t ) } = ξ .
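The updating law (7) can be discretized with one Euler step per sampling instant, as in the sketch below; the function name and parameter layout are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def step_gain(eps_i, i, x, neighbors, Gamma, alpha, xi_prob, dt):
    """One Euler step of the randomly occurring updating law (7); illustrative."""
    if rng.random() >= xi_prob:                   # xi(t) = 0: no update this step
        return eps_i
    s = sum(x[j] - x[i] for j in neighbors[i])    # Σ_{j∈N(i)} (x_j − x_i)
    return eps_i + alpha * (s @ Gamma @ s) * dt   # nondecreasing since Γ > 0, α > 0
```

Because the increment is a quadratic form in a positive definite Γ, the gain ε i ( t ) never decreases; it simply stops growing once the neighbors' states agree.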

Remark 1: If ρ = 1 and ξ = 1 , the control and updating rule will be simplified to normal control and updating law. If ρ = 0 and ξ = 0 , the problem considered in this article will be simplified to the synchronization of complex networks without controllers.

It can be seen from the above model that the complex network studied in this paper has the following characteristics:

1) The model contains random terms that characterize environmental noise.

2) The activation of the controller and the updating law of control gain both occur in a probabilistic manner, and the distributed synchronization of stochastic complex networks is studied by considering the random occurrence of control and update laws.

3) The effect of time-delays is considered, and the delays are time-varying, which makes the model more general.

The following definition, assumptions and lemmas are needed for deriving the main results.

Definition 1 [

lim t → ∞ E ∑ i = 1 N ‖ x i ( t ) − x j ( t ) ‖ 2 = 0 , i , j = 1 , 2 , ⋯ , N (10)

then the stochastic complex network is said to achieve synchronization in mean square.
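The quantity inside the expectation in (10) can be monitored along a simulated trajectory; the helper below computes the sample pairwise error for one snapshot of the network state (the function name is an assumption for illustration).

```python
import numpy as np

def sync_error(x):
    """Sample version of the quantity in (10): Σ_{i,j} ‖x_i − x_j‖² over all pairs."""
    N = x.shape[0]
    return sum(np.sum((x[i] - x[j]) ** 2) for i in range(N) for j in range(N))

x_sync = np.ones((4, 2))          # all nodes in the same state
print(sync_error(x_sync))         # 0.0
```

Synchronization in mean square corresponds to the expectation of this quantity tending to zero as t → ∞.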

Lemma 1 (Itô formula) [

d x ( t ) = f ( t ) d t + g ( t ) d w ( t )

Let V ( x , t ) be a real-valued function that is twice continuously differentiable in x and once differentiable in t. Then V ( x ( t ) , t ) is again an Itô process, with stochastic differential given by

d V ( x ( t ) , t ) = [ V t ( x ( t ) , t ) + V x ( x ( t ) , t ) f ( t ) + 1 2 t r a c e ( g T ( t ) V x x ( x ( t ) , t ) g ( t ) ) ] d t + V x ( x ( t ) , t ) g ( t ) d w ( t ) (11)
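As a quick scalar illustration of (11), take V ( x , t ) = x 2 , so that V t = 0 , V x = 2 x and V x x = 2 ; the Itô formula then gives

```latex
% Scalar illustration of (11) with V(x,t) = x^2:
%   V_t = 0, \quad V_x = 2x, \quad V_{xx} = 2,
% so for dx(t) = f(t)\,dt + g(t)\,dw(t),
d\bigl(x^2(t)\bigr) = \bigl[\, 2x(t)f(t) + g^2(t) \,\bigr] dt + 2x(t)g(t)\,dw(t)
% the g^2(t)\,dt term is the Itô correction absent from the classical chain rule.
```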

Lemma 2 [

± 2 x T y ≤ x T P x + y T P − 1 y . (12)

Lemma 3 [

λ min ( P ) ‖ x ‖ 2 ≤ x T P x ≤ λ max ( P ) ‖ x ‖ 2 . (13)
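The Rayleigh-quotient bound (13) is easy to sanity-check numerically; the matrix and vector below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

P = np.array([[2.0, 0.5], [0.5, 1.0]])        # symmetric positive definite
lmin, lmax = np.linalg.eigvalsh(P)[[0, -1]]   # eigvalsh returns ascending order

x = rng.standard_normal(2)
quad = x @ P @ x
nrm2 = x @ x
print(lmin * nrm2 <= quad <= lmax * nrm2)     # True: Rayleigh quotient bounds
```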

Assumption 1 [

‖ f ( t , ξ 1 ( t ) , ξ 1 ( t − τ ( t ) ) ) − f ( t , ξ 2 ( t ) , ξ 2 ( t − τ ( t ) ) ) ‖ 2 ≤ β 1 ‖ ξ 1 ( t ) − ξ 2 ( t ) ‖ 2 + β 2 ‖ ξ 1 ( t − τ ( t ) ) − ξ 2 ( t − τ ( t ) ) ‖ 2 (14)

which holds for all ξ 1 ( t ) , ξ 2 ( t ) ∈ ℝ n and t > 0 .

Assumption 2 [

t r a c e [ ( σ ( t , ξ 1 , η 1 ) − σ ( t , ξ 2 , η 2 ) ) T ( σ ( t , ξ 1 , η 1 ) − σ ( t , ξ 2 , η 2 ) ) ] ≤ γ 1 ( ξ 1 − ξ 2 ) T ( ξ 1 − ξ 2 ) + γ 2 ( η 1 − η 2 ) T ( η 1 − η 2 ) (15)

which holds for all ξ 1 ( t ) , ξ 2 ( t ) , η 1 ( t ) , η 2 ( t ) ∈ ℝ n and t > 0 .

Assumption 3: The time-varying delay τ ( t ) is a bounded, continuously differentiable function satisfying 0 ≤ τ ˙ ( t ) ≤ τ ¯ < 1 , where τ ˙ ( t ) denotes the derivative of τ ( t ) with respect to t.

In this section, we will derive the main results of the distributed synchronous control of a class of stochastic dynamical systems (3) with time-varying delays and random noise via randomly occurring control and updating law.

Theorem 1: Suppose that the nonlinear function f ( ⋅ , ⋅ , ⋅ ) in the stochastic complex network (3) satisfies Assumption 1, the noise intensity function σ ( ⋅ , ⋅ , ⋅ ) satisfies Assumption 2, and the time-varying delay τ ( t ) satisfies Assumption 3; then the stochastic complex network (3) achieves synchronization in mean square under the distributed adaptive controller (4) and updating law (7), if

[ 1 2 ( β 1 + β 2 + γ 1 + γ 2 + 1 ) + λ max ( A T A ) ] I N < [ ρ b η − c 2 − P τ ¯ − c 2 λ max ( Γ Γ T ) ] L (16)
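Condition (16) can be checked numerically for a given graph and parameter set. Since L always has the eigenvalue 0 (with the all-ones eigenvector), the inequality between a multiple of I N and a multiple of L is understood here on the subspace orthogonal to the all-ones vector, i.e., as a scalar comparison against λ 2 ( L ) ; this reading, and every numeric value below, is an assumption made for illustration.

```python
import numpy as np

# Hypothetical parameter values for checking condition (16).
beta1, beta2, gamma1, gamma2 = 0.1, 0.05, 0.1, 0.05
c, rho, b, eta, P, tau_bar = 0.5, 0.8, 5.0, 1.0, 0.1, 0.2
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
Gamma = np.eye(2)

# Laplacian of a path graph on 4 vertices; λ2(L) is its algebraic connectivity.
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
lam2 = np.sort(np.linalg.eigvalsh(L))[1]

lhs = 0.5 * (beta1 + beta2 + gamma1 + gamma2 + 1) + np.linalg.eigvalsh(A.T @ A)[-1]
rhs = (rho * b * eta - c / 2 - P * tau_bar
       - c / 2 * np.linalg.eigvalsh(Gamma @ Gamma.T)[-1]) * lam2
print(lhs < rhs)   # True for these values: condition (16) holds
```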

Proof: Let e i j ( t ) = x i ( t ) − x j ( t ) , ∀ i , j = 1 , 2 , ⋯ , N and x = [ x 1 T , x 2 T , ⋯ , x N T ] T ∈ ℝ n N .

Consider the following Lyapunov function

V ( t ) = V 1 ( t ) + V 2 ( t ) + V 3 ( t ) (17)

where

V 1 ( t ) = 1 / 4 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T e i j (18)

V 2 ( t ) = ρ / ( 2 ξ α ) ⋅ ∑ i = 1 N ( ε i ( t ) − b ) 2 (19)

V 3 ( t ) = ∑ i = 1 N ∫ t − τ ( t ) t [ ∑ j ∈ N ( i ) e i j ( s ) ] T P [ ∑ j ∈ N ( i ) e i j ( s ) ] d s (20)

where b is a positive constant.

According to (3) and (4) we can easily obtain

d e i j ( t ) = d x i ( t ) − d x j ( t ) = { A [ x i ( t ) − x j ( t ) ] + [ f ( t , x i ( t ) , x i ( t − τ ( t ) ) ) − f ( t , x j ( t ) , x j ( t − τ ( t ) ) ) ] + c ∑ k ∈ N ( i ) Γ [ x k ( t − τ ( t ) ) − x i ( t − τ ( t ) ) ] − c ∑ m ∈ N ( j ) Γ [ x m ( t − τ ( t ) ) − x j ( t − τ ( t ) ) ] + ρ ( t ) ∑ k ∈ N ( i ) ε i ( t ) Γ [ x k ( t ) − x i ( t ) ] − ρ ( t ) ∑ m ∈ N ( j ) ε j ( t ) Γ [ x m ( t ) − x j ( t ) ] } d t + [ σ ( t , x i ( t ) , x i ( t − τ ( t ) ) ) − σ ( t , x j ( t ) , x j ( t − τ ( t ) ) ) ] d w ( t )

= { A e i j ( t ) + f ˜ ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) − 2 c ∑ j ∈ N ( i ) Γ e i j ( t − τ ( t ) ) − 2 ρ ( t ) ε i ( t ) Γ ∑ j ∈ N ( i ) e i j ( t ) } d t + σ ˜ ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) d w ( t ) (21)

where

f ˜ ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) = f ( t , x i ( t ) , x i ( t − τ ( t ) ) ) − f ( t , x j ( t ) , x j ( t − τ ( t ) ) )

σ ˜ ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) = σ ( t , x i ( t ) , x i ( t − τ ( t ) ) ) − σ ( t , x j ( t ) , x j ( t − τ ( t ) ) )

By the Lemma 1 (Itô formula), the stochastic derivative of V ( t ) can be obtained as

d V ( t ) = L V ( t ) d t + 1 2 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T σ ˜ ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) d w ( t ) (22)

and according to (21), the Itô differential operator L is given as

L V ( t ) = 1 2 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T ( t ) { A e i j ( t ) + f ˜ ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) − 2 c ∑ j ∈ N ( i ) Γ e i j ( t − τ ( t ) ) − 2 ρ ( t ) ε i ( t ) Γ ∑ j ∈ N ( i ) e i j } + ∑ i = 1 N ρ ξ ε i ( t ) ⋅ ξ ( t ) [ ∑ j ∈ N ( i ) e i j ] T Γ [ ∑ j ∈ N ( i ) e i j ] − ∑ i = 1 N ρ ξ b ⋅ ξ ( t ) [ ∑ j ∈ N ( i ) e i j ] T Γ [ ∑ j ∈ N ( i ) e i j ] + ∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ( t ) ] T P [ ∑ j ∈ N ( i ) e i j ( t ) ] − ∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ( t − τ ( t ) ) ] T P [ ∑ j ∈ N ( i ) e i j ( t − τ ( t ) ) ] ⋅ ( 1 − τ ˙ ( t ) )

+ 1 4 ∑ i = 1 N ∑ j ∈ N ( i ) σ ˜ T ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) σ ˜ ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) = 1 2 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T ( t ) A e i j ( t ) + 1 2 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T ( t ) f ˜ ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) − c ∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ( t ) ] T Γ [ ∑ j ∈ N ( i ) e i j ( t − τ ( t ) ) ] − ∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ( t ) ] T ρ ( t ) ε i ( t ) Γ [ ∑ j ∈ N ( i ) e i j ( t ) ]

+ ∑ i = 1 N ρ ξ ε i ( t ) ⋅ ξ ( t ) [ ∑ j ∈ N ( i ) e i j ] T Γ [ ∑ j ∈ N ( i ) e i j ] − ∑ i = 1 N ρ ξ b ⋅ ξ ( t ) [ ∑ j ∈ N ( i ) e i j ] T Γ [ ∑ j ∈ N ( i ) e i j ] + ∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ( t ) ] T P [ ∑ j ∈ N ( i ) e i j ( t ) ] − ∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ( t − τ ( t ) ) ] T P [ ∑ j ∈ N ( i ) e i j ( t − τ ( t ) ) ] ⋅ ( 1 − τ ˙ ( t ) ) + 1 4 ∑ i = 1 N ∑ j ∈ N ( i ) σ ˜ T ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) σ ˜ ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) (23)

Taking expectations with respect to ρ ( t ) and ξ ( t ) , we obtain E [ ρ ( t ) ] = ρ and E [ ( ρ / ξ ) ξ ( t ) ] = ρ ; furthermore,

E L V ( t ) = E { 1 2 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T ( t ) A e i j ( t ) + 1 2 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T ( t ) f ˜ ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) − c ∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ( t ) ] T Γ [ ∑ j ∈ N ( i ) e i j ( t − τ ( t ) ) ]

− ρ b ∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ] T Γ [ ∑ j ∈ N ( i ) e i j ] + ∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ( t ) ] T P [ ∑ j ∈ N ( i ) e i j ( t ) ] − ∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ( t − τ ( t ) ) ] T P [ ∑ j ∈ N ( i ) e i j ( t − τ ( t ) ) ] ⋅ ( 1 − τ ˙ ( t ) ) + 1 4 ∑ i = 1 N ∑ j ∈ N ( i ) σ ˜ T ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) σ ˜ ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) } (24)

From the definitions of e i j , x and the Laplacian matrix L, we get

1 2 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T e i j = x T ( L ⊗ I n ) x (25)

∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ] T Γ [ ∑ j ∈ N ( i ) e i j ] = x T ( L 2 ⊗ Γ ) x (26)
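Identities (25) and (26) can be verified numerically on any small graph; the ring example below uses a random state vector and a hypothetical diagonal Γ.

```python
import numpy as np

rng = np.random.default_rng(4)

# Ring graph on N = 4 vertices, state dimension n = 2.
Lap = np.array([[ 2, -1,  0, -1],
                [-1,  2, -1,  0],
                [ 0, -1,  2, -1],
                [-1,  0, -1,  2]], dtype=float)
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
N, n = 4, 2
Gamma = np.diag([1.0, 2.0])

X = rng.standard_normal((N, n))
x = X.reshape(-1)                                # stacked state x ∈ R^{nN}

# Identity (25): (1/2) Σ_i Σ_{j∈N(i)} ‖x_i − x_j‖² = xᵀ (L ⊗ I_n) x
lhs25 = 0.5 * sum(np.sum((X[i] - X[j]) ** 2)
                  for i in range(N) for j in neighbors[i])
rhs25 = x @ np.kron(Lap, np.eye(n)) @ x
print(np.isclose(lhs25, rhs25))

# Identity (26): Σ_i [Σ_{j∈N(i)} (x_j − x_i)]ᵀ Γ [Σ_{j∈N(i)} (x_j − x_i)] = xᵀ (L² ⊗ Γ) x
lhs26 = 0.0
for i in range(N):
    s = sum(X[j] - X[i] for j in neighbors[i])
    lhs26 += s @ Gamma @ s
rhs26 = x @ np.kron(Lap @ Lap, Gamma) @ x
print(np.isclose(lhs26, rhs26))
```

Both identities rely on Σ_{j∈N(i)} ( x j − x i ) being the ith block of − ( L ⊗ I n ) x and on the mixed-product property of the Kronecker product.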

Then, using Lemma 3 and (25), it is straightforward to obtain

1 2 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T ( t ) A e i j ( t ) ≤ λ max ( A T A ) ⋅ 1 2 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T e i j = λ max ( A T A ) ⋅ x T ( L ⊗ I n ) x (27)

By Lemmas 2 and 3, Assumption 1, (25), and (26), the following inequalities can be obtained:

1 2 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T ( t ) f ˜ ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) ≤ 1 2 ( β 1 + 1 ) ⋅ 1 2 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T ( t ) e i j ( t ) + 1 2 β 2 ⋅ 1 2 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T ( t − τ ( t ) ) e i j ( t − τ ( t ) ) ≤ 1 2 ( β 1 + 1 ) ⋅ x T ( L ⊗ I n ) x + 1 2 β 2 ⋅ x T ( t − τ ( t ) ) ( L ⊗ I n ) x ( t − τ ( t ) ) (28)

and

− c ∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ] T Γ [ ∑ j ∈ N ( i ) e i j ( t − τ ( t ) ) ] ≤ c 2 λ max ( Γ Γ T ) ∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ] T [ ∑ j ∈ N ( i ) e i j ] + c 2 ⋅ ∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ( t − τ ( t ) ) ] T [ ∑ j ∈ N ( i ) e i j ( t − τ ( t ) ) ] = c 2 λ max ( Γ Γ T ) ⋅ [ x T ( L 2 ⊗ I n ) x ] + c 2 ⋅ [ x T ( t − τ ( t ) ) ( L 2 ⊗ I n ) x ( t − τ ( t ) ) ] (29)

Besides, by using Assumptions 2 and 3, (25), and (26), one obtains that

− ∑ i = 1 N [ ∑ j ∈ N ( i ) e i j ( t − τ ( t ) ) ] T P [ ∑ j ∈ N ( i ) e i j ( t − τ ( t ) ) ] ⋅ ( 1 − τ ˙ ( t ) ) = − ( 1 − τ ˙ ( t ) ) ⋅ x T ( t − τ ( t ) ) ( P L 2 ⊗ I n ) x ( t − τ ( t ) ) ≤ − ( 1 − τ ¯ ) ⋅ x T ( t − τ ( t ) ) ( P L 2 ⊗ I n ) x ( t − τ ( t ) ) (30)

and

1 4 ∑ i = 1 N ∑ j ∈ N ( i ) σ ˜ T ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) σ ˜ ( t , e i j ( t ) , e i j ( t − τ ( t ) ) ) ≤ 1 4 ∑ i = 1 N ∑ j ∈ N ( i ) [ γ 1 e i j T ( t ) e i j ( t ) + γ 2 e i j T ( t − τ ( t ) ) e i j ( t − τ ( t ) ) ] = 1 2 γ 1 ⋅ 1 2 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T ( t ) e i j ( t ) + 1 2 γ 2 ⋅ 1 2 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T ( t − τ ( t ) ) e i j ( t − τ ( t ) ) = 1 2 γ 1 ⋅ x T ( L ⊗ I n ) x + 1 2 γ 2 ⋅ x T ( t − τ ( t ) ) ( L ⊗ I n ) x ( t − τ ( t ) ) (31)

Combining the above results and substituting (26)-(31) into (24), we have

E L V ( t ) ≤ E { λ max ( A T A ) ⋅ x T ( L ⊗ I n ) x + 1 2 ( β 1 + 1 ) ⋅ x T ( L ⊗ I n ) x + 1 2 β 2 ⋅ x T ( t − τ ( t ) ) ( L ⊗ I n ) x ( t − τ ( t ) ) + c 2 λ max ( Γ Γ T ) ⋅ [ x T ( L 2 ⊗ I n ) x ] + c 2 ⋅ [ x T ( t − τ ( t ) ) ( L 2 ⊗ I n ) x ( t − τ ( t ) ) ] − ρ b x T ( L 2 ⊗ Γ ) x + x T ( P L 2 ⊗ I n ) x − ( 1 − τ ¯ ) ⋅ x T ( t − τ ( t ) ) ( P L 2 ⊗ I n ) x ( t − τ ( t ) ) + 1 2 γ 1 ⋅ x T ( L ⊗ I n ) x + 1 2 γ 2 ⋅ x T ( t − τ ( t ) ) ( L ⊗ I n ) x ( t − τ ( t ) ) }

= E { x T [ ( λ max ( A T A ) I N + 1 2 ( β 1 + 1 ) I N + 1 2 γ 1 I N + c 2 λ max ( Γ Γ T ) L + P L ) L ⊗ I n ] x + x T [ ( − ρ b L ) L ⊗ Γ ] x + x T ( t − τ ( t ) ) [ ( 1 2 β 2 I N + 1 2 γ 2 I N + c 2 L − P ( 1 − τ ¯ ) L ) L ⊗ I n ] x ( t − τ ( t ) ) } (32)

Let η = min { η 1 , η 2 , ⋯ , η n } , then we have

E L V ( t ) ≤ E { x T [ ( λ max ( A T A ) I N + 1 2 ( β 1 + 1 ) I N + 1 2 γ 1 I N + c 2 λ max ( Γ Γ T ) L + P L ) L ⊗ I n ] x + x T [ ( − ρ b η L ) L ⊗ I n ] x + x T ( t − τ ( t ) ) [ ( 1 2 β 2 I N + 1 2 γ 2 I N + c 2 L − P ( 1 − τ ¯ ) L ) L ⊗ I n ] x ( t − τ ( t ) ) }

= E { x T [ ( λ max ( A T A ) I N + 1 2 ( β 1 + 1 ) I N + 1 2 γ 1 I N + c 2 λ max ( Γ Γ T ) L + P L − ρ b η L ) L ⊗ I n ] x + x T ( t − τ ( t ) ) [ ( 1 2 β 2 I N + 1 2 γ 2 I N + c 2 L − P ( 1 − τ ¯ ) L ) L ⊗ I n ] x ( t − τ ( t ) ) } = E { − x T ( t ) ∏ 1 ( L ⊗ I n ) x ( t ) + x T ( t − τ ( t ) ) ∏ 2 ( L ⊗ I n ) x ( t − τ ( t ) ) } (33)

where

∏ 1 = ρ b η L − λ max ( A T A ) I N − 1 2 ( β 1 + 1 ) I N − 1 2 γ 1 I N − c 2 λ max ( Γ Γ T ) L − P L

∏ 2 = 1 2 β 2 I N + 1 2 γ 2 I N + c 2 L − P ( 1 − τ ¯ ) L

Note that condition (16) in Theorem 1 yields

[ 1 2 ( β 1 + β 2 + γ 1 + γ 2 + 1 ) + λ max ( A T A ) ] I N < [ ρ b η − c 2 − P τ ¯ − c 2 λ max ( Γ Γ T ) ] L

we obtain

∏ 2 − ∏ 1 = [ 1 2 ( β 1 + β 2 + γ 1 + γ 2 + 1 ) + λ max ( A T A ) ] I N − [ ρ b η − c 2 − P τ ¯ − c 2 λ max ( Γ Γ T ) ] L < 0

Namely, ∏ 2 < ∏ 1 . Therefore, from the LaSalle invariance principle [

When the delay in the stochastic complex network in (3) is a constant delay, the following Corollary 1 can be obtained.

Corollary 1: Suppose that the nonlinear function f ( ⋅ , ⋅ , ⋅ ) in the stochastic complex network (3) satisfies Assumption 1, and the noise intensity function σ ( ⋅ , ⋅ , ⋅ ) satisfies Assumption 2, then the stochastic complex network (3) will achieve synchronization in mean square under the distributed adaptive controller (4) and updating law (7), if

[ 1 2 ( β 1 + β 2 + γ 1 + γ 2 + 1 ) + λ max ( A T A ) ] I N < [ ρ b η − c 2 − c 2 λ max ( Γ Γ T ) ] L (34)

Proof: The Lyapunov function constructed at this time becomes

V ( t ) = 1 / 4 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T e i j + ρ / ( 2 ξ α ) ∑ i = 1 N ( ε i ( t ) − b ) 2 + ∑ i = 1 N ∫ t − τ t [ ∑ j ∈ N ( i ) e i j ( s ) ] T P [ ∑ j ∈ N ( i ) e i j ( s ) ] d s (35)

where b is a positive constant. The rest of the proof is similar to the proof of Theorem 1, which is omitted here.

When the stochastic complex network (3) does not contain time delay, the following simpler Corollary 2 can be obtained.

Assumption 4 [

‖ f ( t , ξ 1 ( t ) ) − f ( t , ξ 2 ( t ) ) ‖ 2 ≤ β ‖ ξ 1 ( t ) − ξ 2 ( t ) ‖ 2 (36)

Assumption 5 [

t r a c e [ ( σ ( t , ξ 1 ) − σ ( t , ξ 2 ) ) T ( σ ( t , ξ 1 ) − σ ( t , ξ 2 ) ) ] ≤ γ ( ξ 1 − ξ 2 ) T ( ξ 1 − ξ 2 ) (37)

Corollary 2: Suppose that Assumptions 4 and 5 hold; then the stochastic complex network (3) achieves synchronization in mean square under the distributed adaptive controller (4) and updating law (7), if

[ 1 2 ( β + γ + 1 ) + λ max ( A T A ) ] I N < ( ρ b η + c ) L (38)

Proof: The Lyapunov function constructed at this time becomes

V ( t ) = 1 / 4 ∑ i = 1 N ∑ j ∈ N ( i ) e i j T e i j + ρ / ( 2 ξ α ) ∑ i = 1 N ( ε i ( t ) − b ) 2 (39)

where b is a positive constant. The rest of the proof is similar to the proof of Theorem 1, and will not be detailed here.
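To complement the analysis, the delay-free setting of Corollary 2 can be explored numerically. The closed-loop sketch below combines controller (4) and updating law (7) with Bernoulli activations; every numeric value, the ring topology, and the choices of nonlinearity and noise are hypothetical, chosen only to illustrate the scheme, and the run simply checks that the sampled pairwise synchronization error decreases.

```python
import numpy as np

rng = np.random.default_rng(5)

# Delay-free closed-loop sketch in the setting of Corollary 2 (illustrative values).
N, n = 4, 2
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
Gamma = np.eye(n)
c, alpha, rho_p, xi_p = 1.0, 0.5, 0.9, 0.9        # coupling, α, E[ρ(t)], E[ξ(t)]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}

dt, steps = 1e-3, 4000
x = rng.standard_normal((N, n))
eps = np.zeros(N)                                  # adaptive gains ε_i(0) = 0

def err(x):                                        # sample pairwise sync error
    return sum(np.sum((x[i] - x[j]) ** 2) for i in range(N) for j in range(N))

e0 = err(x)
for _ in range(steps):
    rho_t = rng.random() < rho_p                   # Bernoulli ρ(t): control active?
    xi_t = rng.random() < xi_p                     # Bernoulli ξ(t): update active?
    dw = rng.standard_normal() * np.sqrt(dt)       # scalar Brownian increment
    new = np.empty_like(x)
    for i in range(N):
        s = sum(x[j] - x[i] for j in neighbors[i])
        drift = A @ x[i] + 0.1 * np.tanh(x[i]) + c * (Gamma @ s)
        if rho_t:                                  # controller (4) when active
            drift = drift + eps[i] * (Gamma @ s)
        new[i] = x[i] + drift * dt + 0.05 * x[i] * dw
        if xi_t:                                   # updating law (7) when active
            eps[i] += alpha * (s @ Gamma @ s) * dt
    x = new
print(err(x) < e0)
```

Even with the controller and gain update each active only about 90% of the time, the diffusive coupling together with the growing adaptive gains drives the disagreement toward zero in this run.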

In this paper, synchronization criteria were established for stochastic complex networks with time-varying delays and random noise under a randomly occurring control and updating law. Two Bernoulli random variables describe the random occurrence of the distributed adaptive control and of its gain update. The adaptive control and updating rule attached to each network node depend only on the state information of the node itself and of its neighbors, and distributed synchronization is achieved through feedback of the neighbors' state information. Based on Lyapunov stability theory and the LaSalle invariance principle, combined with properties of the matrix Kronecker product, stochastic differential equation theory, and other related tools, and by constructing an appropriate Lyapunov functional, sufficient conditions are obtained for the distributed synchronization of such stochastic complex networks in mean square.

This work was supported by the Fujian Provincial Department of Education Young and Middle-aged Teacher Education Research Project (JAT201031). The author would also like to thank the Editor-in-Chief, the Associate Editor, and the anonymous reviewers for their careful reading of the manuscript and constructive comments.

The authors declare no conflicts of interest regarding the publication of this paper.

Liu, X.Y. and Qiu, X.L. (2021) Distributed Synchronization of Stochastic Complex Networks with Time- Varying Delays via Randomly Occurring Control. Applied Mathematics, 12, 803-817. https://doi.org/10.4236/am.2021.129054