
First published on Monday, May 19, 2025 and last modified on Monday, May 19, 2025 by François Chaplais.

Policy Gradient Adaptive Control for the LQR: Indirect and Direct Approaches
arXiv
Published version: 10.48550/arXiv.2505.03706

Feiran Zhao, Department of Information Technology and Electrical Engineering, ETH Zürich, 8092 Zürich, Switzerland

Alessandro Chiuso, Department of Information Engineering, University of Padova, Via Gradenigo 6/b, 35131 Padova, Italy

Florian Dörfler, Department of Information Technology and Electrical Engineering, ETH Zürich, 8092 Zürich, Switzerland

Keywords: Adaptive control, linear quadratic regulator, linear system, data-driven control

Abstract

Motivated by recent advances of reinforcement learning and direct data-driven control, we propose policy gradient adaptive control (PGAC) for the linear quadratic regulator (LQR), which uses online closed-loop data to improve the control policy while maintaining stability. Our method adaptively updates the policy in feedback by descending the gradient of the LQR cost and is categorized as indirect, when gradients are computed via an estimated model, versus direct, when gradients are derived from data using sample covariance parameterization. Beyond the vanilla gradient, we also showcase the merits of the natural gradient and Gauss-Newton methods for the policy update. Notably, natural gradient descent bridges the indirect and direct PGAC, and the Gauss-Newton method of the indirect PGAC leads to an adaptive version of the celebrated Hewer’s algorithm. To account for the uncertainty from noise, we propose a regularization method for both indirect and direct PGAC. For all the considered PGAC approaches, we show closed-loop stability and convergence of the policy to the optimal LQR gain. Simulations validate our theoretical findings and demonstrate the robustness and computational efficiency of PGAC.

This work was supported by ETH Zurich and the SNF through the NCCR Automation.

1 Introduction

The history of adaptive control is almost as long as that of the entire control field [1], and it is currently being revived in the context of reinforcement learning (RL) [2]. A fundamental principle of an adaptive control system is its ability to monitor its own performance and adjust its parameters in the direction of better performance [3]. Adaptive control for linear time-invariant systems with unknown parameters is widely studied, and the manifold approaches can be divided along three orthogonal classifications. A commonly adopted one is indirect (a dynamical model is identified and then used for model-based control) versus direct (identification is bypassed) [4]. Based on control objectives, approaches are categorized as seeking stability (i.e., convergence of signals) or optimality (i.e., convergence of a performance index). Another perspective considers the policy update rule: one-shot-based methods solve an online optimization problem to obtain the policy, whereas gradient-based methods update the policy iteratively using online gradient information.

Classical adaptive control methods mainly seek robust stability, including indirect and direct model-reference adaptive control (MRAC), self-tuning regulators, and model-free adaptive control [4, 5]. In particular, MRAC designs the control input based on Lyapunov theory to guarantee stability. In comparison, other approaches focus on optimality, aligning more closely with Zames's definition of adaptive control [6], i.e., improving over the best performance achievable with prior information. A widely adopted optimality criterion is the infinite-horizon linear quadratic regulator (LQR) cost, which quantifies the \( \mathcal{H}_2\) norm of the closed-loop system [7, 8, 9, 10, 11, 12, 13, 14]. A representative instance is adaptive dynamical programming (ADP), a classical approach of RL [7, 8]. While ADP shows convergence to the optimal LQR gain, the stability of the closed-loop system remains unclear. An exception is the identification-based policy iteration method [15], which shows stability under sufficiently small identification error. Recently, there have been adaptive control methods showing both stability and convergence of the policy, based on the indirect certainty-equivalence LQR [9, 10, 11, 12, 13, 14]. These optimality-seeking methods commonly require persistently exciting (PE) inputs, without which the bursting phenomenon may occur due to insufficient excitation [16].

A representative example of one-shot-based methods is indirect adaptive LQR, where the state-feedback policy is determined as the Riccati or semi-definite programming (SDP) solution of a certainty-equivalence LQR problem [9, 14, 10, 11, 12, 13]. However, one-shot-based methods are sensitive to uncertainty, as the policy may vary significantly under large noise. Gradient-based methods are widely adopted in classical adaptive control, with different objectives used for online gradient computation [1]. For example, the well-known MIT rule optimizes the squared one-step prediction error, and MRAC includes an additional objective that reflects the dynamics of the system. Compared with one-shot-based methods, gradient-based methods are computationally more efficient, as only a single gradient must be computed instead of solving a complete optimization problem. Moreover, they are robust to noise due to smooth and incremental policy updates. However, the existing gradient-based approaches mainly focus on stability, and it remains unclear whether the LQR objective can be adopted to ensure optimality in adaptive control [2]. A major challenge is the non-convexity of the LQR cost in the state-feedback gain, which hinders convergence proofs for gradient-based methods.

Theoretical studies of the policy gradient method, a foundational cornerstone of modern RL [17], have shed light on the non-convex optimization landscape of the LQR [18, 19, 20, 17]. Specifically, the LQR cost is known to be gradient dominated in the state-feedback gain, leading to global linear convergence for policy gradient methods [18]. However, computing the policy gradient requires an exact dynamical model. While zeroth-order methods can be used for gradient estimates in the absence of the model, they are intrinsically unsuitable for adaptive control, as the gradient can be estimated only after observing multiple long trajectories [19, 21]. Even with an exact policy gradient, the sequential stability of the closed-loop system under switching policies remains unclear. In fact, proving the stability of control systems based on RL (e.g., ADP and policy gradient methods) is a difficult task [2], and the intersection of RL and adaptive control continues to be a compelling and largely unexplored direction [22].

Figure 1. Classes of policy gradient adaptive control.

Recently, there has been a growing interest in direct data-driven control inspired by subspace methods and behavioral system theory [23, 24]. With a data-based system parameterization, direct methods obtain the LQR gain directly from a single batch of PE data [25]. Regularization methods are introduced to promote certainty-equivalence or robustness [26, 27] for direct LQR design or to harness the uncertainty for the separation principle of predictive control [28]. Despite these advances, the adaptation of direct methods is acknowledged as a fundamental open problem in the comprehensive surveys [23, 2]. By proposing a sample covariance parameterization for the LQR and developing a projected gradient dominance property, our previous works [29, 21] attack the direct adaptive control problem with a gradient-based method called data-enabled policy optimization (DeePO), which has seen successful applications in power converter systems [30], aerospace control [31], and autonomous bicycle control [32]. However, the closed-loop stability of DeePO remains unclear.

In this article, we propose policy gradient adaptive control (PGAC) for the LQR, which is a unified gradient-based framework touching upon both indirect and direct approaches and seeking both stability and optimality. Starting from a batch of offline data and an initial stabilizing gain, it alternates between control, where the input consists of a state-feedback term plus probing noise ensuring persistency of excitation, and gradient-based policy update, where the policy gradient of the LQR cost is approximately computed with online closed-loop data. Our prior DeePO method [21] is a special instance: namely, direct PGAC, using the vanilla gradient of the certainty-equivalent covariance-parameterized LQR for the policy update. For indirect PGAC, we use the certainty-equivalence LQR cost with ordinary least-squares identification to compute the policy gradient. Beyond the vanilla policy gradient, we develop natural gradient and Gauss-Newton methods for PGAC. Notably, natural gradient descent bridges indirect and direct PGAC, and the Gauss-Newton method yields an adaptive variant of the celebrated Hewer’s algorithm [33], an iterative method to solve the Riccati equation and a cornerstone of ADP. Our adaptive Hewer’s algorithm coincides with the online identification-based policy iteration algorithm [15]. Since the certainty-equivalence LQR formulation disregards uncertainty in the estimated model and may lead to an unstable closed-loop system [28, 34], we propose a variance-based regularization method for both indirect and direct PGAC to account for the uncertainty. The classification of our certified PGAC methods is shown in Fig. 1.

Motivated by Zames’s perspective that adaptation and learning involve the acquisition of information about the plant [6], we adopt the signal-to-noise ratio (SNR) of the collected data as a quantitative information metric, reflecting the uncertainty in the identified dynamics. This notion satisfies Zames’s first monotonicity principle [6], as the SNR typically increases monotonically over time. For all the aforementioned PGAC approaches, we show that the optimality gap of the LQR gain is upper bounded by two terms signifying an exponential decrease in the initial optimality gap plus a bias scaling linearly with the inverse of the SNR. This aligns with Zames’s second monotonicity principle of adaptive control [6], i.e., the performance improves with increasing information. Our results capture the convergence rate of the policy as a function of the SNR, improving upon the rate established in DeePO [21]. For example, the optimality gap decreases with time \( t\) as \( \mathcal{O}(1/\sqrt{t})\) for \( \text{SNR}_t \sim \mathcal{O}(\sqrt{t})\) , which corresponds to the case of Gaussian noise and a constant excitation level.

Apart from non-asymptotic policy convergence certificates, we provide closed-loop stability guarantees of PGAC by leveraging a favorable feature of gradient-based methods, i.e., the update rate can be easily controlled via the stepsize. Specifically, we show that under a sufficiently small stepsize, the closed-loop system is sequentially stable [12] under switching policies, and the state is upper-bounded by two terms, i.e., an exponential decrease in the initial state plus an upper bound on the probing and process noise. While the one-shot-based adaptive control methods also ensure stability, they require a dwell time to mitigate the burn-in effect of switching policies and a safety mechanism in case of divergence [13, 12, 10]. We believe that our analysis techniques provide a viable solution to proving closed-loop stability in RL algorithms [2].

We perform simulations on a benchmark problem in [35] to validate our theoretical findings and demonstrate favorable computational efficiency and robustness of PGAC.

The remainder of this article is organized as follows. Section 2 recapitulates data-driven formulations of the LQR. Section 3 formulates the adaptive control problem and proposes PGAC. Section 4 proposes two variants of gradient descent for PGAC. Section 5 develops a regularization method for PGAC. Section 6 uses simulations to validate our theoretical results. Concluding remarks are made in Section 7. All proofs are provided in the Appendix.

Notation. We let \( I_n\) denote the \( n\) -by-\( n\) identity matrix. We let \( \underline{\sigma}(\cdot)\) and \( \sigma_n(\cdot)\) denote the minimum and the \( n\) -th largest singular value of a matrix, respectively. We let \( \|\cdot\|\) denote the \( 2\) -norm of a vector or matrix, and \( \|\cdot\|_F\) the Frobenius norm. We let \( A^\dagger:=A^{\top}(AA^{\top})^{-1}\) denote the right inverse of a full row rank matrix \( A\in \mathbb{R}^{n\times m}\) . We let \( \mathcal{N}(A)\) denote the nullspace of \( A\) and \( \Pi_A := I_{m}-A^{\dagger}A\) the projection operator onto \( \mathcal{N}(A)\) .

2 Data-driven formulations of the LQR

This section recapitulates indirect certainty-equivalence LQR with least-squares identification [27] and direct certainty-equivalence LQR with covariance parameterization [21].

2.1 Model-based LQR

Consider a discrete-time linear system

\[ \begin{equation} \left\{\begin{aligned} x_{t+1} & =A x_t+B u_t+w_t \\ z_t & =\begin{bmatrix} Q^{1 / 2} & 0 \\ 0 & R^{1 / 2} \end{bmatrix} \begin{bmatrix} x_t \\ u_t \end{bmatrix} \end{aligned}\right., \end{equation} \]

(1)

where \( t\in \mathbb{N}\) , \( x_t\in\mathbb{R}^{n}\) is the state, \( u_t\in\mathbb{R}^{m}\) is the control input, \( w_t \in \mathbb{R}^n\) is the noise, and \( z_t\) is the performance signal of interest. We assume that \( (A,B)\) are controllable and the weighting matrices \( (Q, R)\) are positive definite.

The LQR problem is phrased as finding a state-feedback gain \( K\in \mathbb{R}^{m\times n}\) that minimizes the \( \mathcal{H}_2\) -norm of the transfer function \( \mathscr{T}(K):w \rightarrow z\) of the closed-loop system

\[ \begin{equation} \begin{bmatrix} x_{t+1} \\ z_t \end{bmatrix}=\begin{bmatrix}[c|c] A+BK & I_n \\ \hline \begin{bmatrix} Q^{1 / 2} \\ R^{1 / 2} K \end{bmatrix} & 0 \end{bmatrix}\begin{bmatrix} x_t \\ w_t \end{bmatrix}. \end{equation} \]

(2)

When \( A+BK\) is stable, it holds that [36]

\[ \begin{equation} \|\mathscr{T}(K)\|_2^2 = \text{Tr}((Q+K^{\top}RK)\Sigma)=:C(K), \end{equation} \]

(3)

where \( \Sigma\) is the closed-loop state covariance matrix obtained as the positive definite solution to the Lyapunov equation

\[ \begin{equation} \Sigma = I_n + \left(A+BK\right)\Sigma \left(A+BK\right)^{\top}. \end{equation} \]

(4)

We refer to \( C(K)\) as the LQR cost and to (3)-(4) as a policy parameterization of the LQR.
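As a numerical illustration (not part of the article), the policy parameterization (3)-(4) can be evaluated by solving the Lyapunov equation (4) with SciPy. The two-state system below is hypothetical; all numbers are for illustration only.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical two-state, single-input system used only for illustration
A = np.array([[0.9, 0.3], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

def lqr_cost(K):
    """C(K) = Tr((Q + K'RK) Sigma), with Sigma from the Lyapunov equation (4)."""
    Ak = A + B @ K
    assert np.max(np.abs(np.linalg.eigvals(Ak))) < 1, "K must be stabilizing"
    # Sigma = I + Ak Sigma Ak^T, i.e., a discrete Lyapunov equation
    Sigma = solve_discrete_lyapunov(Ak, np.eye(2))
    return np.trace((Q + K.T @ R @ K) @ Sigma)

c0 = lqr_cost(np.zeros((1, 2)))  # K = 0 is stabilizing since A is Schur stable
```

Since \( \Sigma \succeq I_n\) and \( Q = I_2\) here, the cost of any stabilizing gain exceeds \( \text{Tr}(I_2) = 2\) .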

For known \( (A,B)\) , there are alternative formulations to find the optimal LQR gain \( K^*:=\arg\min_{K}C(K)\) of (3)-(4), e.g., via the celebrated Riccati equation [36]. We recall a well-known iterative algorithm to solve the Riccati equation, called Hewer’s algorithm [33] (also known as policy iteration in reinforcement learning (RL) [7]), which starts from an initial stabilizing policy \( K_0\) and alternates between

\[ \begin{align} &P_{i+1} = Q + K_i^{\top}RK_i +(A+ BK_i)^{\top}P_{i+1}(A + BK_i), \end{align} \]

(5.a)

\[ \begin{align} &K_{i+1} = -(R + B^{\top}P_{i+1}B)^{-1}B^{\top}P_{i+1}A, \end{align} \]

(5.b)

where (5.a) is the policy evaluation step and (5.b) is the policy improvement step. Hewer’s algorithm has asymptotic convergence guarantees to the solution of the Riccati equation and the optimal policy \( K^*\) , which are fixed points of (5.a)-(5.b) [33].
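A compact sketch of Hewer's iteration (5.a)-(5.b) in Python/SciPy is given below; the system is hypothetical, and under the convention \( u_t = Kx_t\) (closed loop \( A+BK\) ) the improvement step carries a minus sign. The fixed point is checked against the direct Riccati solution.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

# Hypothetical system (illustration only)
A = np.array([[0.9, 0.3], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

K = np.zeros((1, 2))  # initial stabilizing policy: A itself is Schur stable
for _ in range(30):
    Ak = A + B @ K
    # (5.a) policy evaluation: P = Q + K'RK + (A+BK)' P (A+BK)
    P = solve_discrete_lyapunov(Ak.T, Q + K.T @ R @ K)
    # (5.b) policy improvement (minus sign for the u = Kx convention)
    K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Reference: solution of the Riccati equation
P_are = solve_discrete_are(A, B, Q, R)
K_are = -np.linalg.solve(R + B.T @ P_are @ B, B.T @ P_are @ A)
```

Policy iteration converges quadratically, so 30 iterations are far more than enough on this small example.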

For unknown \( (A,B)\) , there is a plethora of data-driven methods to find \( K^*\) , some of which we recapitulate below.

2.2 Indirect certainty-equivalence LQR with ordinary least-squares identification

The conventional approach to data-driven LQR design follows the certainty-equivalence principle: it first identifies nominal system matrices \( (A,B)\) from data, and then solves the LQR problem treating the identified model as the ground truth. The identification step is based on subspace relations among the state-space data. Consider a \( t\) -long time series of states, inputs, unknown noises, and successor states

\[ \begin{equation} \begin{aligned} X_{0} &:= \begin{bmatrix} x_0& x_1& \dots& x_{t-1} \end{bmatrix}\in \mathbb{R}^{n\times t},\\ U_{0} &:= \begin{bmatrix} u_0& u_1& \dots& u_{t-1} \end{bmatrix}\in \mathbb{R}^{m\times t}, \\ W_{0} &:= \begin{bmatrix} w_0& w_1& \dots& w_{t-1} \end{bmatrix}\in \mathbb{R}^{n\times t}, \\ X_{1} &:= \begin{bmatrix} x_1& x_2& \dots& x_t \end{bmatrix}\in \mathbb{R}^{n\times t}, \end{aligned} \end{equation} \]

(6)

which satisfy the system dynamics

\[ \begin{equation} X_1 = AX_0+ BU_0 + W_0. \end{equation} \]

(7)

We assume that the data is persistently exciting (PE) [37], i.e., the block matrix of input and state data

\[ D_0 := \begin{bmatrix} U_0 \\ X_0 \end{bmatrix}\in \mathbb{R}^{(m+n)\times t} \]

has full row rank

\[ \begin{equation} \text{rank}(D_0) = m+n, \end{equation} \]

(8)

which is necessary for data-driven LQR design [38].

Based on the subspace relations (7) and the rank condition (8), an estimated model \( (\hat{A},\hat{B})\) can be obtained as the unique solution to the ordinary least-squares problem

\[ \begin{equation} [\hat{B}, \hat{A}] = \underset{B, A}{\arg \min }\left\|X_1-[B,A] D_0\right\|_F = X_1D_0^{\dagger}. \end{equation} \]

(9)
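As an illustration, the least-squares estimate (9) can be formed from simulated data via the Moore-Penrose pseudoinverse. The system, input, and noise level below are hypothetical, chosen only to show the mechanics in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical true system; small noise level for illustration
A = np.array([[0.9, 0.3], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
n, m, t = 2, 1, 200

x = np.zeros(n)
X0, U0, X1 = [], [], []
for _ in range(t):
    u = rng.normal(size=m)                 # random input ensures PE
    X0.append(x); U0.append(u)
    x = A @ x + B @ u + 0.01 * rng.normal(size=n)
    X1.append(x)
X0, U0, X1 = np.array(X0).T, np.array(U0).T, np.array(X1).T
D0 = np.vstack([U0, X0])                   # data matrix of the rank condition (8)

# Ordinary least squares (9): [B_hat, A_hat] = X1 D0^dagger
BA_hat = X1 @ np.linalg.pinv(D0)
B_hat, A_hat = BA_hat[:, :m], BA_hat[:, m:]
```

With \( t = 200\) samples and small noise, the estimates are close to the true matrices.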

Following the certainty-equivalence principle [26], the system \( (A,B)\) is replaced with its estimate \( (\hat{A},\hat{B})\) in (3)-(4), and the LQR problem can be reformulated as

\[ \begin{equation} \underset{K}{\text{minimize}} ~ \text{Tr}\left((Q+K^{\top}RK)\Sigma\right), \end{equation} \]

(10)

where \( \Sigma\) is the positive definite solution to

\[ \begin{equation} \Sigma = I_n + \left(\hat{A} + \hat{B}K\right)\Sigma \left(\hat{A} + \hat{B}K\right)^{\top}. \end{equation} \]

(11)

The problem formulation (10)-(11) is termed indirect certainty-equivalence LQR design [27].

2.3 Direct certainty-equivalence LQR with covariance parameterization

Instead of estimating a dynamical model (9), the direct data-driven LQR design aims to find \( K^*\) directly from a batch of PE data [25, 26, 21]. For this purpose, our previous work [21] proposes a policy parameterization based on the sample covariance of input-state data

\[ \begin{equation} \Phi : = \frac{1}{t}D_0D_0^{\top}= \begin{bmatrix} U_0D_0^{\top}/t \\ X_0D_0^{\top}/t \end{bmatrix} =\begin{bmatrix} \overline{U}_0 \\ \overline{X}_0 \end{bmatrix}. \end{equation} \]

(12)

Under the PE rank condition (8), the sample covariance \( \Phi\) is positive definite, and there exists a unique solution \( V\in \mathbb{R}^{(n+m)\times n}\) to

\[ \begin{equation} \begin{bmatrix} K \\ I_n \end{bmatrix}= \Phi V \end{equation} \]

(13)

for any given \( K\) . We refer to (13) as the covariance parameterization of the policy. Note that the dimension of the parameterized policy \( V\) does not scale with data length \( t\) .

For brevity, define the data matrices \( \overline{W}_0= W_0D_0^{\top}/t\) and \( \overline{X}_1= X_1D_0^{\top}/t\) , whose dimensions do not depend on \( t\) . Then, the closed-loop matrix can be written as

\[ \begin{equation} A+BK=[B,A]\begin{bmatrix} K \\ I_n \end{bmatrix}\overset{(13)}{=}[B,A]\Phi V\overset{(7)}{=}(\overline{X}_1 - \overline{W}_0)V. \end{equation} \]

(14)

Following the certainty-equivalence principle, we disregard the uncertainty \( \overline{W}_0\) in (14). Substituting \( A+BK\) with \( \overline{X}_1V\) in (3)-(4) and together with (13), the LQR problem becomes

\[ \begin{equation} \begin{aligned} & \underset{V}{\text{minimize}}~J(V) :=\text{Tr}\left((Q+V^{\top}\overline{U}_0^{\top}R\overline{U}_0V)\Sigma\right)\\ &\text{subject to}~ ~\overline{X}_0V= I_n \end{aligned} \end{equation} \]

(15)

with the gain matrix \( K = \overline{U}_0V\) , where \( \Sigma\) is the positive definite solution to the Lyapunov equation

\[ \begin{equation} \Sigma = I_n + \overline{X}_1V\Sigma V^{\top}\overline{X}_1^{\top}. \end{equation} \]

(16)

The problem formulation (15)-(16) is termed direct certainty-equivalence LQR design, and its solution coincides with that of the indirect design (10)-(11) [21].
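The covariance parameterization (12)-(13) is easy to verify numerically: for any gain \( K\) , solving \( \Phi V = [K^{\top},\, I_n]^{\top}\) recovers \( K = \overline{U}_0 V\) and \( \overline{X}_0 V = I_n\) . A small NumPy sketch with hypothetical stand-in data (only the PE rank condition matters here):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, t = 2, 1, 100
# Stand-in input-state data; generic random data satisfies the PE rank condition
U0 = rng.normal(size=(m, t))
X0 = rng.normal(size=(n, t))
D0 = np.vstack([U0, X0])
Phi = D0 @ D0.T / t               # sample covariance (12), pos. def. under PE

Ubar0, Xbar0 = Phi[:m, :], Phi[m:, :]
K = np.array([[0.1, -0.2]])       # an arbitrary gain

# Covariance parameterization (13): [K; I_n] = Phi V
V = np.linalg.solve(Phi, np.vstack([K, np.eye(n)]))
```

By construction \( \overline{U}_0 V = K\) and \( \overline{X}_0 V = I_n\) , matching the constraint in (15), and the dimension of \( V\) is \( (n+m)\times n\) regardless of the data length \( t\) .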

3 Policy gradient adaptive control

In this section, we formulate the adaptive control problem and propose policy gradient adaptive control with convergence and stability guarantees.

3.1 Problem statement

We focus on the following adaptive control problem.

Problem 1

Design a gradient-based method leveraging online closed-loop data such that the control policy converges to the optimal LQR gain, while ensuring the stability of the closed-loop system.

Problem 1 concerns both stability and optimality, which is in line with the definition of adaptive control by Zames [6], i.e., improving over the best performance achievable with prior information. Throughout the article, we make the following assumptions.

Assumption 1

The covariance of the process noise \( w_t\) is upper-bounded, i.e., \( \|W_{0,t}W_{0,t}^{\top}/t\| \leq \delta_t^2\) for some \( \delta_t \geq 0\) .

Assumption 2

The control input is persistently exciting such that \( \underline{\sigma}(\Phi_t) \geq \gamma_t^2 \) for some \( \gamma_t > 0\) .

Assumption 1 is standard in data-driven control [25, 26, 38]. Note that the noise does not necessarily follow any particular statistics and can even be adversarial and correlated. Assumption 2 is a quantitative condition of persistency of excitation and implies the rank condition (8). It can be satisfied by adding probing noise to the control input and holds under another PE definition [39, Definition 2], which can be verified by examining solely the inputs. Notice again that the PE assumption is universal in adaptive control [8, 7].

By Zames [6], adaptive control involves the acquisition of information about the plant. Define the signal-to-noise ratio (SNR) of data at time \( t\) as

\[ \text{SNR}_t := \frac{\gamma_t}{\delta_t}, \]

which describes the ratio between the useful information \( D_{0,t}\) and the noise \( W_{0,t}\) [26, 27]. We adopt the SNR as an information metric, as the model uncertainty scales inversely with \( \text{SNR}_t\) .

Lemma 1

Consider the identification problem (9) with data \( (X_{0,t}, U_{0,t}, X_{1,t})\) and denote the model estimate as \( (\hat{A}_t, \hat{B}_t)\) . Then, it holds that

\[ \left\|[\hat{B}_t, \hat{A}_t] - [B, A] \right\| \leq \frac{1}{ \text{SNR}_t}. \]

This information metric satisfies Zames’s first monotonicity principle for adaptation and learning [6], as \( \text{SNR}_t\) is usually a monotone increasing function of time.
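Lemma 1 can be checked empirically: by (7) and (9), the estimation error is exactly \( [\hat{B}_t, \hat{A}_t] - [B, A] = W_{0,t}D_{0,t}^{\dagger}\) , whose spectral norm is bounded by \( \delta_t/\gamma_t\) . The sketch below (hypothetical system and noise, NumPy) uses the tightest admissible \( \delta_t\) and \( \gamma_t\) from Assumptions 1 and 2.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical system and noise (illustration only)
A = np.array([[0.9, 0.3], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
n, m, t = 2, 1, 150

x = np.zeros(n)
X0, U0, W0 = [], [], []
for _ in range(t):
    u = rng.normal(size=m)
    w = 0.05 * rng.normal(size=n)
    X0.append(x); U0.append(u); W0.append(w)
    x = A @ x + B @ u + w
X0, U0, W0 = np.array(X0).T, np.array(U0).T, np.array(W0).T
D0 = np.vstack([U0, X0])
Phi = D0 @ D0.T / t

# Estimation error equals W0 D0^dagger, since D0 D0^dagger = I under PE
err = np.linalg.norm(W0 @ np.linalg.pinv(D0), 2)
delta = np.sqrt(np.linalg.norm(W0 @ W0.T / t, 2))  # tightest delta_t (Asm. 1)
gamma = np.sqrt(np.linalg.eigvalsh(Phi)[0])        # tightest gamma_t (Asm. 2)
snr = gamma / delta
```

The bound follows from \( \|W_0 D_0^{\dagger}\| \leq \|W_0\|/\underline{\sigma}(D_0) = \sqrt{t}\,\delta_t / (\sqrt{t}\,\gamma_t)\) .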

Remark 1

We discuss the information \( \text{SNR}_t\) under different noise and excitation settings that are widely adopted in adaptive control [11, 21, 10]. For time-uniformly bounded noise \( \delta_t \sim \mathcal{O}(1)\) and constant excitation \( \gamma_t \sim \mathcal{O}(1)\) , we have constant \( \text{SNR}_t \sim \mathcal{O}(1)\) . For i.i.d. Gaussian noise, it holds that \( \delta_t \sim \mathcal{O}(1/\sqrt{t})\) with high probability [10]. In this case, we have \( \text{SNR}_t \sim \mathcal{O}(\sqrt{t})\) for constant excitation \( \gamma_t \sim \mathcal{O}(1)\) and \( \text{SNR}_t \sim \mathcal{O}(t^{\frac{1}{4}})\) for diminishing excitation \( \gamma_t \sim \mathcal{O}(t^{-\frac{1}{4}})\) .

3.2 The policy gradient adaptive control framework

To solve Problem 1, we propose a policy gradient adaptive control (PGAC) framework alternating between control and gradient-based policy update. Specifically, the control input is in the form of \( u_t = K_t x_t + e_t\) , where the state-feedback gain \( K_t\) is the parameterized policy at time \( t\) , and \( e_t\) is a probing noise ensuring the PE condition. We assume that the initial gain \( K_{t_0}\) is stabilizing, which is common in adaptive control [9, 10, 11, 12, 13, 14]. Define \( \mathcal{S}:=\{K\in \mathbb{R}^{m\times n}|\rho(A+BK)<1\}\) as the set of stabilizing gains.

Assumption 3

The initial gain is stabilizing, i.e., \( K_{t_0} \in \mathcal{S}\) .

We let Assumption 3 hold in the rest of the article. The policy \( K_t\) is updated over time with gradient methods of the LQR cost. When \( (A,B)\) are known, the policy iterates as

\[ \begin{equation} K_{t+1} = K_t - \eta \nabla C(K_t), \end{equation} \]

(17)

where \( \nabla C(K_t)\) is the exact policy gradient. The closed-form expression of \( \nabla C(K)\) is given below.

Lemma 2 ({[18, Lemma 1]})

For any \( K\in\mathcal{S}\) , the gradient of \( C(K)\) is given by

\[ \begin{equation} \nabla C(K) = 2\left(\left(R+{B}^{\top} P {B}\right) K+{B}^{\top} P {A}\right) {\Sigma}, \end{equation} \]

(18)

where \( {\Sigma}\) satisfies (4), and \( P\) is the positive definite solution to the Lyapunov equation

\[ \begin{equation} P = Q + K^{\top}RK + ({A}+{B}K)^{\top}P ({A}+{B}K). \end{equation} \]

(19)

By Lemma 2, given knowledge of \( (A, B)\) , the policy gradient of the LQR cost can be computed by solving the two Lyapunov equations (4) and (19). While the cost \( C(K)\) is non-convex, it satisfies favorable properties such as gradient dominance and local smoothness, leading to global linear convergence of the gradient descent (17) [18, 40].
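The two-Lyapunov-equation recipe can be validated against a finite-difference quotient of \( C(K)\) . The following sketch (NumPy/SciPy, hypothetical system) implements (4), (18), and (19) and compares the exact gradient with central differences:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical system (illustration only)
A = np.array([[0.9, 0.3], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

def lqr_cost(K):
    Ak = A + B @ K
    Sigma = solve_discrete_lyapunov(Ak, np.eye(2))            # (4)
    return np.trace((Q + K.T @ R @ K) @ Sigma)                # (3)

def lqr_grad(K):
    """Exact policy gradient (18) from the two Lyapunov equations (4), (19)."""
    Ak = A + B @ K
    Sigma = solve_discrete_lyapunov(Ak, np.eye(2))            # (4)
    P = solve_discrete_lyapunov(Ak.T, Q + K.T @ R @ K)        # (19)
    return 2 * ((R + B.T @ P @ B) @ K + B.T @ P @ A) @ Sigma  # (18)

K = np.array([[0.05, -0.1]])   # an arbitrary stabilizing gain
G = lqr_grad(K)

# Central finite-difference check of the gradient, entry by entry
eps = 1e-6
G_fd = np.zeros_like(K)
for i in range(K.shape[0]):
    for j in range(K.shape[1]):
        E = np.zeros_like(K)
        E[i, j] = eps
        G_fd[i, j] = (lqr_cost(K + E) - lqr_cost(K - E)) / (2 * eps)
```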

Lemma 3 (Gradient dominance)

For \( K \in \mathcal{S}\) , it holds that

\[ C(K)- C(K^*) \leq \mu \| \nabla C(K)\|_F^2, \]

where \( \mu := \|\Sigma^*\|/\underline{\sigma}(R) \) is the gradient dominance constant, and \( \Sigma^*\) is the closed-loop covariance matrix (4) with \( K^*\) .

Lemma 4 ({Local smoothness})

Define the sublevel set \( \mathcal{S}(a):=\{K\in \mathcal{S}|C(K)\leq a\}\) . For any \( K,K' \in \mathcal{S}(a)\) satisfying \( K+b(K'-K) \in \mathcal{S}(a), \forall b \in [0,1]\) , it holds that

\[ C(K') \leq C(K) + \langle \nabla C(K), K' - K \rangle + {l(a)}\|K'-K\|^2/2, \]

where the smoothness constant \( l(a)\) is a polynomial in \( a\) .

Even when all the gains are stabilizing, i.e., \( K_t \in \mathcal{S}\) for all \( t \geq t_0\) , the stability of the closed-loop system is unclear due to the switches of the policies. Moreover, since \( (A, B)\) are unknown in the adaptive control setting, the policy gradient can only be approximated using online closed-loop data, and the corresponding convergence remains unclear.

Depending on how the policy gradient is approximated, we categorize PGAC as indirect, when the gradient is computed with an estimated model (9), versus direct, when the gradient is directly computed from data with the covariance parameterization (13)-(15). In the sequel, we propose indirect and direct PGAC methods with convergence and stability guarantees.

3.3 Indirect policy gradient adaptive control


Algorithm 1 Indirect Policy Gradient Adaptive Control
1. Require: Offline data \( (X_{0,t_0}, U_{0,t_0}, X_{1,t_0})\) , an initial stabilizing policy \( K_{t_0}\) , and a stepsize \( \eta\) .
2. for \( t=t_0,t_0+1,\dots\) do
3. Apply \( u_t = K_tx_t + e_t\) and observe \( x_{t+1}\) .
4. Estimate the model via recursive least-squares (21).
5. Perform one-step policy gradient descent (20), where \( \nabla \hat{C}_{t+1}(K_t)\) is the policy gradient (18) computed with the estimated model \( (\hat{A}_{t+1},\hat{B}_{t+1})\) .
6. end for

\[ \begin{equation} K_{t+1} = K_t - \eta \nabla \hat{C}_{t+1}(K_t), \end{equation} \]

(20)

Indirect PGAC uses the least-squares estimates \( (\hat{A},\hat{B})\) in (9) to approximately compute the policy gradient. The details are presented in Algorithm 1. We require an initial stabilizing gain and a batch of offline PE data. In the online stage, the control input contains a probing noise \( e_t\) to ensure Assumption 2. After observing the new state, we use recursive least-squares to identify an updated model estimate. Let \( \phi_t = [u_t^{\top},x_t^{\top}]^{\top}\) . Then, the recursive least-squares iteration is

\[ \begin{equation} [\hat{B}_{t+1}, \hat{A}_{t+1}] = [\hat{B}_{t}, \hat{A}_{t}] + (x_{t+1} - [\hat{B}_{t}, \hat{A}_{t}]\phi_t ) \frac{\phi_t^{\top} \Phi_t^{-1} }{t+ \phi_t^{\top}\Phi_t^{-1}\phi_t}, \end{equation} \]

(21)

which is an efficient rank-one update. Then, we perform one-step policy gradient descent with stepsize \( \eta\) , where the gradient is computed with the estimated model.
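The recursion (21) can be cross-checked against a full batch re-solve of (9) on hypothetical data: both give the same estimate, but the recursive form costs only a rank-one update. A NumPy sketch (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical system and data (illustration only)
A = np.array([[0.9, 0.3], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
n, m = 2, 1

# Offline stage: t = 50 PE samples and the initial OLS estimate
x = np.zeros(n)
cols_D, cols_X1 = [], []
for _ in range(50):
    u = rng.normal(size=m)
    cols_D.append(np.concatenate([u, x]))   # phi = [u; x]
    x = A @ x + B @ u + 0.01 * rng.normal(size=n)
    cols_X1.append(x)
D, X1 = np.array(cols_D).T, np.array(cols_X1).T
theta = X1 @ np.linalg.pinv(D)              # [B_hat, A_hat] from (9)

# Online stage: one new sample, absorbed by the rank-one recursion (21)
t = D.shape[1]
Phi = D @ D.T / t
u = rng.normal(size=m)
phi = np.concatenate([u, x])
x_next = A @ x + B @ u + 0.01 * rng.normal(size=n)
g = np.linalg.solve(Phi, phi)               # Phi^{-1} phi
theta_rls = theta + np.outer(x_next - theta @ phi, g) / (t + phi @ g)

# Sanity check: batch OLS recomputed with the augmented data
theta_batch = np.hstack([X1, x_next[:, None]]) @ np.linalg.pinv(
    np.hstack([D, phi[:, None]]))
```

The agreement is exact up to floating-point error, since (21) is the Sherman-Morrison form of the augmented least-squares solution.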

Lemma 1 will play an important role in quantifying the effects of replacing the exact policy gradient with the approximated policy gradient in (20). Together with Lemmas 3 and 4, we can show the convergence of \( K_t\) in Algorithm 1.

Apart from the convergence, we need to show stability of the closed-loop system. Since \( K_t\) is updated over time, we resort to stability analysis of switching or time-varying systems [12].

Definition 1 (Strong stability)

A policy \( K\) is \( (\kappa, \alpha)\) -strongly stable if there exist constants \( \kappa \geq 1\) , \( 0< \alpha \leq 1\) , and matrices \( H \succ 0\) and \( L\) , such that \( A +BK = HLH^{-1}\) with \( \|L\|\leq 1-\alpha\) , \( \|H\|\|H^{-1}\| \leq \kappa\) , and \( \|K\|\leq \kappa\) .

It is known that a policy \( K \in \mathcal{S}\) if and only if \( K\) is \( (\kappa, \alpha)\) -strongly stable for some \( \kappa\) and \( \alpha\)  [12]. Definition 1 is quantitative and will lead to explicit bounds on the state. Since a sequence of policies \( \{K_t\}\) is applied to the system, we require a stronger notion of stability under switching policies [12].

Definition 2 (Sequential stability)

A sequence of policies \( \{K_t\}\) for the system (1) is sequentially stable if there exist constants \( \kappa \geq 1\) , \( 0< \alpha \leq 1\) , and matrices \( \{H_t\succ 0\}\) and \( \{L_t\}, \forall t\) , such that \( A+BK_t = H_tL_tH_t^{-1}\) , and

  1. \( \|L_t\|\leq 1 - \alpha\) and \( \|K_t\|\leq \kappa\) ;
  2. \( \|H_t\|\leq \kappa\) and \( \|H_t^{-1}\|\leq 1\) ;
  3. \( \|H_{t+1}^{-1}H_t\| \leq 1 + \alpha/2\) .

With sequential stability, we can show boundedness of the state [12]. By Definition 2, sequential stability holds if \( \{K_t\}\) is strongly stable uniformly in time, and the change in two consecutive policies is sufficiently small. In PGAC, this can be achieved by carefully controlling the stepsize \( \eta\) . In the following theorem, we show that Algorithm 1 achieves both stability and optimality.

Theorem 1

There exist constants \( \nu_i>0,i\in\{1,2,\dots,5\}\) with \( \nu_4 <1\) depending on \( (A,B,Q,R,K_{t_0})\) , such that, if \( \text{SNR}_t \geq \nu_1\) for all \( t\) and \( \eta \leq \min\{\nu_2, 2\mu\}\) , then the sequence \( \{K_t\}\) is sequentially stable, and the state is bounded as

\[ \begin{equation} \|x_t\| \leq \nu_3\left(1-\frac{\nu_4}{2}\right)^{t-t_0}\|x_{t_0}\|+\frac{2 \nu_3}{\nu_4} \max_{t_0 \leq i<t}\|Be_i + w_i\|. \end{equation} \]

(22)

Moreover, the policy \( \{K_t\}\) converges in the sense that

\[ \begin{equation} \begin{aligned} C(K_{t}) - C^* &\leq \left(1-\frac{\eta}{2\mu}\right)^{t-t_0} (C(K_{t_0}) - C^*) \\ &+ \eta \nu_5\sum_{i = t_0}^{t}\left(1-\frac{\eta}{2\mu}\right)^{t-i} \frac{1}{\text{SNR}_i}. \end{aligned} \end{equation} \]

(23)

We make several remarks on Theorem 1. First, under a sufficiently large SNR and a small stepsize, the system is sequentially stable under switches of \( K_t\) . The state is explicitly bounded by two terms, i.e., an exponential decrease in the initial state plus an upper bound on the probing and process noise. In comparison, the stability of the adaptive dynamic programming (ADP) approach [7, 8] has not been certified and remains unclear. While one-shot-based adaptive control methods ensure stability, they require a dwell time to mitigate the burn-in effect of switching policies and a safety mechanism in case of divergence [13, 12, 10].

Second, the optimality gap of the LQR gain is upper bounded by two terms signifying an exponential decrease in the initial optimality gap plus a bias scaling inversely with the SNR. This aligns with Zames’s second monotonicity principle [6], i.e., the performance increases monotonically with the information metric. Our results capture the convergence rate of the policy for the different SNR functions in Remark 1. For example, the optimality gap decreases as \( \mathcal{O}(1/\sqrt{t})\) for \( \text{SNR}_t \sim \mathcal{O}(\sqrt{t})\) , which corresponds to the case of Gaussian noise and a constant excitation level, and as \( \mathcal{O}(t^{-\frac{1}{4}})\) for the diminishing excitation case with \( \text{SNR}_t \sim \mathcal{O}(t^{\frac{1}{4}})\) . Notice that the corresponding known optimal convergence rates are \( \mathcal{O}({1}/{t})\) and \( \mathcal{O}({1}/\sqrt{t})\) , respectively, obtained by one-shot-based adaptive control [11]. The latter requires a more conservative SNR condition such that the perturbation of the Riccati equation has a quadratic local approximation [9]. Nevertheless, we observe that Algorithm 1 achieves the optimal convergence rate in simulations.
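The \( \mathcal{O}(1/\sqrt{t})\) rate claimed for \( \text{SNR}_t \sim \mathcal{O}(\sqrt{t})\) can be checked by evaluating the right-hand side of (23) directly. The constants \( \eta\) , \( \mu\) , \( \nu_5\) and the initial gap below are arbitrary placeholders, not values from the analysis:

```python
import numpy as np

# Evaluate the right-hand side of (23) with SNR_i = sqrt(i). The constants
# eta, mu, nu5 and the initial gap g0 are hypothetical placeholders.
eta, mu, nu5, g0, t0 = 0.02, 1.0, 1.0, 10.0, 1
rho = 1.0 - eta / (2.0 * mu)                   # contraction factor (here 0.99)

def bound(t):
    """Right-hand side of (23) for SNR_i = sqrt(i)."""
    i = np.arange(t0, t + 1)
    return rho ** (t - t0) * g0 + eta * nu5 * np.sum(rho ** (t - i) / np.sqrt(i))

g1000, g4000 = bound(1000), bound(4000)
ratio = g4000 / g1000                          # close to sqrt(1000 / 4000) = 0.5
```

Quadrupling the horizon roughly halves the bound, consistent with the \( \mathcal{O}(1/\sqrt{t})\) scaling.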

3.4 Direct policy gradient adaptive control

Instead of computing the policy gradient with an estimated model, direct PGAC updates the policy based on the sample covariance parameterization (13)-(15). The details are presented in Algorithm 2, which is also known as data-enabled policy optimization (DeePO) [21]. In the online stage, we use the sample covariance to parameterize the policy (27) and perform one-step projected gradient descent on the parameterized policy (28). Define \( \mathcal{V} := \{V\in \mathbb{R}^{(n+m)\times n} \,|\, \rho(\overline{X}_1V)<1 \}\) as the feasible set of the covariance-parameterized LQR (15). The gradient and projection expressions are given below [21].

Lemma 5

For \( V \in \mathcal{V}\) , the gradient of \( J(V)\) is

\[ \begin{equation} \nabla J(V) = 2 \left(\overline{U}_{0}^{\top}R\overline{U}_{0}+\overline{X}_1^{\top}P\overline{X}_{1}\right)V \Sigma, \end{equation} \]

(24)

where \( \Sigma\) is the solution to (16), and \( P\) is the positive definite solution to the Lyapunov equation

\[ \begin{equation} P = Q + V^{\top}\overline{U}_{0}^{\top}R\overline{U}_{0}V + V^{\top}\overline{X}_{1}^{\top}P\overline{X}_{1}V. \end{equation} \]

(25)

Further, we define the projection \( \Pi_{\overline{X}_{0}}: = I_{n+m}-\overline{X}_{0}^{\dagger}\overline{X}_{0}\) onto the constraint \( \overline{X}_0V= I_n\) in (15).

By Lemma 5, given data \( (X_0, U_0, X_1)\) , the policy gradient of the LQR cost can be computed by solving the two Lyapunov equations (16) and (25). The projection is adopted to ensure the subspace constraint in (15). The stepsize \( \eta_t\) is a variable to be specified later. As with indirect PGAC, Algorithm 2 also has an efficient recursive implementation [21].
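The role of the projection \( \Pi_{\overline{X}_0}\) admits a quick numerical sanity check: since \( \overline{X}_0\Pi_{\overline{X}_0} = 0\) , any step along a projected direction preserves the subspace constraint \( \overline{X}_0 V = I_n\) in (15). A minimal sketch with random stand-in data (dimensions and matrices hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
X0 = rng.standard_normal((n, n + m))           # stand-in for \bar X_0 (full row rank a.s.)
Pi = np.eye(n + m) - np.linalg.pinv(X0) @ X0   # projection onto the null space of X0

V = np.linalg.pinv(X0)                         # one feasible V with X0 V = I_n
G = rng.standard_normal((n + m, n))            # stand-in for the gradient of J
V_next = V - 0.1 * Pi @ G                      # projected gradient step as in (28)
```

The projection is idempotent, and the constraint is satisfied exactly after the step, which is why (28) needs no explicit re-projection onto the affine set.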

As a novel result beyond [21], we show that the direct policy update (27)-(29) via projected gradient and the indirect counterpart (20) are equivalent up to a scaling matrix.

Lemma 6

The policy update (27)-(29) is equivalent to

\[ \begin{equation} K_{t+1} = K_t - \eta_t M_{t+1} \nabla \hat{C}_{t+1}(K_t), \end{equation} \]

(26)

where \( M_t:= \overline{U}_{0,t} \Pi_{\overline{X}_{0,t}} \overline{U}_{0,t}^{\top}\) is a positive definite matrix depending only on data. Moreover, the minimal singular value of \( M_t\) satisfies \( \underline{\sigma}(M_t) \geq {\gamma_t^4}\) .

Lemma 6 enables us to leverage the previous results of indirect PGAC for the analysis of Algorithm 2.


Algorithm 2 Direct Policy Gradient Adaptive Control
1.Require: Offline data \( (X_{0,t_0}, U_{0,t_0}, X_{1,t_0})\) , an initial stabilizing policy \( K_{t_0}\) , and stepsizes \( \eta_t\) .
2.for \( t=t_0,t_0+1,\dots\) do
3.Apply \( u_t = K_tx_t + e_t\) and observe \( x_{t+1}\) .
4.Given \( K_{t}\) , solve \( V_{t+1}\) via (27)
5.Perform one-step projected gradient descent (28) where the gradient \( \nabla J_{t+1}(V_{t+1})\) is given by Lemma 5.
6.Update the control gain by (29)
7.end for

\[ \begin{equation} V_{t+1} =\Phi_{t+1}^{-1} \begin{bmatrix} K_{t} \\ I_n \end{bmatrix}. \end{equation} \]

(27)

\[ \begin{equation} V_{t+1}' = V_{t+1} - \eta_{t+1} \Pi_{\overline{X}_{0,t+1}} \nabla J_{t+1}(V_{t+1}), \end{equation} \]

(28)

\[ \begin{equation} K_{t+1} = \overline{U}_{0,t+1}V_{t+1}'. \end{equation} \]

(29)

Theorem 2

There exist constants \( \nu_i>0,i\in\{1,2,\dots,6\}\) with \( \nu_4<1\) depending on \( (A,B,Q,R,K_{t_0})\) , such that, if \( \text{SNR}_t \geq \max\{\nu_1, \nu_2\|M_{t}\|/\underline{\sigma}(M_{t}) \} \) and \( \eta_t \leq \|M_{t}\|^{-1} \min \{\nu_5, 2\mu\}\) for all \( t\) , then \( \{K_t\}\) is sequentially stable, and (22) holds. Moreover, the policy converges as

\[ \begin{align*} C(K_{t}) - C^* &\leq \prod_{i=t_0+1}^{t}\left(1-\frac{\eta_i\underline{\sigma}(M_{i})}{2\mu}\right)(C(K_{t_0}) - C^*) \\ &+ \nu_6\sum_{i = t_0}^{t}\prod_{s=i}^{t-1}\left(1-\frac{\eta_s\underline{\sigma}(M_s)}{2\mu}\right) \frac{1}{\text{SNR}_i}. \end{align*} \]

Theorem 2 differs from Theorem 1 in three respects, owing to the relation between the indirect and direct policy updates in Lemma 6. First, the allowed SNR depends on the condition number of the scaling matrix \( M_t\) . Second, the stepsize \( \eta_t\) scales with the inverse of \( \|M_t\|\) . Third, the convergence rate of the policy depends on the minimal singular value of \( M_t\) , which by Lemma 6 scales with the excitation level \( \gamma_t\) . For the case of constant excitation, \( \underline{\sigma}(M_t)\) is lower bounded by a constant, and the convergence of the policy reduces to that of indirect PGAC (23). The diminishing excitation case does not lead to an explicit rate, as \( \underline{\sigma}(M_t)\) is time-varying. Note that in adaptive control it is essential to maintain sufficient excitation to prevent the bursting phenomenon [16]. We defer further comparisons between direct and indirect PGAC to the simulations in Section 6.

We discuss the special case of uniformly bounded noise and constant excitation considered in our previous work [21]. In this case, the minimal singular value of \( M_t\) is lower bounded by a constant, and both \( \|M_t\|\) and the condition number of \( M_t\) converge asymptotically. Thus, the effect of \( M_t\) can be neglected, and we can take a constant stepsize \( \eta\) . Then, the condition in Theorem 2 reduces to that of [21, Theorem 2]. Theorem 2 reveals a linear convergence of the policy with respect to the initial optimality gap, which improves over the sublinear rate previously shown in [21, Theorem 2].

4 PGAC with the natural gradient and Gauss-Newton methods

In the previous section, we have developed indirect and direct PGAC methods, where the vanilla gradient descent is used to update the policy. In this section, we demonstrate the merits of two variants of gradient descent: natural gradient and Gauss-Newton methods, for the policy update of PGAC. In particular, the natural gradient descent bridges indirect and direct PGAC, and the Gauss-Newton method of the indirect PGAC leads to an adaptive version of Hewer’s algorithm (5).

4.1 Bridging indirect and direct PGAC via natural gradient

Let us vectorize \( K\) as \( \theta = \text{vec}(K^{\top})\) . Then, the natural policy gradient update of the LQR cost \( C(\theta)\) is given by

\[ \begin{equation} \theta' = \theta - \eta F_{\theta}^{-1}\nabla C(\theta), \end{equation} \]

(30)

where \( \nabla C(\theta)\) is the vanilla gradient, and \( F_{\theta}\succ 0\) is the Fisher information matrix (FIM) that captures the curvature of the parameter space of \( \theta\) , which biases the gradient descent in the direction of large uncertainties. The corresponding update for \( K\) takes the form [18]

\[ \begin{equation} \begin{aligned} K' &= K - \eta \nabla C(K)\Sigma^{-1} \\ & = K- 2\eta\left(\left(R+{B}^{\top} P {B}\right) K-{B}^{\top} P {A}\right), \end{aligned} \end{equation} \]

(31)

where \( {\Sigma}\) satisfies (4), \( P\) satisfies (19), and the last equality follows from Lemma 2. We briefly highlight the merits of natural gradient.

Remark 2

Compared with the vanilla gradient descent, the natural gradient descent (31) is computationally more efficient, as it can be computed with only one Lyapunov equation (19) instead of two for the vanilla gradient in Lemma 2. Moreover, the stepsize for the natural gradient (31) can be chosen more aggressively than that of the vanilla gradient, leading to an improved convergence rate [18].

Motivated by (31), for indirect PGAC with natural gradient, we substitute the policy update (20) in Algorithm 1 with

\[ \begin{equation} K_{t+1} = K_t - 2\eta((R+\hat{B}_{t+1}^{\top} \hat{P}_{t+1} \hat{B}_{t+1}) K_t-\hat{B}_{t+1}^{\top} \hat{P}_{t+1} \hat{A}_{t+1}), \end{equation} \]

(32)

where \( \hat{P}_{t+1}\) is the positive definite solution to Lyapunov equation (19) with the estimated model \( (\hat{A}_{t+1}, \hat{B}_{t+1})\) .

For direct PGAC with natural gradient, we notice that the covariance parameterization (13) defines a smooth bijection between the spaces of \( K\) and \( V\) . By leveraging the fact that the FIM captures the geometry of the parameterization space, we show that indirect and direct PGAC are bridged by adopting the natural gradient descent for the policy update.

Lemma 7

For Algorithm 2, the policy update (27)-(29) with (28) replaced by natural gradient descent is equivalent to (32).

We provide convergence and stability guarantees for PGAC with natural gradient.

Theorem 3

With different values of \( \nu_i, i \in \{1,2,\dots,5\}\) , the results in Theorem 1 also apply to PGAC with the natural gradient descent (32).

The theoretical guarantees in Theorem 3 are similar to those of PGAC with the vanilla gradient. Nevertheless, we note that performing natural gradient descent is computationally more efficient and leads to a faster convergence rate; see Remark 2.

4.2 Indirect PGAC with Gauss-Newton method and its equivalence to an adaptive Hewer’s algorithm


Algorithm 3 Adaptive Hewer’s Algorithm
1.Require: Offline data \( (X_{0,t_0}, U_{0,t_0}, X_{1,t_0})\) and an initial stabilizing policy \( K_{t_0}\) .
2.for \( t=t_0,t_0+1,\dots\) do
3.Apply \( u_t = K_tx_t + e_t\) and observe \( x_{t+1}\) .
4.Estimate the model via recursive least-squares (21).
5.Policy evaluation \( \hat{P}_{t+1} = Q + K_t^{\top}RK_t +(\hat{A}_{t+1} + \hat{B}_{t+1}K_t)^{\top}\hat{P}_{t+1}(\hat{A}_{t+1} + \hat{B}_{t+1}K_t). \)
6.Policy improvement \[ K_{t+1} = (R + \hat{B}_{t+1}^{\top}\hat{P}_{t+1}\hat{B}_{t+1})^{-1}\hat{B}_{t+1}^{\top}\hat{P}_{t+1}\hat{A}_{t+1}. \]
7.end for

The Gauss-Newton method is a quasi-Newton method under a particular Riemannian metric [40]. It iterates as

\[ \begin{equation} \begin{aligned} K' &= K - \eta \left(R+B^{\top}PB\right)^{-1}\nabla C(K)\Sigma^{-1}\\ &= K - 2\eta\left(R+B^{\top}PB\right)^{-1} ((R+{B}^{\top} P {B}) K-{B}^{\top} P {A} ) \end{aligned} \end{equation} \]

(33)

with the constant stepsize \( 0<\eta <1\) . The stepsize of the Gauss-Newton method can be chosen more aggressively than the vanilla and natural gradient, leading to a faster convergence rate. With a special choice of stepsize \( \eta = 1/2\) , the Gauss-Newton update (33) is equivalent to

\[ K' = \left(R+B^{\top}PB\right)^{-1}B^{\top}PA. \]

This coincides with Hewer’s algorithm (5) and enjoys local quadratic convergence. However, the Gauss-Newton update is computationally less efficient than the natural gradient, as computing (33) involves a matrix inverse.
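Hewer's iteration can be sketched as exact policy iteration on a known model. The sketch below is an illustration only: it uses the state-feedback convention \( u = Kx\) of (1), so the improvement step carries a minus sign, and the deadbeat initialization together with the benchmark-style matrices of Section 6 are choices made here, not prescribed by the paper.

```python
import numpy as np

def dlyap(Acl, Qbar, iters=3000):
    """Fixed-point solve of P = Qbar + Acl^T P Acl (valid when rho(Acl) < 1)."""
    P = Qbar.copy()
    for _ in range(iters):
        P = Qbar + Acl.T @ P @ Acl
    return P

def hewer(A, B, Q, R, K, steps=20):
    """Exact policy iteration: evaluate P_K, then improve K (u = Kx convention)."""
    costs = []
    for _ in range(steps):
        P = dlyap(A + B @ K, Q + K.T @ R @ K)               # policy evaluation
        costs.append(np.trace(P))                            # C(K) for unit noise covariance
        K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # policy improvement
    return K, costs

# Benchmark-style system of Section 6 (Q, R as in the simulations).
A = np.array([[1.01, 0.01, 0.00],
              [0.01, 1.01, 0.01],
              [0.00, 0.01, 1.01]])
B, Q, R = np.eye(3), np.eye(3), 1e-3 * np.eye(3)
K0 = -A                                                      # deadbeat start: A + B K0 = 0
K_star, costs = hewer(A, B, Q, R, K0)
```

The evaluated costs decrease monotonically along the iteration, which is the monotonicity property that the adaptive variant in Algorithm 3 inherits only when the model estimates are accurate.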

Inspired by (33), for indirect PGAC we substitute the policy update (20) in Algorithm 1 with

\[ \begin{equation} \begin{aligned} K_{t+1} &= K_t - 2\eta(R+\hat{B}_{t+1}^{\top}\hat{P}_{t+1}\hat{B}_{t+1})^{-1} \\ &\times((R+\hat{B}_{t+1}^{\top} \hat{P}_{t+1} \hat{B}_{t+1}) K_t-\hat{B}_{t+1}^{\top} \hat{P}_{t+1} \hat{A}_{t+1}). \end{aligned} \end{equation} \]

(34)

Our theoretical guarantees of indirect PGAC with the Gauss-Newton update (34) are as below.

Theorem 4

With different values of \( \nu_i, i \in \{1,2,\dots,5\}\) and \( \eta < 2\|\Sigma^*\|\) , the results in Theorem 1 apply to indirect PGAC with the Gauss-Newton policy update (34), with the convergence certificate

\[ \begin{equation} \begin{aligned} C(K_{t}) - C^* &\leq \left(1-\frac{\eta}{2\|\Sigma^*\|}\right)^{t-t_0} (C(K_{t_0}) - C^*) \\ &+ \eta \nu_5\sum_{i = t_0}^{t}\left(1-\frac{\eta}{2\|\Sigma^*\|}\right)^{t-i} \frac{1}{\text{SNR}_i}. \end{aligned} \end{equation} \]

(35)

Since the stepsize can be chosen aggressively, the Gauss-Newton update usually leads to faster convergence than PGAC with the vanilla or natural gradient. With the stepsize \( \eta = 1/2\) , our indirect PGAC with the Gauss-Newton update is equivalent to an adaptive version of Hewer’s algorithm (also referred to as online identification-based policy iteration in [15]); see Algorithm 3. However, Algorithm 3 generally does not have sequential stability guarantees, as the choice of \( \eta = 1/2\) may not ensure a sufficiently small policy change. An exception is when the initial policy is sufficiently close to the optimal LQR gain.

Theorem 5

There exist constants \( \nu_i>0,i\in\{1,2,\dots,5\}\) with \( \nu_4<1\) depending on \( (A,B,Q,R,K_0)\) , such that, if \( \text{SNR}_t \geq \nu_1, \forall t\) and \( C(K_{t_0})-C^*\leq \nu_2\) , then \( \{K_t\}\) is sequentially stable, and (22) holds. Moreover, the policy \( \{K_t\}\) converges in the sense that

\[ \begin{equation} \begin{aligned} C(K_{t+1}) - C^* &\leq \frac{1}{\nu_2} (C(K_{t}) - C^*)^2 + \frac{ \nu_5}{\text{SNR}_{t+1}}. \end{aligned} \end{equation} \]

(36)

In Theorem 5, the condition on the initial policy \( K_{t_0}\) can be replaced with a condition on the SNR, given that \( K_{t_0}\) is obtained as the solution of the indirect certainty-equivalence LQR (10) with offline data. This aligns with the convergence and stability results in [15] that hold under sufficiently small identification error. The local quadratic convergence in (36) is inherited from properties of the quasi-Newton method [40]. Note that the dependence of the convergence rate on the SNR is the same as that of Theorem 1.

5 Boosting the performance of PGAC via regularization

So far, we have developed PGAC with variants of gradient descent for the certainty-equivalence LQR formulations (10) and (15). However, the certainty-equivalence approach neglects the effect of noise in the data, leading to uncertainties in the closed-loop covariance matrix and the cost function. This section recalls a regularization method for PGAC to compensate for the uncertainty [34]. As a main contribution, we show that the theoretical guarantees of PGAC are preserved under proper choices of the regularization coefficient.

5.1 Indirect PGAC with regularization

For the indirect certainty-equivalence LQR (10)-(11), the closed-loop covariance matrix \( \Sigma\) satisfies the Lyapunov equation (11). However, given \( (A, B)\) , the Lyapunov equation that should be met is (4). For brevity, define

\[ \Xi := \begin{bmatrix} K \\ I_n \end{bmatrix}\Sigma \begin{bmatrix} K \\ I_n \end{bmatrix}^{\top}. \]

Then, the difference between the right-hand sides of (11) and (4) is

\[ \begin{align*} \overline{W}_0 \Phi^{-1} \Xi [B~ A]^{\top} + [B~ A] \Xi \Phi^{-1} \overline{W}_0^{\top} + \overline{W}_0 \Phi^{-1} \Xi \Phi^{-1} \overline{W}_0^{\top}. \end{align*} \]

For well-behaved noise statistics, \( \|\overline{W}_0\|\) usually vanishes quickly with time [34, Lemma 1], and the first two terms dominate the difference. To reduce the difference, we introduce the regularizer \( \text{Tr}(\Phi^{-1}\Xi)\) to the certainty-equivalence cost (10), leading to the regularized cost

\[ \begin{equation} \hat{C}(K;\lambda) := \text{Tr}\left( \left(\begin{bmatrix} R & 0 \\ 0 & Q \end{bmatrix} + \lambda\Phi^{-1} \right) \begin{bmatrix} K \\ I_n \end{bmatrix} \Sigma \begin{bmatrix} K \\ I_n \end{bmatrix}^{\top} \right), \end{equation} \]

(37)

where \( \Sigma\) satisfies (11), and \( \lambda>0\) is the regularization coefficient. The regularizer is a correction to the penalty matrices of the LQR problem. In line with [28], the regularization also compensates for the uncertainty in the cost function and promotes exploitation [34]. We demonstrate these merits via simulation in Section 6.3.

Next, we derive the gradient expression of the regularized cost (37). Consider the following partition of \( \Phi^{-1}\)

\[ \Phi^{-1} = \begin{bmatrix} \left(\Phi^{-1}\right)_{uu} & \left(\Phi^{-1}\right)_{ux} \\ \left(\Phi^{-1}\right)_{xu} & \left(\Phi^{-1}\right)_{xx} \end{bmatrix}, \]

where \( (\Phi^{-1})_{uu}\in \mathbb{R}^{m\times m}\) and \( (\Phi^{-1})_{xx}\in \mathbb{R}^{n\times n}\) . Define \( Q_{\lambda} = Q + \lambda (\Phi^{-1})_{xx}\) and \( R_{\lambda} = R + \lambda (\Phi^{-1})_{uu}\) .

Lemma 8

For \( K\in \{K|\rho(\hat{A}+\hat{B}K)<1\}\) , the gradient of \( \hat{C}(K;\lambda)\) is given by

\[ \nabla \hat{C}(K;\lambda) = 2\left(R_{\lambda}K + \hat{B}^{\top}P(\hat{A}+\hat{B}K) +\lambda \left(\Phi^{-1}\right)_{ux} \right)\Sigma, \]

where \( \Sigma\) and \( P\) are the solutions to (11) and \( P = Q_{\lambda} + K^{\top}R_{\lambda}K + \lambda K^{\top}(\Phi^{-1})_{ux} + \lambda(\Phi^{-1})_{xu}K + (\hat{A}+\hat{B}K)^{\top}P (\hat{A}+\hat{B}K) \) , respectively.

The proof of Lemma 8 follows the same vein as that of [18, Lemma 1] and is omitted due to space limitation.

To apply regularization to indirect PGAC, we replace the gradient in Algorithm 1 with that of the regularized cost in Lemma 8. To ensure convergence of \( \{K_t\}\) to \( K^*\) , the effect of the regularizer must vanish over time. Since \( \text{Tr}(\Phi_t^{-1}\Xi)\) scales with \( \|\Phi_t^{-1}\| \leq \mathcal{O}(1/\gamma_t^2)\) , setting \( \lambda_t \leq \mathcal{O}(\delta_t\gamma_t)\) ensures that the regularization decays as \( \mathcal{O}(1/\text{SNR}_t)\) . We show that this decay rate is sufficient for convergence and stability of the regularized indirect PGAC.

Theorem 6

Consider the regularized indirect PGAC algorithm, where we replace the gradient \( \nabla\hat{C}_{t+1}(K_t)\) in (20) with \( \nabla\hat{C}_{t+1}(K_t;\lambda_{t+1})\) . With different values of \( \nu_i, i \in \{1,2,\dots,5\}\) and the condition \( \lambda_t \leq \nu_6 \delta_t\gamma_t\) for \( \nu_6>0\) , the results in Theorem 1 apply, i.e., the boundedness of the state (22) and the convergence certificate of \( C(K_t)\) (23) hold.

Under the condition \( \lambda_t \leq \mathcal{O}(\delta_t\gamma_t)\) , the theoretical guarantees can be extended to PGAC with natural gradient and Gauss-Newton method, whose formal proofs are omitted.

5.2 Direct PGAC with regularization

Leveraging the covariance parameterization in (13), the regularizer in (37) can be reformulated as \( \text{Tr}(V\Sigma V^{\top}\Phi)\) . Then, the direct LQR cost with regularization is

\[ \begin{equation} J(V;\lambda):= J(V) + \lambda\text{Tr}(V\Sigma V^{\top}\Phi). \end{equation} \]

(38)
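The claimed reformulation can be verified numerically: substituting the covariance parameterization \( V = \Phi^{-1}[K^{\top}~I_n]^{\top}\) into \( \text{Tr}(V\Sigma V^{\top}\Phi)\) recovers \( \text{Tr}(\Phi^{-1}\Xi)\) . A sketch with random positive definite stand-ins for \( \Phi\) and \( \Sigma\) (all data hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
K = rng.standard_normal((m, n))                # placeholder gain

# Random positive definite stand-ins for Phi (data covariance) and Sigma.
Mp = rng.standard_normal((n + m, n + m))
Phi = Mp @ Mp.T + np.eye(n + m)
Ms = rng.standard_normal((n, n))
Sigma = Ms @ Ms.T + np.eye(n)

KI = np.vstack([K, np.eye(n)])                 # [K; I_n]
V = np.linalg.solve(Phi, KI)                   # covariance parameterization, cf. (27)
Xi = KI @ Sigma @ KI.T                         # \Xi from Section 5.1

reg_direct = np.trace(V @ Sigma @ V.T @ Phi)         # regularizer in (38)
reg_indirect = np.trace(np.linalg.solve(Phi, Xi))    # Tr(Phi^{-1} Xi) from (37)
```

The two traces agree because \( V\Sigma V^{\top}\Phi = \Phi^{-1}\Xi\) under the parameterization.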

We derive the gradient expression of \( J(V;\lambda)\) , the proof of which follows the same vein as that of [21, Lemma 2] and is omitted due to space limitation.

Lemma 9

For \( V\in \mathcal{V}\) , the gradient of \( J(V;\lambda)\) is

\[ \nabla J(V;\lambda) = 2 \left(\lambda \Phi + \overline{U}_0^{\top}R\overline{U}_0+\overline{X}_1^{\top}P\overline{X}_1\right)V \Sigma, \]

where \( \Sigma\) and \( P\) are the solution to (16) and \( P = Q + V^{\top}(\lambda \Phi + \overline{U}_0^{\top}R\overline{U}_0)V + V^{\top}\overline{X}_1^{\top}P\overline{X}_1V \) , respectively.

As in regularized indirect PGAC, the regularization coefficient should scale as \( \lambda_t \leq \mathcal{O}(\delta_t\gamma_t)\) . Then, by leveraging the relation between indirect and direct PGAC in Lemma 6, we have the convergence and stability results.

Theorem 7

Consider the regularized direct PGAC algorithm, where we replace the gradient \( \nabla J_{t+1}(V_{t+1})\) in (28) with \( \nabla J_{t+1}(V_{t+1};\lambda_{t+1})\) in Lemma 9. With different values of \( \nu_i, i \in \{1,2,\dots,6\}\) and the condition \( \lambda_t \leq \nu_7 \delta_t\gamma_t\) for \( \nu_7>0\) , the boundedness of the state and the convergence certificate of \( C(K_t)\) in Theorem 2 apply.

We discuss the selection of the regularization coefficient \( \lambda_t = \lambda_0 \delta_t\gamma_t\) under special cases of Remark 1. For the bounded noise case, a constant \( \lambda_t\) is sufficient. For the Gaussian noise case, we let \( \lambda_t \sim \mathcal{O}(1/\sqrt{t})\) for constant excitation and \( \lambda_t \sim \mathcal{O}(t^{-\frac{3}{4}})\) for diminishing excitation. Our results align with [41], which shows that a constant \( \lambda_t\) over time is not favorable and may lead to a trivial solution in direct parameterization [25]. Note that we still need to tune the coefficient \( \lambda_0\) for optimal performance and stability.

6 Numerical case studies

This section uses simulations to demonstrate the effectiveness of PGAC in terms of convergence, stability, and computational efficiency. Our simulations are based on the benchmark LQR problem with the model [35, Section 6]

\[ \begin{equation} \begin{aligned} &A = \begin{bmatrix} 1.01 & 0.01 & 0\\ 0.01 & 1.01 & 0.01\\ 0 & 0.01 & 1.01 \end{bmatrix}, ~B = I_3, \end{aligned} \end{equation} \]

(39)

which corresponds to a discrete-time marginally unstable Laplacian system, with \( Q= I_3\) and \( R = 10^{-3} \times I_3\) . Note that the direct PGAC method (also known as DeePO [21]) has been validated on nonlinear power systems [30] and in real-world experiments on an autonomous bicycle [32]. Our code is written in Matlab and available at https://github.com/Feiran-Zhao-eth/policy-gradient-adaptive-control.
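As a quick sanity check of the marginal instability of (39), the eigenvalues of the symmetric tridiagonal \( A\) are \( 1.01\) and \( 1.01 \pm 0.01\sqrt{2}\) , so \( \rho(A) \approx 1.024 > 1\) :

```python
import numpy as np

A = np.array([[1.01, 0.01, 0.00],
              [0.01, 1.01, 0.01],
              [0.00, 0.01, 1.01]])
eigs = np.linalg.eigvalsh(A)       # A is symmetric tridiagonal
rho = max(abs(eigs))               # spectral radius: 1.01 + 0.01 * sqrt(2)
```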

6.1 Convergence and closed-loop stability of PGAC

Figure 2. Convergence of one-shot-based and PGAC methods.

We compare the convergence and stability of the one-shot-based adaptive control and of direct and indirect PGAC with the vanilla gradient, natural gradient, and Gauss-Newton methods. We set the number of offline data points to \( t_0=20\) and let \( x_0 = 0\) and \( u_t\sim \mathcal{N}(0,I_3)\) for \( t<t_0\) . For \( t \geq t_0\) , we set \( u_t =K_tx_t + e_t\) with \( e_t \sim \mathcal{N}(0,I_3)\) to ensure persistency of excitation. The process noise is drawn from \( w_t\sim \mathcal{N}(0, I_3)\) , which corresponds to a poor \( \text{SNR}_t \in [-5,0]\) dB. We set the stepsize to \( \eta = 0.02\) for indirect PGAC with the vanilla gradient, \( \eta = 0.2\) for the natural gradient, \( \eta = 0.5\) for the Gauss-Newton method (Algorithm 3), and \( \eta_t = 0.2/\|M_t\|\) for direct PGAC. The one-shot-based method obtains \( K_t\) from the Riccati equation with the estimated model (21) at every time step. For all methods, we set \( K_{t_0}\) to the solution of the certainty-equivalence LQR (10) with offline data.

Fig. 2 shows the optimality gap of the policy sequence for the one-shot-based method and for direct and indirect PGAC with the vanilla gradient under the same randomness realization. For clarity of presentation, we omit the curves of indirect PGAC with natural gradient and Gauss-Newton, which lie between those of the one-shot-based method and indirect PGAC with the vanilla gradient. We make several key observations. First, all methods improve the performance over the initial policy using online closed-loop data, and the optimality gap converges asymptotically at the rate \( \mathcal{O}(1/t)\) , which implies that our theoretically certified rate \( \mathcal{O}(1/\sqrt{t})\) is conservative. Second, the one-shot-based method diverges at the early stage due to unexpectedly large noise. In contrast, the PGAC methods exhibit more stable convergence. This is expected in the presence of noise, as the gradient is more robust than the minimizer.

Figure 3. Finite-horizon cost of one-shot-based and PGAC methods.

Fig. 3 shows their average finite-horizon cost \( \sum_{i=0}^{t-1} \|z_i\|^2/t\) as a function of time, where \( z_t\) is the performance output in (2). We observe that all methods exhibit highly similar average costs, which converge rapidly to a positive constant. This indicates that the closed-loop systems resulting from both PGAC and the one-shot-based method are stable. Furthermore, the convergence behavior aligns with our theoretical results, i.e., the state is bounded by the sum of an exponentially decaying term and a steady-state bias determined by the probing and process noise. We note that the transient increase at the early stage is due to abrupt large process noise.

Table 1. Comparison of the computation time for running \( 500\) time steps.
method | direct | indirect | natural | GN | one-shot-based
time (s) | 0.063 | 0.057 | 0.047 | 0.049 | 0.418
Table 2. The impact of regularization for PGAC.
method | \( \lambda_t = 0\) | \( \lambda_t = 1/(10\sqrt{t-t_0})\) | one-shot-based
direct | \( \mathcal{P} = 81\%\) , \( \mathcal{M} = 0.0014\) | \( \mathcal{P} = 98\%\) , \( \mathcal{M} = 0.0011\) | \( \mathcal{P} = 82\%\) , \( \mathcal{M} = 0.0013\)
indirect | \( \mathcal{P} = 83\%\) , \( \mathcal{M} = 0.0014\) | \( \mathcal{P} = 99\%\) , \( \mathcal{M} = 0.0014\) | \( \mathcal{P} = 82\%\) , \( \mathcal{M} = 0.0013\)

6.2 Computational efficiency of PGAC

We compare the efficiency of the methods in terms of computation time. For indirect PGAC, we apply recursive least squares (21) for identification. For direct PGAC, we apply the rank-one update in [21, Section V-B]. For the one-shot-based method, the certainty-equivalence LQR problem (10) is solved using the dlqr function in MATLAB. We perform \( 20\) independent trials and record the mean computation time for running \( t=500\) time steps. The results are summarized in Table 1. We observe that, compared with the one-shot-based method, the PGAC approaches require significantly less computation time. This gap arises because PGAC solves only one or two Lyapunov equations per iteration for gradient descent, whereas the one-shot-based method solves a Riccati equation at each step. Among the PGAC approaches, direct PGAC consumes the most computation time, as the dimension of the optimization matrix \( V\in \mathbb{R}^{(n+m)\times n}\) is higher than that of \( K\in \mathbb{R}^{m\times n}\) in the indirect case. For indirect PGAC, the vanilla gradient method is the slowest, as it requires solving two Lyapunov equations per time step instead of one for the natural gradient and Gauss-Newton methods. It is worth noting that, while we demonstrate the results on a simple third-order system (39), the computational gap between PGAC and the one-shot-based method grows substantially with the system dimension; we refer the reader to our previous work [21, Section V] for more comprehensive simulation results.
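The recursive identification step can be sketched as a standard rank-one recursive least-squares update; this is generic RLS maintaining the inverse regressor covariance, not necessarily the exact form of (21). On noiseless data it recovers the batch least-squares estimate of \( [B~A]\) :

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 3
Theta_true = rng.standard_normal((n, n + m))   # ground-truth [B A] (hypothetical)

# Standard RLS: P is the inverse regressor covariance, Theta the estimate.
P = 1e3 * np.eye(n + m)                        # large prior covariance
Theta = np.zeros((n, n + m))
D, Xp = [], []
for _ in range(200):
    d = rng.standard_normal(n + m)             # regressor [u_t; x_t]
    x_next = Theta_true @ d                    # noiseless for this sanity check
    Pd = P @ d
    gain = Pd / (1.0 + d @ Pd)                 # Kalman-style gain vector
    Theta = Theta + np.outer(x_next - Theta @ d, gain)
    P = P - np.outer(gain, Pd)                 # rank-one covariance downdate
    D.append(d); Xp.append(x_next)

# Batch least squares for comparison.
D, Xp = np.array(D).T, np.array(Xp).T
Theta_batch = Xp @ np.linalg.pinv(D)
```

Each update costs \( \mathcal{O}((n+m)^2)\) , which is the source of the per-step efficiency discussed above.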

6.3 The impact of regularization

We study the impact of regularization on both indirect and direct PGAC with the vanilla gradient. We set the stepsize to \( \eta = 0.2\) for indirect PGAC and \( \eta_t = 0.2/\|M_t\|\) for direct PGAC. Consider \( 100\) independent trials. We denote by \( \mathcal{P}\) the percentage of trials in which the algorithm converges without any instability issue, and by \( \mathcal{M}\) the median of the optimality gap \( {(C(K_T) - C^*)}/{C^*}\) with \( T=1000\) across all trials where convergence is observed. The results are reported in Table 2, where the one-shot-based method without regularization is included for comparison. With regularization, both indirect and direct PGAC significantly outperform the one-shot-based method in terms of algorithmic stability, i.e., the robustness of policy updates against noise. Furthermore, direct PGAC achieves a smaller optimality gap than the one-shot-based method. This is because our regularizer compensates well for the uncertainty induced by noise in both the closed-loop covariance matrix and the cost function [34].

7 Concluding remarks

In this article, we proposed policy gradient adaptive control (PGAC) for the LQR, which encompasses both indirect and direct approaches and accommodates various gradient methods and regularization. For all approaches, we showed convergence of the policy and stability of the closed-loop system. Simulations validated the theoretical results and illustrated the robustness and computational efficiency of PGAC.

We believe that this article opens up fruitful directions for future work. It would be interesting to see whether the probing signal can be designed using the sample covariance to efficiently reduce the uncertainty. As observed in the simulations, our theoretical convergence rate may be conservative, and a sharper analysis might be possible. Our sequential stability analysis can be applied to other RL-based adaptive control methods, e.g., adaptive dynamic programming, where closed-loop stability is largely open [1]. Our PGAC framework can be extended to other settings (e.g., output feedback control), performance indices (e.g., \( \mathcal{H}_{\infty}\) norm), noise assumptions (e.g., stochastic noise), and system classes (e.g., time-varying systems).

Appendix

A Proofs in Section 3

A.1 Proof of Lemma 1

It follows from Assumptions 1 and 2 that \( \|[\hat{B}_t, \hat{A}_t] - [B, A]\| = \|W_{0,t} D_{0,t}^{\dagger}\| \leq \|W_{0,t}\|\|D_{0,t}^{\dagger}\| \leq {\delta_t}/{ \gamma_t}. \)

A.2 Proof of Theorem 1

We first provide some useful bounds.

Lemma 10

For \( K \in \mathcal{S}\) , it holds that (i) \( \|\Sigma\| \leq {C(K)}/{\underline{\sigma}(Q)}\) , (ii) \( \|P\| \leq C(K)\) , and (iii) \( \|K\|_F \leq ({C(K)}/{\underline{\sigma}(R)})^{1/2}\) .

Proof

The proof of (i) and (ii) follows directly from the definition of \( C(K)\) . To show (iii), we have \( C(K) = \text{Tr}((Q+K^{\top}RK)\Sigma) \geq \text{Tr}(K^{\top}RK\Sigma) \geq \text{Tr}(K^{\top}K) \underline{\sigma}(R) = \|K\|_F^2 \underline{\sigma}(R).\)
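The three bounds of Lemma 10 can be confirmed on a toy instance (the system, gain, and weights below are arbitrary placeholders), computing \( \Sigma\) , \( P\) , and \( C(K)\) with unit noise covariance:

```python
import numpy as np

def dlyap(Acl, Qbar, iters=3000):
    """Fixed-point solve of P = Qbar + Acl^T P Acl (valid when rho(Acl) < 1)."""
    P = Qbar.copy()
    for _ in range(iters):
        P = Qbar + Acl.T @ P @ Acl
    return P

# Toy instance (placeholder system, gain, and weights), unit noise covariance.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.eye(2)
K = np.array([[-0.2, 0.0], [0.1, -0.3]])
Q, R = np.eye(2), np.diag([2.0, 0.5])          # sigma_min(Q) = 1, sigma_min(R) = 0.5

Acl = A + B @ K
Sigma = dlyap(Acl.T, np.eye(2))                # Sigma = I + Acl Sigma Acl^T, cf. (4)
P = dlyap(Acl, Q + K.T @ R @ K)                # value matrix
C = np.trace((Q + K.T @ R @ K) @ Sigma)        # LQR cost C(K) = Tr(P)
```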

The proof frequently invokes the following perturbation analysis result for Lyapunov equations [21, Lemma 15].

Lemma 11

Let \( A\in \mathbb{R}^{n \times n}\) be stable and \( \Sigma(A)\) be the unique positive definite solution to \( \Sigma(A) = I_n + A\Sigma(A) A^{\top}\) . If \( \|A'-A\| \leq 1/({4\|\Sigma(A)\|(1+\|A\|)}), \) then \( A'\) is stable and \( \|\Sigma(A')-\Sigma(A)\| \leq 4 \|\Sigma(A)\|^2(1+\|A\|)\|A'-A\| . \)
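Lemma 11 can likewise be probed numerically. With diagonal placeholder matrices, a perturbation within the stated radius keeps \( A'\) stable, and the perturbation bound on \( \Sigma\) holds with a wide margin:

```python
import numpy as np

def sigma_of(A, iters=3000):
    """Unique PD solution of Sigma = I + A Sigma A^T (assumes rho(A) < 1)."""
    S = np.eye(A.shape[0])
    for _ in range(iters):
        S = np.eye(A.shape[0]) + A @ S @ A.T
    return S

A = np.diag([0.5, 0.3])                        # placeholder stable matrix
S = sigma_of(A)
radius = 1.0 / (4.0 * np.linalg.norm(S, 2) * (1.0 + np.linalg.norm(A, 2)))

eps = 0.05                                     # perturbation size, within the radius
Ap = A + eps * np.eye(2)
Sp = sigma_of(Ap)
lhs = np.linalg.norm(Sp - S, 2)
rhs = 4.0 * np.linalg.norm(S, 2) ** 2 * (1.0 + np.linalg.norm(A, 2)) * eps
```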

Consider the policy gradient update with estimated \( (\hat{A}, \hat{B})\)

\[ \begin{equation} K' = K - \eta \nabla \hat{C}(K), \end{equation} \]

(40)

and the update with the ground-truth \( (A,B)\)

\[ \begin{equation} K'' = K - \eta \nabla {C}(K). \end{equation} \]

(41)

The convergence of the update (41) can be shown by leveraging Lemmas 3 and 4. If \( \eta \leq 1/l\) , then

\[ \begin{equation} C(K'') - C(K) \leq -\frac{\eta}{2\mu} (C(K) - C^*), \end{equation} \]

(42)

where \( l\) is the smoothness parameter and \( \mu\) is the gradient dominance constant. To show the convergence of (40), we first quantify the distance between the exact and the approximated gradient. Define the scalar function

\[ p_1 = \frac{8C^3(K)}{\underline{\sigma}^2(Q)}\left(1+\frac{C(K)}{\underline{\sigma}(Q)}\right) \left(1+\sqrt{\frac{C(K)}{\underline{\sigma}(R)}}\right), \]

and \( p_2 = {C^2(K)}/({\underline{\sigma}(Q)p_1})\) . For ease of notation, we omit the subscript in \( \gamma_t\) and \( \delta_t\) when it is clear from the context.
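Before proceeding, the exact recursion (42) can be sketched numerically. The sketch below assumes the closed-form LQR policy gradient \( \nabla C(K) = 2(RK + B^{\top}P(A+BK))\Sigma\) used throughout this appendix; the system matrices and stepsize are illustrative.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def cost_and_grad(A, B, K, Q, R):
    """C(K) = Tr(P) and the exact gradient 2(RK + B^T P (A+BK)) Sigma."""
    Acl = A + B @ K
    Sigma = solve_discrete_lyapunov(Acl, np.eye(A.shape[0]))
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    return np.trace(P), 2 * (R @ K + B.T @ P @ Acl) @ Sigma

A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
Q, R = np.eye(2), np.eye(1)
K = np.array([[-0.4, -0.1]])   # stabilizing initial policy

eta = 1e-3  # small enough to play the role of 1/l
costs = []
for _ in range(200):
    c, g = cost_and_grad(A, B, K, Q, R)
    costs.append(c)
    K = K - eta * g

# With eta below the smoothness threshold, the cost decreases monotonically.
assert all(c2 <= c1 + 1e-10 for c1, c2 in zip(costs, costs[1:]))
```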

Lemma 12

Let \( K \in \mathcal{S}\) . Then, there exists a polynomial \( p_3 = \text{poly}(C(K)/\underline{\sigma}(Q), \|A\|, \|B\|, \|R\|, 1/\underline{\sigma}(R))\) such that, if \( \delta / \gamma \leq p_2\) , then \( \| \nabla{C}(K) - \nabla\hat{C}(K) \| \leq {p_3\delta}/{\gamma}.\)

Proof

For brevity, let

\[ \begin{equation} E = RK + B^{\top}P(A+BK). \end{equation} \]

(43)

We use \( \hat{E}\) , \( \hat{\Sigma}\) , and \( \hat{P}\) to denote the quantities under the estimated model \( (\hat{A}, \hat{B})\) . Then, the gradient can be expressed as \( \nabla C(K) = 2E\Sigma\) , and the difference in gradients is

\[ \begin{equation} \nabla\hat{C}(K) - \nabla{C}(K) = 2E(\hat{\Sigma} - \Sigma) + 2(\hat{E} - E) \hat{\Sigma}. \end{equation} \]

(44)

We quantify the first term of (44). By Lemma 1, it holds

\[ \begin{equation} \|(A+BK) - (\hat{A} + \hat{B}K)\|\leq (1+\|K\|){\delta}/{\gamma}. \end{equation} \]

(45)

Then, we apply Lemma 11 to show \( \|\hat{\Sigma} - \Sigma\| \leq 4 \|\Sigma\|^2(1+\|A+BK\|)\|A+BK - (\hat{A} + \hat{B}K)\| \leq {p_1\delta}/{(2\gamma C(K))}. \) Together with the bound \( \operatorname{Tr}\left(E^{\top} E\right) \leq {\left\|R+B^{\top} P B\right\|\left(C(K)-C^*\right)}\) [18, Lemma 11], the first term of (44) is bounded as \( \|2E(\hat{\Sigma} - \Sigma)\|\leq {\left\|R+B^{\top} P B\right\|^{\frac{1}{2}} \left(C(K)-C^*\right)^{\frac{1}{2}}}p_1\delta/(2\gamma C(K)). \)

To bound the second term of (44), we note that \( E - \hat{E} = B^{\top}P((A+BK) - (\hat{A} + \hat{B}K)) + (B^{\top}P - \hat{B}^{\top}\hat{P})(\hat{A} + \hat{B}K). \) The first term on the right-hand side of this equation can be bounded by using (45) and Lemma 10. For the second term, we have \( \|\hat{A} + \hat{B}K\| \leq \|A+BK\| + (1+\|K\|){p_2}\) and \( \|\hat{\Sigma}\|\leq \|\Sigma\| + {p_1p_2}/{(2C(K))}\) . Notice that \( B^{\top}P - \hat{B}^{\top}\hat{P} = \hat{B}^{\top}(P-\hat{P}) + (B-\hat{B})^{\top}{P} \) . By the proof of [21, Lemma 8] and our condition \( \delta/\gamma \leq p_2\) , it holds that \( \|P-\hat{P}\| \leq p_1\delta/\gamma\) . Since \( \|\hat{B}\| \leq \|B\| + \delta/\gamma\leq \|B\| + p_2\) , we have \( \|B^{\top}P - \hat{B}^{\top}\hat{P}\|\leq (\|B\| + {p_2}){p_1\delta}/{\gamma} + {\|P\|\delta}/{\gamma}. \) Further, \( \|E-\hat{E}\| \leq \|B\|\|P\|(1+\|K\|)\delta/\gamma+ ((\|B\| + {p_2})p_1 + {\|P\|} ) (\|A+BK\| + (1+\|K\|){p_2} ){\delta}/{\gamma}.\)

Together with Lemma 10, the proof is completed.

Next, we quantify how the difference in the policy gradient in Lemma 12 affects the difference in the cost \( |C(K'')-C(K')|\) . We leverage the following preliminary lemma from [18], which follows directly from Lemma 11.

Lemma 13

Let \( K \in \mathcal{S}\) . There exist polynomials \( p_4 = \text{poly}(C(K)/\underline{\sigma}(Q), \|A\|^{-1}, \|B\|^{-1}, \|R\|^{-1}, \underline{\sigma}(R) )\) and \( p_5,p_6 = \text{poly}(C(K)/\underline{\sigma}(Q), \|A\|, \|B\|, \|R\|, 1/\underline{\sigma}(R))\) such that, if \( \|\widetilde{K} - K\| \leq p_4\) , then \( \widetilde{K} \in \mathcal{S}\) and

\[ \|\widetilde{\Sigma} - \Sigma\| \leq p_5\|\widetilde{K} - K\|, ~~ |C(\widetilde{K}) - C(K)| \leq p_6\|\widetilde{K} - K\|. \]

Now, we bound the difference in the cost functions.

Lemma 14

Let \( K \in \mathcal{S}\) . There exists a polynomial \( p_7\) in \( ({\underline{\sigma}(Q)}/{C(K)}, \|A\|^{-1}, \|B\|^{-1}, \|R\|^{-1}, \underline{\sigma}(R))\) such that, if

\[ \frac{\delta}{\gamma} \leq p_2 ~~\text{and}~~ \eta \leq \min\left\{\frac{p_4\gamma}{p_3\delta}, p_7\right\}, \]

then it holds that \( \left| C(K'') - C(K')\right| \leq \eta p_3p_6 \delta/{\gamma}. \)

Proof

By our hypothesis and Lemma 12, it holds that

\[ \|K' - K''\| = \eta \| \nabla{C}(K) - \nabla\hat{C}(K)\|\leq \frac{\eta p_3 \delta}{\gamma} \leq p_4. \]

Let \( p_7\) be the polynomial of the stepsize in the gradient descent case of [18, Theorem 7]. Then, it follows from [18, Theorem 7] that the update (41) returns a stabilizing policy \( K''\) . Furthermore, we can apply Lemma 13 to obtain that \( |C(K') - C(K'')| \leq p_6 \|K' - K''\| \leq {\eta p_3p_6 \delta}/{\gamma}, \) which completes the proof.

With Lemma 14 and (42), we obtain the convergence of the update (40):

\[ \begin{equation} C(K') - C(K) \leq -\frac{\eta}{2\mu} (C(K) - C^*) + \frac{\eta p_3p_6 \delta}{\gamma}. \end{equation} \]

(46)

To show the convergence of Algorithm 1, we first prove that \( C(K_t)\) is uniformly upper-bounded, so that the polynomials \( p_i, i\in\{1,2,\dots, 7\}\) admit uniform bounds. Let

\[ \begin{equation} \overline{C} = C^* + C(K_{t_0}) + 1 + \frac{1}{2\underline{l}\mu}, \end{equation} \]

(47)

where \( \underline{l} := l(C^*)\) and \( \mu = {\|\Sigma^*\|}/{\underline{\sigma}(R)}\) . Since the quantities \( l\) and \( p_i\) are functions of \( C(K)\) , let \( \bar{l}, \bar{p}_1, \underline{p}_2, \bar{p}_3, \bar{p}_5, \bar{p}_6, \underline{p}_7\) denote the associated quantities evaluated at \( \overline{C}\) , and \( \underline{p}_4\) the quantity evaluated at \( C^*\) .

Lemma 15 (Boundedness of the cost)

If

\[ \begin{equation} \frac{\delta_t}{\gamma_t} \leq \min\left\{\underline{p}_2, \frac{1}{2\bar{p}_6 \bar{p}_3\mu}\right\} ~ \text{and}~ \eta \leq \min\left\{\frac{\underline{p}_4\gamma_t}{\bar{p}_3\delta_t}, \underline{p}_7, \frac{1}{\bar{l}} \right\}, \end{equation} \]

(48)

then \( C(K_t)\) of Algorithm 1 has a uniform upper bound, i.e., \( C(K_t) \leq \overline{C}\) with \( \overline{C}\) defined in (47).

Proof

The proof is based on mathematical induction. Clearly, the bound holds at \( t = t_0\) , i.e., \( C(K_{t_0}) \leq \overline{C}\) . Suppose that \( C(K_t) \leq \overline{C}\) for some \( t\geq t_0\) . Next, we show \( C(K_{t+1}) \leq \overline{C}\) .

By Lemma 14, (42), and our hypothesis \( C(K_t) \leq \overline{C}\) , the gradient descent (20) yields

\[ \begin{align*} C(K_{t+1}) - C(K_t) &\leq -\frac{\eta}{2\mu} (C(K_t) - C^*) + \eta \bar{p}_6\bar{p}_3 \frac{\delta_{t+1}}{\gamma_{t+1}} \\ &\leq -\frac{\eta}{2\mu} (C(K_t) - C^*) + \frac{\eta}{2\mu}, \end{align*} \]

where the last inequality follows from our condition on \( \delta_t/\gamma_t\) .

Consider two cases. If \( C(K_t) \geq C^* +1\) , then

\[ C(K_{t+1}) \leq C(K_t) - \frac{\eta}{2\mu} + \frac{\eta}{2\mu} = C(K_t) \leq \overline{C}. \]

Otherwise, if \( C(K_t) < C^* +1\) , then

\[ C(K_{t+1}) \leq C^* + 1 + \frac{\eta}{2\mu} \leq C^* + 1 + \frac{1}{2\bar{l}\mu}\leq \overline{C}. \]

The proof is completed.

Then, under the condition (48), it follows that the convergence certificate (23) in Theorem 1 holds with \( \nu_5 = \bar{p}_3\bar{p}_6\) .

Next, we show the sequential stability of the closed-loop system of Algorithm 1. To this end, we first find proper system-theoretic matrices for \( (\kappa, \alpha)\) -strong stability.

Lemma 16

The policy \( K \in \mathcal{S}\) is \( (\kappa, \alpha)\) -strongly stable with

\[ \kappa = \sqrt{\frac{C(K)}{\min\{\underline{\sigma}(R), \underline{\sigma}(Q)\}}},~\alpha = 1 - \sqrt{1-\frac{1}{\kappa^2}}. \]

Proof

Let \( H = \Sigma^{1/2}\) and \( L = \Sigma^{-1/2} (A+BK) \Sigma^{1/2}\) . Then, the closed-loop matrix satisfies \( A+BK = HLH^{-1}\) .

By the definition of \( \Sigma\) , it follows that \( I = \Sigma^{-1} + LL^{\top}. \) Since \( C(K) \geq \underline{\sigma}(Q) \text{Tr}(\Sigma)\) , the following bounds hold

\[ \|L\| \leq \sqrt{1 - \frac{\underline{\sigma}(Q)}{C(K)}}, ~\|\Sigma^{\frac{1}{2}}\|\|\Sigma^{-\frac{1}{2}}\| \leq \sqrt{\frac{C(K)}{\underline{\sigma}(Q)}}. \]

Noting \( \|K\|\leq \sqrt{C(K)/\underline{\sigma}(R)}\) , the proof is completed.
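The construction in the proof of Lemma 16 can be checked numerically; the matrices are illustrative, and \( H = \Sigma^{1/2}\) , \( L = \Sigma^{-1/2}(A+BK)\Sigma^{1/2}\) are formed explicitly.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, sqrtm

A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
K = np.array([[-0.4, -0.1]])
Q, R = np.eye(2), np.eye(1)

Acl = A + B @ K
Sigma = solve_discrete_lyapunov(Acl, np.eye(2))
P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
C_K = np.trace(P)

sQ = np.min(np.linalg.eigvalsh(Q))
sR = np.min(np.linalg.eigvalsh(R))
kappa = np.sqrt(C_K / min(sQ, sR))
alpha = 1 - np.sqrt(1 - 1 / kappa**2)

H = np.real(sqrtm(Sigma))          # H = Sigma^{1/2}
L = np.linalg.inv(H) @ Acl @ H     # A + BK = H L H^{-1}
assert np.linalg.norm(L, 2) <= 1 - alpha + 1e-9
assert np.linalg.norm(K, 2) <= kappa
assert np.linalg.norm(H, 2) <= kappa
assert np.linalg.norm(np.linalg.inv(H), 2) <= 1 + 1e-9  # since Sigma >= I
```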

We prove that the policy sequence of Algorithm 1 is sequentially strong stable.

Lemma 17

There exists \( \bar{p}_{8}\) as a function of \( \overline{C}\) such that, if

\[ \begin{equation} \begin{aligned} &\frac{\delta_t}{\gamma_t} \leq \min\left\{\underline{p}_2, \frac{1}{2\bar{p}_6 \bar{p}_3\mu}\right\} \\ &\eta \leq \min\left\{\frac{\underline{p}_4\gamma_t}{\bar{p}_3\delta_t}, \underline{p}_7, \frac{1}{\bar{l}}, \frac{\underline{p}_4}{\bar{p}_8}, \frac{\underline{\alpha}}{4\bar{\kappa} ^2 \bar{p}_5\bar{p}_8}, \frac{1}{2\bar{p}_5\bar{p}_8} \right\}, \end{aligned} \end{equation} \]

(49)

then \( \{K_t\}\) of Algorithm 1 is sequentially strong stable with parameters \( (\bar{\kappa}, \underline{\alpha})\) , where \( \bar{\kappa}, \underline{\alpha}\) are the quantities of Lemma 16 evaluated at \( \overline{C}\) .

Proof

By Lemma 15, the cost is uniformly bounded, i.e., \( C(K_t)\leq \overline{C}\) for all \( t\geq t_0\) . Then, with the parameters \( \bar{\kappa}, \underline{\alpha}\) , the first two conditions (i) and (ii) in Definition 2 are satisfied, i.e., the policy \( K_t\) is \( (\bar{\kappa}, \underline{\alpha})\) -strongly stable. Further, it suffices to show (iii) \( \|H_{t+1}^{-1}H_t\| \leq 1 + \underline{\alpha}/2\) , or equivalently, \( \|\Sigma_{t+1}^{-1}\Sigma_t\| \leq (1 + {\underline{\alpha}}/{2})^2,\) where \( \Sigma_t\) is the solution to (4) with \( K_t\) .

By the perturbation theory for matrix inverses [18, Theorem 35], if \( \|\Sigma_{t+1} - \Sigma_t\|<1/2\) , then \( \|\Sigma_{t+1}^{-1} - \Sigma_t^{-1}\| \leq {2\|\Sigma_{t+1} - \Sigma_t\|}/{\underline{\sigma}(\Sigma_t)} \leq 2\|\Sigma_{t+1} - \Sigma_t\|, \) where the last inequality uses \( \Sigma_t \succeq I_n\) . Further,

\[ \begin{align*} \|\Sigma_{t+1}^{-1}\Sigma_t\| & = \|(\Sigma_{t+1}^{-1} - \Sigma_t^{-1})\Sigma_t + I_n \|\\ &\leq 1 + 2\|\Sigma_{t+1} - \Sigma_t\|\|\Sigma_t\| \\ &\leq 1 + 2 \|\Sigma_{t+1} - \Sigma_t\| {C(K_t)}/{\underline{\sigma}(Q)} \\ &\leq 1 + 2\bar{\kappa}^2 \|\Sigma_{t+1} - \Sigma_t\|. \end{align*} \]

Thus, we require \( \|\Sigma_{t+1} - \Sigma_t\|\) to be sufficiently small. By Lemma 13, if \( \|K_{t+1} - K_t\| \leq \underline{p}_4\) , then \( \|\Sigma_{t+1} - \Sigma_t\| \leq \bar{p}_5 \|K_{t+1} - K_t\|\) . Thus, we need \( \|K_{t+1} - K_t\| = \eta \|\nabla \hat{C}(K_{t})\|\) to be small, where the gradient is under the model \( (\hat{A}_{t+1}, \hat{B}_{t+1})\) . To this end, we provide a bound for \( \|\nabla \hat{C}(K_{t})\|\) . By Lemma 12, we have \( \| \nabla\hat{C}(K) \| \leq \|\nabla{C}(K) \| + {\bar{p}_3\delta}/{\gamma} \leq \|\nabla{C}(K) \| + \bar{p}_3\underline{p}_2. \) Since \( C(K)\leq \overline{C}\) , the right-hand side has a uniform upper bound, denoted by \( \bar{p}_8\) . Thus, as long as \( \eta \leq \underline{p}_4/\bar{p}_8\) , we have \( \|K_{t+1} - K_t\| \leq \underline{p}_4\) . To ensure \( \|\Sigma_{t+1} - \Sigma_t\|<1/2\) , we need \( \eta \leq 1/(2\bar{p}_5\bar{p}_8)\) . Furthermore, it follows that \( \|\Sigma_{t+1}^{-1}\Sigma_t\| \leq 1 + 2\bar{\kappa}^2 \|\Sigma_{t+1} - \Sigma_t\| \leq 1 + 2\bar{\kappa}^2\bar{p}_5\bar{p}_8\eta\) . Since we also require \( 2\bar{\kappa}^2\bar{p}_5\bar{p}_8\eta \leq \underline{\alpha}/2\) , we let \( \eta \leq \underline{\alpha}/(4\bar{\kappa}^2\bar{p}_5\bar{p}_8)\) . Combining all the bounds on \( \eta\) completes the proof.

With sequential strong stability, we establish the boundedness of the state under Algorithm 1. The proof follows the argument in [12, Lemma 3] and is included here for completeness.

Lemma 18

Under the condition (49), it holds that

\[ \begin{equation} \|x_t\| \leq \bar{\kappa}(1-\underline{\alpha}/2)^{t-t_0}\|x_{t_0}\|+\frac{2 \bar{\kappa}}{\underline{\alpha}} \max_{t_0\leq i<t}\|Be_i + w_i\|. \end{equation} \]

(50)

Proof

Following the control policy \( u_t = K_t x_t + e_t\) , the state sequence satisfies \( x_{t+1} = (A+BK_t)x_t + Be_t + w_t\) , and

\[ \begin{equation} x_t = F_{t_0}x_{t_0} + \sum_{i=t_0}^{t-1} F_{i+1}(Be_i + w_i), \end{equation} \]

(51)

where \( F_i = \prod_{s = i}^{t-1}\left(A +B K_s\right)\) for \( t_0\leq i \leq t-1\) , and \( F_t = I_n\) .

By Lemma 17, the policy sequence \( \{K_t\}\) is \( (\bar{\kappa}, \underline{\alpha})\) -strongly stable. That is, there exist matrices \( \{H_t\succ 0\}\) and \( \{L_t\}\) such that \( A+BK_t = H_tL_tH_t^{-1}\) for all \( t\geq t_0\) , and (i) \( \|L_t\|\leq 1 - \underline{\alpha}, \|K_t\|\leq \bar{\kappa}\) ; (ii) \( \|H_t\|\leq \bar{\kappa}\) , \( \|H_t^{-1}\|\leq 1 \) ; (iii) \( \|H_{t+1}^{-1}H_t\| \leq 1 + \underline{\alpha}/2. \) Thus, for \( t_0\leq i <t\) , it holds that

\[ \begin{align*} \left\|F_i\right\| & =\left\|\prod_{s=i}^{t-1} H_s L_s H_s^{-1}\right\| \\ & \leq\left\|H_{t-1}\right\|\left(\prod_{s=i}^{t-1}\left\|L_s\right\|\right)\left(\prod_{s=i}^{t-2}\left\|H_{s+1}^{-1} H_s\right\|\right)\left\|H_i^{-1}\right\| \\ & \leq \bar{\kappa}(1-\underline{\alpha})^{t-i}(1+\underline{\alpha}/2)^{t-i-1} \leq \bar{\kappa}(1-\underline{\alpha}/2)^{t-i}. \end{align*} \]

Inserting it into (51), we obtain that

\[ \begin{align*} &\|x_t\| \leq\|F_{t_0}\|\|x_{t_0}\|+\sum_{i={t_0}}^{t-1}\|F_{i+1}\|\|Be_i + w_i\| \\ & \leq \bar{\kappa}(1-\frac{\underline{\alpha}}{2})^{t-t_0}\|x_{t_0}\|+ \bar{\kappa} \sum_{i=t_0}^{t-1}(1-\frac{\underline{\alpha}}{2})^{t-i-1}\|Be_i + w_i\| \\ & \leq \bar{\kappa}(1-\frac{\underline{\alpha}}{2})^{t-{t_0}}\|x_{t_0}\|+ \bar{\kappa} \max _{{t_0} \leq i<t}\|Be_i + w_i\| \sum_{s=0}^{\infty}(1-\frac{\underline{\alpha}}{2})^s \\ & = \bar{\kappa}(1-\frac{\underline{\alpha}}{2})^{t-{t_0}}\|x_{t_0}\|+\frac{2 \bar{\kappa}}{\underline{\alpha}} \max_{{t_0} \leq i<t}\|Be_i + w_i\|. \end{align*} \]

The proof is completed.
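As an illustration of the bound (50), consider the simplest sequentially strongly stable sequence, a constant stabilizing gain; the matrices, bounded disturbances, and probing inputs below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
K = np.array([[-0.4, -0.1]])
Q, R = np.eye(2), np.eye(1)

Acl = A + B @ K
P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
C_K = np.trace(P)
kappa = np.sqrt(C_K)               # min eigenvalue of Q and R is 1 here
alpha = 1 - np.sqrt(1 - 1 / kappa**2)

x = np.array([5.0, -3.0])
x0_norm = np.linalg.norm(x)
max_dist = 0.0
ok = True
for t in range(1, 200):
    e = rng.uniform(-0.1, 0.1, size=1)   # bounded probing input
    w = rng.uniform(-0.1, 0.1, size=2)   # bounded disturbance
    max_dist = max(max_dist, np.linalg.norm(B @ e + w))
    x = Acl @ x + B @ e + w              # x_{t+1} = (A+BK)x_t + Be_t + w_t
    bound = kappa * (1 - alpha / 2) ** t * x0_norm + 2 * kappa / alpha * max_dist
    ok = ok and (np.linalg.norm(x) <= bound)
assert ok  # the trajectory respects the bound (50)
```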

Finally, we notice that the upper bound of \( \eta\) in (49) depends on \( \gamma_t/\delta_t\) . By using the bound \( \gamma_t/\delta_t \geq 1/\underline{p}_2\) , it follows that \( \eta\) has a constant upper bound, which completes the proof.

A.3 Proof of Lemma 6

By the covariance parameterization (13) and the chain rule, the gradient satisfies \( \nabla J(V) = \overline{U}_{0}^{\top}\nabla \hat{C}(K)\) . Substituting it into (28) and using the parameterization (13) yield (26).

To show that \( \underline{\sigma}(M_t) \geq \gamma_t^4\) , note that

\[ \Phi_t \Pi_{\overline{X}_{0,t}} \Phi_t^{\top} = \begin{bmatrix} M_t & 0_{m\times n} \\ 0_{n\times m} & 0_{n\times n} \end{bmatrix}. \]

Then, it holds that \( \underline{\sigma}(M_t) = \sigma_m (\Phi_t \Pi_{\overline{X}_{0,t}} \Phi_t^{\top})\) . By the singular value inequalities for matrix products, it follows that \( \underline{\sigma}(M_t) \geq \underline{\sigma}(\Phi_t)\sigma_m(\Pi_{\overline{X}_{0,t}}\Phi_t^{\top}) \geq \underline{\sigma}(\Phi_t) \sigma_m(\Pi_{\overline{X}_{0,t}}) \underline{\sigma}(\Phi_t) = (\underline{\sigma}(\Phi_t))^2\geq \gamma_t^4. \)

A.4 Proof of Theorem 2

We omit the subscript of \( M_t\) when it is clear from the context. By Lemma 6, it suffices to consider the policy gradient update with estimated \( (\hat{A}, \hat{B})\)

\[ K' = K - \eta M \nabla\hat{C}(K) \]

and the gradient update with the ground-truth \( (A,B)\)

\[ \begin{equation} K'' = K - \eta M \nabla{C}(K). \end{equation} \]

(52)

We have the following convergence result for (52).

Lemma 19

For \( K\in \mathcal{S}\) and a stepsize \( \eta \leq 1/(l\|M\|)\) , the policy gradient update (52) satisfies

\[ C(K'') - C^* \leq \left( 1 - \frac{\eta \underline{\sigma}(M)}{2\mu} \right)(C(K) - C^*). \]

Proof

We start with a change of variables. Noting that \( M\) is positive definite, let \( L = M^{-\frac{1}{2}}K\) and \( \tilde{C}(L) := C(M^{\frac{1}{2}}L)\) . Then, we have \( \nabla \tilde{C}(L)=M^{\frac{1}{2}} \nabla C(K)\) , and the update on \( K\) in (52) is equivalent to the update \( L'' = L - \eta M^{\frac{1}{2}} \nabla C(K) = L - \eta \nabla \tilde{C}(L).\)

We show that the cost \( \widetilde{C}(L)\) is gradient dominated and smooth in \( L\) . We have

\[ \begin{align*} & \left\|\nabla \tilde{C}(L)\right\|^2=\left\|M^{\frac{1}{2}} \nabla C(K)\right\|^2 \geq \underline{\sigma}(M)\left\|\nabla C(K)\right\|^2 \\ &\geq \frac{\underline\sigma(M)}{\mu}\left(C(K)-C^*\right) = \frac{\underline{\sigma}(M)}{\mu}\left(\tilde{C}(L)-C^*\right), \end{align*} \]

where the second inequality follows from Lemma 3. Thus, the gradient dominance property holds with constant \( \underline{\sigma}(M)/\mu\) .

Moreover, for any stabilizing \( L_1,L_2\) such that \( L_1+\psi(L_2-L_1)\) is stabilizing for all \( \psi \in [0,1]\) , it holds that

\[ \begin{align*} & \left\|\nabla \tilde{C}(L_1)-\nabla \tilde{C}\left(L_2\right)\right\|=\left\|M^{\frac{1}{2}} \nabla C(K_1)-M^{\frac{1}{2}} \nabla C\left(K_2\right)\right\| \\ & \leq l\|M\|^{\frac{1}{2}}\left\|K_1-K_2\right\| \leq l\|M\|\left\|L_1-L_2\right\|, \end{align*} \]

that is, \( \widetilde{C}(L)\) is smooth with smoothness constant \( l\|M\|\) . Noting \( \widetilde{C}(L) = C(K)\) completes the proof.
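A small numerical sketch of Lemma 19 (illustrative matrices and a hand-picked positive definite \( M\) ): preconditioned gradient descent with \( \eta \leq 1/(l\|M\|)\) still decreases the cost monotonically.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def cost_and_grad(A, B, K, Q, R):
    """C(K) = Tr(P) and the exact gradient 2(RK + B^T P (A+BK)) Sigma."""
    Acl = A + B @ K
    Sigma = solve_discrete_lyapunov(Acl, np.eye(A.shape[0]))
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    return np.trace(P), 2 * (R @ K + B.T @ P @ Acl) @ Sigma

A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.eye(2)                           # two inputs so M is a nontrivial 2x2
Q, R = np.eye(2), np.eye(2)
M = np.array([[2.0, 0.5], [0.5, 1.0]])  # hand-picked positive definite M
K = -0.4 * np.eye(2)                    # stabilizing initial policy

eta = 5e-4  # plays the role of 1/(l ||M||)
costs = []
for _ in range(200):
    c, g = cost_and_grad(A, B, K, Q, R)
    costs.append(c)
    K = K - eta * M @ g                 # preconditioned update (52)

assert all(c2 <= c1 + 1e-10 for c1, c2 in zip(costs, costs[1:]))
```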

Lemma 20

Let \( K \in \mathcal{S}\) . If

\[ \frac{\delta}{\gamma} \leq p_2 ~~\text{and}~~ \eta \leq \frac{1}{\|M\|}\cdot \min\left\{\frac{p_4\gamma}{p_3\delta}, {p_7}\right\}, \]

then it holds that \( \left| C(K'') - C(K')\right| \leq {\eta p_6p_3 \delta \|M\|}/{\gamma}. \)

Proof

By our hypothesis and Lemma 12, it holds that

\[ \|K'' - K'\| \leq \eta \|M\| \| \nabla\hat{C}(K) - \nabla C(K)\|\leq \frac{\eta p_3 \delta\|M\|}{\gamma} \leq p_4. \]

In the same vein as the proof of Lemma 14, we can show that \( K''\) is stabilizing, and \( |C(K') - C(K'')| \leq p_6 \|K' - K''\| \leq {\eta p_6p_3 \delta \|M\|}/{\gamma}, \) which completes the proof.

Next, we show that the cost of Algorithm 2 has an upper bound, i.e., \( C(K_t) \leq \overline{C}\) , where \( \overline{C}\) is given by (47).

Lemma 21 (Boundedness of the cost)

If

\[ \begin{equation} \begin{aligned} &\frac{\delta_t}{\gamma_t} \leq \min\left\{\underline{p}_2, \frac{\underline{\sigma}(M_{t})}{2\bar{p}_6 \bar{p}_3\mu\|M_{t}\|}\right\} \\ &\eta_t \leq \frac{1}{\|M_{t}\|}\cdot \min\left\{\frac{\underline{p}_4\gamma_t}{\bar{p}_3\delta_t}, \underline{p}_7, \frac{1}{\bar{l}} \right\}, \end{aligned} \end{equation} \]

(53)

then \( C(K_t)\) has a uniform upper bound, i.e., \( C(K_t) \leq \overline{C}\) .

Proof

The proof is based on mathematical induction. Clearly, the bound holds at \( t = t_0\) , i.e., \( C(K_{t_0}) \leq \overline{C}\) . Suppose that \( C(K_{t}) \leq \overline{C}\) for some \( t \geq t_0\) . Next, we show \( C(K_{t+1}) \leq \overline{C}\) .

By Lemmas 6, 19, 20 and our hypothesis on \( \delta/\gamma\) , we have

\[ \begin{align*} &C(K_{t+1}) - C(K_t) \\ &\leq -\frac{\eta_{t+1}\underline{\sigma}(M_{t+1})}{2\mu} (C(K_t) - C^*) + \frac{\eta_{t+1}\bar{p}_6\bar{p}_3 \delta_{t+1}\|M_{t+1}\|}{\gamma_{t+1}} \\ &\leq -\frac{\eta_{t+1}\underline{\sigma}(M_{t+1})}{2\mu} (C(K_t) - C^*) + \frac{\eta_{t+1}\underline{\sigma}(M_{t+1})}{2\mu}. \end{align*} \]

Consider two cases. If \( C(K_t) \geq C^* + 1\) , then

\[ C(K_{t+1}) \leq C(K_t) - \frac{\eta_{t+1}\underline{\sigma}(M_{t+1})}{2\mu} + \frac{\eta_{t+1}\underline{\sigma}(M_{t+1})}{2\mu} \leq \overline{C}. \]

Otherwise, if \( C(K_t) < C^* + 1\) , then

\[ C(K_{t+1}) \leq C(K_t) + \frac{\eta_{t+1}\underline{\sigma}(M_{t+1})}{2\mu} \leq C^* + 1+ \frac{\underline{\sigma}(M_{t+1})}{2\bar{l}\mu\|M_{t+1}\|}. \]

Noting that \( \underline{\sigma}(M_{t+1}) \leq \|M_{t+1}\|\) , it further leads to \( C(K_{t+1})\leq \overline{C}\) . By induction, the proof is completed.

Furthermore, we show the convergence of Algorithm 2. Under the condition (53), we have

\[ \begin{align*} &C(K_t) - C^* \\ &\leq (1-\frac{\eta_t\underline{\sigma}(M_{t})}{2\mu}) (C(K_{t-1}) - C^*) + \frac{\eta_t \bar{p}_6\bar{p}_3\delta_t \|M_{t}\|}{\gamma_{t}}\\ &\leq (1-\frac{\eta_t\underline{\sigma}(M_{t})}{2\mu}) (C(K_{t-1}) - C^*) + \frac{ \bar{p}_6\bar{p}_3\delta_t }{\bar{l} \gamma_t}, \end{align*} \]

where the last inequality follows from \( \eta_t \leq 1/(\bar{l}\|M_{t}\|)\) . Then, the convergence certificate in Theorem 2 holds with \( \nu_6 = {\bar{p}_6\bar{p}_3}/{\bar{l}}\) . Finally, we prove that the policy sequence \( \{K_t\}\) of Algorithm 2 is sequentially strong stable.

Lemma 22

Suppose that

\[ \begin{align*} &\frac{\delta_t}{\gamma_t} \leq \min\left\{\underline{p}_2, \frac{\underline{\sigma}(M_{t})}{2\bar{p}_6 \bar{p}_3\mu\|M_{t}\|}\right\} ~\text{and}\\ &\eta_t \leq \frac{1}{\|M_{t}\|} \cdot \min\left\{\frac{\underline{p}_4\gamma_t}{\bar{p}_3\delta_t}, \underline{p}_7, \frac{1}{\bar{l}}, \frac{\underline{p}_4}{\bar{p}_8}, \frac{\underline{\alpha}}{4\bar{\kappa} ^2 \bar{p}_5\bar{p}_8}, \frac{1}{2\bar{p}_5\bar{p}_8} \right\}, \end{align*} \]

then \( \{K_t\}\) of Algorithm 2 is sequentially strong stable with parameters \( (\bar{\kappa}, \underline{\alpha})\) . Moreover, the state is bounded as (50).

Proof

The proof follows the same lines as those of Lemmas 17 and 18, and is omitted due to space limitations.

Noting that the condition on stepsize in (17) can be further simplified by using \( \gamma_t/\delta_t \geq 1/\underline{p}_2\) , the proof is completed.

B Proofs in Section 4

B.1 Proof of Lemma 7

First, we derive the natural policy gradient descent over \( V\) . Let \( \psi = \text{vec}(V^{\top})\) . Then, the covariance parameterization between \( K\) and \( V\) (13) can be expressed in terms of \( \theta\) and \( \psi\)

\[ \begin{equation} \theta = \text{vec}\left(V^{\top} \overline{U}_0^{\top}\right) = \left(\overline{U}_0 \otimes I_n\right)\text{vec}(V^{\top}) = \left(\overline{U}_0\otimes I_n\right)\psi \end{equation} \]

(54)

with a subspace constraint \( (\overline{X}_0 \otimes I_n)\psi = \text{vec}(I_n). \) Thus, the FIM with respect to \( \psi\) follows from the transformation rule

\[ \begin{equation} F_\psi = \frac{\partial \theta^{\top}}{\partial \psi} F_\theta \frac{\partial \theta}{\partial \psi} = \left(\overline{U}_0\otimes I_n\right)^{\top} F_{\theta} \left(\overline{U}_0 \otimes I_n\right). \end{equation} \]

(55)

Since \( \psi\) lies in an affine space, we restrict the Fisher inverse to its tangent space \( \mathcal{N}(\overline{X}_0\otimes I_n)\) . Let \( O\) be a matrix whose columns form an orthonormal basis of \( \mathcal{N}(\overline{X}_0)\) . Then, the corresponding orthonormal basis of \( \mathcal{N}(\overline{X}_0\otimes I_n)\) is \( O\otimes I_n\) , and the Fisher inverse along the tangent space is given by \( ((O\otimes I_n)^{\top}F_\psi (O\otimes I_n))^{-1}.\) Finally, the natural gradient method updates \( \psi\) in the feasible affine space as \( \psi^+ = \psi - \eta \Delta \psi\) , where \( \Delta \psi= (O\otimes I_n) ((O\otimes I_n)^{\top}F_\psi (O\otimes I_n))^{-1} (O\otimes I_n)^{\top} \nabla J(\psi). \)

Next, we show that the change in \( \psi\) leads to the change of \( \theta\) in (30). By the parameterization (54), \( \theta^+ =(\overline{U}_0\otimes I_n)\psi^+ = \theta - \eta (\overline{U}_0\otimes I_n) \Delta \psi\) . By the chain rule, it holds that \( \nabla J(\psi) = (\overline{U}_0\otimes I_n)^{\top} \nabla C(\theta)\) . Note that by Lemma 6, the matrix \( (\overline{U}_0\otimes I_n)(O\otimes I_n) = (\overline{U}_0O)\otimes I_n \) is invertible. Then, combining (55) and the expression of \( \Delta \psi\) , it follows that the update of \( \theta\) is \( \theta^+ = \theta - \eta F_{\theta}^{-1}\nabla C(\theta)\) , which is exactly (30).

Finally, noting that the above derivations hold under the estimated model \( (\hat{A}_{t+1}, \hat{B}_{t+1})\) , the proof is completed.

B.2 Proof of Theorem 3

The proof follows the same vein as that of Theorem 1. Consider the policy gradient update with estimated \( (\hat{A}, \hat{B})\)

\[ \begin{equation} K' = K - 2\eta \hat{E} \end{equation} \]

(56)

and the update with the ground-truth \( (A,B)\)

\[ \begin{equation} K'' = K - 2\eta {E}, \end{equation} \]

(57)

where \( E\) is defined in (43). We first quantify the difference between the exact and approximated natural gradient \( \hat{E}-E\) , and then quantify \( C(K'')-C(K')\) .

Lemma 23

Let \( K \in \mathcal{S}\) . There exists a polynomial \( p_9=\text{poly}(C(K)/\underline{\sigma}(Q), \|A\|, \|B\|, \|R\|, 1/\underline{\sigma}(R))\) such that if \( \delta / \gamma \leq p_2\) , then \( \| \hat{E}-E \| \leq {p_9\delta}/{\gamma}. \)

The proof follows from that of Lemma 12 and is omitted.

Lemma 24

Let \( K \in \mathcal{S}\) and \( p_{10} = {1}/{\|R+B^{\top}PB\|}\) . If

\[ \frac{\delta}{\gamma} \leq p_2 ~~\text{and}~~ \eta \leq \min\left\{\frac{p_4\gamma}{2p_9\delta}, p_{10}\right\}, \]

then it holds that \( \left| C(K'') - C(K')\right| \leq {\eta p_6p_9 \delta}/{\gamma}. \)

Proof

By our hypothesis and Lemma 23, it holds that \( \|K' - K''\| = 2\eta \| \hat{E}-E\|\leq {2\eta p_9 \delta}/{\gamma} \leq p_4. \) By taking \( \eta \leq p_{10}\) , it follows from [18, Theorem 7] that the update (57) returns a stabilizing policy \( K''\) . Furthermore, we can apply Lemma 13 to obtain that \( |C(K') - C(K'')| \leq p_6 \|K' - K''\| \leq {\eta p_6p_9 \delta}/{\gamma}, \) which completes the proof.

By [18, Lemma 15], if \( \eta \leq p_{10}\) , then the progress of the exact natural gradient descent (57) satisfies

\[ C(K'') - C(K) \leq -\frac{\eta}{2\mu} (C(K) - C^*). \]

Together with Lemma 24, the progress of (56) is

\[ \begin{equation} C(K'') - C(K) \leq -\frac{\eta}{2\mu} (C(K) - C^*) + \frac{\eta p_6p_9 \delta}{\gamma}. \end{equation} \]

(58)

Define \( \overline{C} = C^* + C(K_{t_0}) + 1 + \frac{1}{2\|R\|\mu}\) . Then, we show that the cost is uniformly upper bounded by \( \overline{C}\) .

Lemma 25 (Boundedness of the cost)

If

\[ \begin{equation} \frac{\delta_t}{\gamma_t} \leq \min\left\{\underline{p}_2, \frac{1}{2\bar{p}_6 \bar{p}_9\mu}\right\} ~\text{and}~ \eta \leq \min\left\{\frac{\underline{p}_4\gamma_t}{2\bar{p}_9\delta_t}, \underline{p}_{10} \right\}, \end{equation} \]

(59)

then \( C(K_t)\) of PGAC with natural gradient has a uniform upper bound, i.e., \( C(K_t) \leq \overline{C}, \forall t\geq t_0\) .

Proof

The proof is based on mathematical induction. Clearly, the bound holds at \( t = t_0\) , i.e., \( C(K_{t_0}) \leq \overline{C}\) . Suppose that \( C(K_{t}) \leq \overline{C}\) for some \( t \geq t_0\) . Next, we show \( C(K_{t+1}) \leq \overline{C}\) .

By (58) and our hypothesis \( C(K_t) \leq \overline{C}\) , the gradient descent (32) yields \( C(K_{t+1}) - C(K_t) \leq -\frac{\eta}{2\mu} (C(K_t) - C^*) + \eta \bar{p}_6\bar{p}_9 \frac{\delta_{t+1}}{\gamma_{t+1}} \leq -\frac{\eta}{2\mu} (C(K_t) - C^*) + \frac{\eta}{2\mu}, \) where the last inequality follows from our condition on \( \delta_t/\gamma_t\) .

Consider two cases. If \( C(K_t) \geq C^* +1\) , then

\[ C(K_{t+1}) \leq C(K_t) - \frac{\eta}{2\mu} + \frac{\eta}{2\mu} = C(K_t) \leq \overline{C}. \]

Otherwise, if \( C(K_t) < C^* +1\) , then

\[ C(K_{t+1}) \leq C^* + 1 + \frac{\eta}{2\mu} \leq C^* + 1 + \frac{1}{2\|R\|\mu}\leq \overline{C}, \]

where the second inequality follows from \( \eta \leq \underline{p}_{10} \leq 1/\|R\|\) . The proof is completed.

Then, under the condition (59), it holds that

\[ \begin{align*} C(K_{t}) - C^* &\leq (1-\frac{\eta}{2\mu})^{t-t_0} (C(K_{t_0}) - C^*) \\ &+ \eta \bar{p}_6\bar{p}_9\sum_{i = t_0}^{t-1}(1-\frac{\eta}{2\mu})^{t-1-i} \frac{1}{\text{SNR}_i}. \end{align*} \]

Lemma 26

There exists \( \bar{p}_{11}\) as a function of \( \overline{C}\) such that, if

\[ \begin{equation} \begin{aligned} &\frac{\delta_t}{\gamma_t} \leq \min\left\{\underline{p}_2, \frac{1}{2\bar{p}_6 \bar{p}_9\mu}\right\},~ \text{and}\\ &\eta \leq \min\left\{\frac{\underline{p}_4\gamma_t}{2\bar{p}_9\delta_t}, \underline{p}_{10}, \frac{\underline{p}_4}{2\bar{p}_{11}}, \frac{\underline{\alpha}}{8\bar{\kappa} ^2 \bar{p}_5\bar{p}_{11}}, \frac{1}{4\bar{p}_5\bar{p}_{11}} \right\}, \end{aligned} \end{equation} \]

(60)

then \( \{K_t\}\) is sequentially strong stable with parameters \( (\bar{\kappa}, \underline{\alpha})\) , where \( \bar{\kappa}, \underline{\alpha}\) are the quantities at \( \overline{C}\) . Moreover, the state is bounded as (50).

Proof

The proof follows the same vein as that of Lemma 17, where we require \( \|\Sigma_{t+1} - \Sigma_t\|\) to be sufficiently small. By Lemma 13, if \( \|K_{t+1} - K_t\| \leq \underline{p}_4\) , then \( \|\Sigma_{t+1} - \Sigma_t\| \leq \bar{p}_5 \|K_{t+1} - K_t\|\) . Thus, we need \( \|K_{t+1} - K_t\| = 2\eta \|\hat{E}_{t+1}\|\) to be small. To this end, we provide a bound for \( \|\hat{E}_{t+1}\|\) . By Lemma 23, we have \( \|\hat{E}_{t+1} \| \leq \|E_{t+1}\| + {\bar{p}_9\delta_t}/{\gamma_t} \leq \|E_{t+1}\| + \bar{p}_9\underline{p}_2. \) Since \( C(K_t)\leq \overline{C}\) , the right-hand side has a uniform upper bound, denoted by \( \bar{p}_{11}\) . Thus, as long as \( \eta \leq \underline{p}_4/(2\bar{p}_{11})\) , we have \( \|K_{t+1} - K_t\| \leq \underline{p}_4\) . To ensure \( \|\Sigma_{t+1} - \Sigma_t\|<1/2\) , we need \( \eta \leq 1/(4\bar{p}_5\bar{p}_{11})\) . Furthermore, it follows that \( \|\Sigma_{t+1}^{-1}\Sigma_t\| \leq 1 + 2\bar{\kappa}^2 \|\Sigma_{t+1} - \Sigma_t\| \leq 1 + 2\bar{\kappa}^2 \bar{p}_5 \|K_{t+1}-K_t\| \leq 1 + 4\bar{\kappa}^2\bar{p}_5\bar{p}_{11}\eta\) . Since we also require \( 4\bar{\kappa}^2\bar{p}_5\bar{p}_{11}\eta \leq \underline{\alpha}/2\) , we let \( \eta \leq \underline{\alpha}/(8\bar{\kappa}^2\bar{p}_5\bar{p}_{11})\) . Combining all the bounds on \( \eta\) completes the proof.

B.3 Proof of Theorem 5

Let \( \eta = 1/2\) . Consider the Gauss-Newton update with estimated \( (\hat{A}, \hat{B})\)

\[ \begin{equation} K' = K - (R+\hat{B}^{\top}\hat{P}\hat{B})^{-1}\hat{E}, \end{equation} \]

(61)

and the update with the ground-truth \( (A,B)\)

\[ \begin{equation} K'' = K - (R+B^{\top}PB)^{-1}E, \end{equation} \]

(62)

where \( E\) is defined in (43). We first quantify the difference between the exact and approximated Gauss-Newton gradient, and then quantify \( C(K'')-C(K')\) .

Lemma 27

Let \( K \in \mathcal{S}\) . Then, there exist \( p_{12}\) as a monotonically decreasing function of \( C(K)\) and \( p_{13}=\text{poly}(C(K)/\underline{\sigma}(Q), \|A\|, \|B\|, \|R\|, 1/\underline{\sigma}(R))\) , such that if \( \delta / \gamma \leq p_{12}\) , then \( \| (R+\hat{B}^{\top}\hat{P}\hat{B})^{-1}\hat{E}-(R+B^{\top}PB)^{-1}E \| \leq {p_{13}\delta}/{\gamma}.\)

The proof follows from that of Lemma 12 and is omitted.

Lemma 28

Let \( K \in \mathcal{S}\) . If \( {\delta}/{\gamma} \leq \min\{p_{12}, {p_4}/{p_{13}} \}, \) then it holds that \( | C(K'') - C(K')| \leq { p_6p_{13} \delta}/{\gamma}. \)

Proof

By our hypothesis and Lemma 27, it holds that \( \|K' - K''\| = \| (R+\hat{B}^{\top}\hat{P}\hat{B})^{-1}\hat{E}-\left(R+B^{\top}PB\right)^{-1}E \| \leq {p_{13} \delta}/{\gamma} \leq p_4. \) Then, we can apply Lemma 13 to obtain that \( |C(K') - C(K'')| \leq p_6 \|K' - K''\| \leq { p_6p_{13}\delta}/{\gamma}, \) which completes the proof.

By [18, Lemma 8], the Gauss-Newton method (62) yields

\[ C(K'') - C(K) \leq -\frac{\eta}{\|\Sigma^*\|} (C(K) - C^*). \]

Together with Lemma 28, the progress of (61) is

\[ \begin{equation} C(K') - C(K) \leq -\frac{\eta}{\|\Sigma^*\|} (C(K) - C^*) + \frac{ p_6p_{13}\delta}{\gamma}. \end{equation} \]

(63)
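A useful observation (standard, though not stated explicitly above): with the ground-truth model, the Gauss-Newton step with \( \eta = 1/2\) simplifies to \( K' = -(R+B^{\top}PB)^{-1}B^{\top}PA\) , i.e., classical policy iteration (Hewer's algorithm). The sketch below, with illustrative matrices, shows convergence to the Riccati solution.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
Q, R = np.eye(2), np.eye(1)

K = np.array([[-0.4, -0.1]])  # any stabilizing initial gain
for _ in range(30):
    Acl = A + B @ K
    P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
    E = R @ K + B.T @ P @ Acl
    K = K - np.linalg.solve(R + B.T @ P @ B, E)  # Gauss-Newton, eta = 1/2

# The fixed point is the optimal LQR gain (u = Kx sign convention).
P_star = solve_discrete_are(A, B, Q, R)
K_star = -np.linalg.solve(R + B.T @ P_star @ B, B.T @ P_star @ A)
assert np.linalg.norm(K - K_star) < 1e-8
```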

Let \( \overline{C} = C^* + c + \frac{c}{2\|\Sigma^*\|}\) for any constant \( c>0\) . Then, we have the following result.

Lemma 29 (Boundedness of the cost)

If

\[ \begin{equation} \frac{\delta_t}{\gamma_t} \leq \min\left\{\underline{p}_{12}, \frac{\underline{p}_4}{\bar{p}_{13}}, \frac{c}{2\bar{p}_6\bar{p}_{13}\|\Sigma^*\|} \right\}, ~ C(K_{t_0} ) \leq \overline{C}, \end{equation} \]

(64)

then \( C(K_t)\) of Algorithm 3 satisfies \( C(K_t) \leq \overline{C}\) .

Proof

The proof is based on mathematical induction. By our condition on \( C(K_{t_0})\) , the bound holds at \( t = t_0\) . Suppose that \( C(K_t) \leq \overline{C}\) for some \( t \geq t_0\) . Next, we show \( C(K_{t+1}) \leq \overline{C}\) .

By (63) and our hypothesis \( C(K_t) \leq \overline{C}\) , the Gauss-Newton method (34) yields

\[ \begin{align*} C(K_{t+1}) - C(K_t) &\leq -\frac{1}{2\|\Sigma^*\|} (C(K_t) - C^*) + \frac{ \bar{p}_6\bar{p}_{13}\delta_{t+1}}{\gamma_{t+1}} \\ &\leq -\frac{1}{2\|\Sigma^*\|} (C(K_t) - C^*) + \frac{c}{2\|\Sigma^*\|} , \end{align*} \]

where the last inequality follows from our condition on \( \delta_t/\gamma_t\) .

Consider two cases. If \( C(K_t) \geq C^* +c\) , then \( C(K_{t+1}) \leq C(K_t) - {c}/{(2\|\Sigma^*\|)} + {c}/{(2\|\Sigma^*\|)} = C(K_t) \leq \overline{C}. \) Otherwise, if \( C(K_t) < C^* +c\) , then \( C(K_{t+1}) \leq C^* + c + {c}/{(2\|\Sigma^*\|)} = \overline{C}. \) The proof is completed.
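The mechanism behind this induction can be traced on a scalar surrogate of recursion (63): the optimality gap contracts geometrically up to an additive bias floor. A minimal sketch, where the contraction factor and bias are assumed illustrative values:

```python
# Scalar surrogate of recursion (63): the gap C(K_t) - C* contracts by
# (1 - rho) per step, up to an additive bias. Here rho stands for
# eta/||Sigma*|| and bias for p6*p13*delta/gamma; both are illustrative.
rho, bias = 0.5, 0.1
gap = 10.0                       # initial optimality gap C(K_0) - C*
for _ in range(100):
    gap = (1 - rho) * gap + bias
print(gap)  # settles near the fixed point bias/rho = 0.2
```

The iterate never exceeds its starting value once above the floor, and never leaves a neighborhood of the floor once below it, mirroring the two cases of the proof.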

Lemma 30

There exist a constant \( c\) and \( \bar{p}_{13}\) as a function of \( \overline{C}\) such that, if

\[ \begin{equation} \frac{\delta_t}{\gamma_t} \leq \min\left\{\underline{p}_{12}, \frac{\underline{p}_4}{\bar{p}_{13}}, \frac{c}{2\bar{p}_6\bar{p}_{13}\|\Sigma^*\|}, \frac{c}{\bar{p}_{13}} \right\}, \end{equation} \]

(65)

then \( \{K_t\}\) of Algorithm 3 is sequentially strongly stable with parameters \( (\bar{\kappa}, \underline{\alpha})\) , where \( \bar{\kappa}, \underline{\alpha}\) are the quantities at \( \overline{C}\) .

Proof

The proof follows in the same vein as that of Lemma 17, where we require \( \|\Sigma_{t+1} - \Sigma_t\|\) to be sufficiently small. By Lemma 3, if \( \|K_{t+1} - K_t\| \leq \underline{p}_4\) , then \( \|\Sigma_{t+1} - \Sigma_t\| \leq \bar{p}_5 \|K_{t+1} - K_t\|\) . Thus, we need \( \|K_{t+1} - K_t\| = \|(R+\hat{B}_{t+1}^{\top}\hat{P}_{t+1}\hat{B}_{t+1})^{-1}\hat{E}_{t+1}\|\) to be small. To this end, we provide a bound for \( \|(R+\hat{B}^{\top}\hat{P}\hat{B})^{-1}\hat{E}\|\) . By Lemma 27, we have \( \|(R+\hat{B}^{\top}\hat{P}\hat{B})^{-1}\hat{E} \| \leq \|(R+B^{\top}PB)^{-1}E\| + {\bar{p}_{13}\delta}/{\gamma}. \)

By [18, Lemma 11], it holds that \( \|(R+B^{\top}PB)^{-1}E\| \leq \left( {(C(K)- C^*)}/{\underline{\sigma}(R)} \right)^{\frac{1}{2}}. \) Since \( C(K_t)\leq \overline{C}\) , we have

\[ \begin{equation} \begin{aligned} \|K_{t+1} - K_t\| &= \|(R+\hat{B}_{t+1}^{\top}\hat{P}_{t+1}\hat{B}_{t+1})^{-1}\hat{E}_{t+1}\| \\ &\leq \left( \frac{c}{\underline{\sigma}(R)} + \frac{c}{2\|\Sigma^*\|\underline{\sigma}(R)} \right)^{\frac{1}{2}} + c. \end{aligned} \end{equation} \]

(66)

Thus, as long as the right-hand side of this inequality is smaller than \( \underline{p}_4\) , we have \( \|K_{t+1} - K_t\| \leq \underline{p}_4\) . To ensure \( \|\Sigma_{t+1} - \Sigma_t\|<1/2\) , we additionally require \( \|K_{t+1} - K_t\| \leq 1/(2\bar{p}_5)\) . Since \( \|\Sigma_{t+1}^{-1}\Sigma_t\| \leq 1 + 2\bar{\kappa}^2 \|\Sigma_{t+1} - \Sigma_t\| \leq 1 + 2\bar{\kappa}^2 \bar{p}_5 \|K_{t+1}-K_t\|\) , to ensure sequential stability we need \( 2\bar{\kappa}^2 \bar{p}_5 \|K_{t+1}-K_t\|\leq \underline{\alpha}/2\) . Noting that \( \|K_{t+1} - K_t\|\) is upper bounded as (66), these conditions can be ensured for a sufficiently small \( c\) .

The rest of the proof follows in the same vein as that of Theorem 1 and is omitted.

C Proof of Theorem 6

The idea is to show that, under the given selection of \( \lambda_t\) , the difference between the policy gradient \( \nabla{C}(K)\) and the regularized one \( \nabla\hat{C}_{\lambda}(K)\) scales linearly with \( \delta/\gamma\) . Hence, the results follow the proof of Theorem 1.

Lemma 31

Let \( K \in \mathcal{S}\) . Then, there exist \( p_{14}=\text{poly}(C(K)/\underline{\sigma}(Q), \|A\|, \|B\|, \|R\|, 1/\underline{\sigma}(R))\) and a constant \( c_0>0\) such that, if \( \delta / \gamma \leq p_2\) and \( \lambda \leq c_0\delta \gamma\) , then \( \| \nabla\hat{C}_{\lambda}(K) - \nabla{C}(K) \| \leq {p_{14}\delta}/{\gamma}. \)

Proof

The difference in the gradient can be bounded by

\[ \begin{equation} \begin{aligned} &\| \nabla\hat{C}_{\lambda}(K) - \nabla{C}(K) \| \\ &\leq \| \nabla\hat{C}_{\lambda}(K) - \nabla {C}_{\lambda}(K) \| + \| \nabla {C}_{\lambda}(K) - \nabla{C}(K) \|. \end{aligned} \end{equation} \]

(67)

We first bound the second term of (67). By Lemma 8, it holds that \( \left\| \nabla {C}_{\lambda}(K) - \nabla {C}(K)\right\| = 2\lambda\|(\Phi^{-1})_{uu}K + B^{\top}\tilde{P}(A+BK) + (\Phi^{-1})_{ux}\|\|\Sigma\|, \) where \( \tilde{P}\) is the solution to \( \tilde{P} = (\Phi^{-1})_{xx} + K^{\top}(\Phi^{-1})_{uu}K + K^{\top}(\Phi^{-1})_{ux} + (\Phi^{-1})_{xu}K + (A+BK)^{\top}\tilde{P}(A+BK). \) For the first term, we notice that \( \|(\Phi^{-1})_{uu}\|\leq \|\Phi^{-1}\|\) and \( \|(\Phi^{-1})_{ux}\|\leq \|\Phi^{-1}\|\) , and \( \|\tilde{P}\| \leq \|\Sigma\|\|(\Phi^{-1})_{xx} + K^{\top}(\Phi^{-1})_{uu}K + K^{\top}(\Phi^{-1})_{ux} + (\Phi^{-1})_{xu}K\| \leq \|\Phi^{-1}\| \|\Sigma\| (1+ \|K\|^2 + 2\|K\|). \) Then, there exists \( p_{15}=\text{poly}(C(K)/\underline{\sigma}(Q), \|A\|, \|B\|, \|R\|, 1/\underline{\sigma}(R))\) such that \( \left\| \nabla {C}_{\lambda}(K) - \nabla {C}(K)\right\| \leq p_{15} \lambda \|\Phi^{-1}\| \leq p_{15}c_0 {\delta}/{\gamma}, \) where the last inequality follows from \( \|\Phi^{-1}\| = 1/\underline{\sigma}(\Phi) \leq 1/\gamma^2\) and our condition on \( \lambda\) . Analogously, we can show that an upper bound of \( \| \nabla\hat{C}_{\lambda}(K) - \nabla {C}_{\lambda}(K) \|\) is linear in \( \delta/\gamma\) . Inserting these bounds into (67) completes the proof.
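The unregularized gradient \( \nabla C(K)\) that the regularized one is compared against admits the closed form \( 2E_K\Sigma_K\) of [18]. As a sanity check, a minimal scalar sketch verifying this closed form against a central finite difference, with the convention \( u = -kx\) of [18]; all numeric values are illustrative assumptions:

```python
# Scalar LQR, convention of [18]: u = -k x, closed loop acl = a - b k.
# All numeric values are illustrative.
a, b, q, r, s0 = 0.9, 0.5, 1.0, 0.1, 1.0

def cost_and_grad(k):
    acl = a - b * k
    assert abs(acl) < 1                      # k must be stabilizing
    p = (q + r * k**2) / (1 - acl**2)        # P = q + r k^2 + acl^2 P
    sigma = s0 / (1 - acl**2)                # Sigma = sum_t acl^(2t) s0
    e = (r + b * p * b) * k - b * p * a      # E_K
    return p * s0, 2 * e * sigma             # C(k) and nabla C(k) = 2 E Sigma

k, eps = 0.5, 1e-6
c, g = cost_and_grad(k)
fd = (cost_and_grad(k + eps)[0] - cost_and_grad(k - eps)[0]) / (2 * eps)
print(abs(fd - g))  # close to zero: closed form matches the finite difference
```

Any mismatch between the analytic and numerical gradients would show up here before it could contaminate bounds like Lemma 31.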

The rest of the proof follows in the same vein as that of Theorem 1 and is omitted due to space limitations.

Feiran Zhao received the B.S. degree in Control Science and Engineering from the Harbin Institute of Technology, China, in 2018, and the Ph.D. degree in Control Science and Engineering from the Tsinghua University, China, in 2024. He held a visiting position at ETH Zürich. His research interests include policy optimization, data-driven control, adaptive control and their applications.
Florian Dörfler is a Professor at the Automatic Control Laboratory at ETH Zürich. He received his Ph.D. degree in Mechanical Engineering from the University of California at Santa Barbara in 2013, and a Diplom degree in Engineering Cybernetics from the University of Stuttgart in 2008. From 2013 to 2014 he was an Assistant Professor at the University of California Los Angeles. He has been serving as the Associate Head of the ETH Zürich Department of Information Technology and Electrical Engineering from 2021 until 2022. His research interests are centered around automatic control, system theory, and optimization. His particular foci are on network systems, data-driven settings, and applications to power systems. He is a recipient of the distinguished young research awards by IFAC (Manfred Thoma Medal 2020) and EUCA (European Control Award 2020). His team has received many best thesis and best paper awards in the top venues of control, power systems, power electronics, and circuits and systems. He is currently serving on the council of the European Control Association and as a senior editor of Automatica.
Alessandro Chiuso (Fellow, IEEE) is Professor with the Department of Information Engineering, Università di Padova. He received the “Laurea” degree summa cum laude in Telecommunication Engineering from the University of Padova in July 1996 and the Ph.D. degree (Dottorato di ricerca) in System Engineering from the University of Bologna in 2000. He has been a visiting scholar with the Dept. of Electrical Engineering, Washington University St. Louis and Post-Doctoral fellow with the Dept. Mathematics, Royal Institute of Technology, Sweden. He joined the University of Padova as an Assistant Professor in 2001, Associate Professor in 2006 and then Full Professor since 2017. He currently serves as Editor (System Identification and Filtering) for Automatica. He has served as an Associate Editor for Automatica, IEEE Transactions on Automatic Control, IEEE Transactions on Control Systems Technology, European Journal of Control and MCSS. He is chair of the IFAC Coordinating Committee on Signals and Systems. He has been General Chair of the IFAC Symposium on System Identification, 2021 and he is a Fellow of IEEE (Class 2022). His research interests are mainly at the intersection of Machine Learning, Estimation, Identification and their Applications, Computer Vision and Networked Estimation and Control.

References

[1] Anuradha M. Annaswamy and Alexander L. Fradkov, "A historical perspective of adaptive control and learning," Annual Reviews in Control, vol. 52, pp. 18–41, 2021.

[2] Anuradha M. Annaswamy, "Adaptive control and intersections with reinforcement learning," Annual Review of Control, Robotics, and Autonomous Systems, vol. 6, pp. 65–93, 2023.

[3] R. F. Drenick and R. A. Shahbender, "Adaptive servomechanisms," Transactions of the American Institute of Electrical Engineers, Part II: Applications and Industry, vol. 76, no. 5, pp. 286–292, 1957.

[4] Petros A. Ioannou and Jing Sun, Robust Adaptive Control. Upper Saddle River, NJ: PTR Prentice-Hall, 1996.

[5] Zhongsheng Hou and Shuangshuang Xiong, "On model-free adaptive control and its stability analysis," IEEE Transactions on Automatic Control, vol. 64, no. 11, pp. 4555–4569, 2019, doi: 10.1109/TAC.2019.2894586.

[6] George Zames, "Adaptive control: Towards a complexity-based general theory," Automatica, vol. 34, no. 10, pp. 1161–1167, 1998.

[7] Frank L. Lewis and Draguna Vrabie, "Reinforcement learning and adaptive dynamic programming for feedback control," IEEE Circuits and Systems Magazine, vol. 9, no. 3, pp. 32–50, 2009.

[8] Tao Bian and Zhong-Ping Jiang, "Value iteration and adaptive dynamic programming for data-driven adaptive optimal control design," Automatica, vol. 71, pp. 348–360, 2016.

[9] Horia Mania, Stephen Tu, and Benjamin Recht, "Certainty equivalence is efficient for linear quadratic control," in Advances in Neural Information Processing Systems, vol. 32, Curran Associates, Inc., 2019.

[10] Yiwen Lu and Yilin Mo, "Almost surely \( \sqrt{T}\) regret bound for adaptive LQR," IEEE Transactions on Automatic Control (early access), 2025.

[11] Max Simchowitz and Dylan Foster, "Naive exploration is optimal for online LQR," in International Conference on Machine Learning, PMLR, 2020, pp. 8937–8948.

[12] Alon Cohen, Tomer Koren, and Yishay Mansour, "Learning linear-quadratic regulators efficiently with only \( \sqrt{T}\) regret," in International Conference on Machine Learning, PMLR, 2019, pp. 1300–1309.

[13] Jafar Abbaszadeh Chekan and Cedric Langbort, "Fully adaptive regret-guaranteed algorithm for control of linear quadratic systems," arXiv preprint arXiv:2406.07746, 2024.

[14] Iasson Karafyllis, Maria Kontorinaki, and Miroslav Krstic, "Adaptive control by regulation-triggered batch least squares," IEEE Transactions on Automatic Control, vol. 65, no. 7, pp. 2842–2855, 2019.

[15] Bowen Song and Andrea Iannelli, "Robustness of online identification-based policy iteration to noisy data," arXiv preprint arXiv:2504.07627, 2025.

[16] Brian D. O. Anderson, "Failures of adaptive control theory and their resolution," Communications in Information and Systems, vol. 5, no. 1, pp. 1–20, 2005.

[17] Bin Hu, Kaiqing Zhang, Na Li, Mehran Mesbahi, Maryam Fazel, and Tamer Başar, "Toward a theoretical foundation of policy optimization for learning control policies," Annual Review of Control, Robotics, and Autonomous Systems, vol. 6, pp. 123–158, 2023.

[18] Maryam Fazel, Rong Ge, Sham Kakade, and Mehran Mesbahi, "Global convergence of policy gradient methods for the linear quadratic regulator," in International Conference on Machine Learning, 2018, pp. 1467–1476.

[19] Hesameddin Mohammadi, Armin Zare, Mahdi Soltanolkotabi, and Mihailo R. Jovanović, "Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem," IEEE Transactions on Automatic Control, vol. 67, no. 5, pp. 2435–2450, 2022, doi: 10.1109/TAC.2021.3087455.

[20] Feiran Zhao, Keyou You, and Tamer Başar, "Global convergence of policy gradient primal-dual methods for risk-constrained LQRs," IEEE Transactions on Automatic Control, vol. 68, no. 5, pp. 2934–2949, 2023, doi: 10.1109/TAC.2023.3234176.

[21] Feiran Zhao, Florian Dörfler, Alessandro Chiuso, and Keyou You, "Data-enabled policy optimization for direct adaptive learning of the LQR," arXiv preprint arXiv:2401.14871, 2024.

[22] Anuradha M. Annaswamy, Anubhav Guha, Yingnan Cui, Sunbochen Tang, Peter A. Fisher, and Joseph E. Gaudio, "Integration of adaptive control and reinforcement learning for real-time control and learning," IEEE Transactions on Automatic Control, vol. 68, no. 12, pp. 7740–7755, 2023.

[23] Ivan Markovsky and Florian Dörfler, "Behavioral systems theory in data-driven analysis, signal processing, and control," Annual Reviews in Control, vol. 52, pp. 42–64, 2021.

[24] Jeremy Coulson, John Lygeros, and Florian Dörfler, "Data-enabled predictive control: In the shallows of the DeePC," in 18th European Control Conference (ECC), 2019, pp. 307–312.

[25] Claudio De Persis and Pietro Tesi, "Formulas for data-driven control: Stabilization, optimality, and robustness," IEEE Transactions on Automatic Control, vol. 65, no. 3, pp. 909–924, 2019.

[26] Florian Dörfler, Pietro Tesi, and Claudio De Persis, "On the certainty-equivalence approach to direct data-driven LQR design," IEEE Transactions on Automatic Control, vol. 68, no. 12, pp. 7989–7996, 2023, doi: 10.1109/TAC.2023.3253787.

[27] Florian Dörfler, Pietro Tesi, and Claudio De Persis, "On the role of regularization in direct data-driven LQR control," in 61st IEEE Conference on Decision and Control (CDC), 2022, pp. 1091–1098, doi: 10.1109/CDC51059.2022.9992770.

[28] Alessandro Chiuso, Marco Fabris, Valentina Breschi, and Simone Formentin, "Harnessing uncertainty for a separation principle in direct data-driven predictive control," Automatica, vol. 173, art. 112070, 2025.

[29] Feiran Zhao, Florian Dörfler, and Keyou You, "Data-enabled policy optimization for the linear quadratic regulator," in 62nd IEEE Conference on Decision and Control (CDC), 2023, pp. 6160–6165, doi: 10.1109/CDC49753.2023.10383470.

[30] Feiran Zhao, Ruohan Leng, Linbin Huang, Huanhai Xin, Keyou You, and Florian Dörfler, "Direct adaptive control of grid-connected power converters via output-feedback data-enabled policy optimization," arXiv preprint arXiv:2411.03909, 2024.

[31] Xuerui Wang, Feiran Zhao, Andres Jurisson, Florian Dörfler, and Roy S. Smith, "Unified aeroelastic flutter and loads control via data-enabled policy optimization," IEEE Transactions on Aerospace and Electronic Systems (early access), pp. 1–12, 2025, doi: 10.1109/TAES.2025.3566351.

[32] Niklas Persson, Feiran Zhao, Mojtaba Kaheni, Florian Dörfler, and Alessandro V. Papadopoulos, "An adaptive data-enabled policy optimization approach for autonomous bicycle control," arXiv preprint arXiv:2502.13676, 2025.

[33] Gary Hewer, "An iterative technique for the computation of the steady state gains for the discrete optimal regulator," IEEE Transactions on Automatic Control, vol. 16, no. 4, pp. 382–384, 1971.

[34] Feiran Zhao, Alessandro Chiuso, and Florian Dörfler, "Regularization for covariance parameterization of direct data-driven LQR control," arXiv preprint arXiv:2503.02985, 2025.

[35] Sarah Dean, Horia Mania, Nikolai Matni, Benjamin Recht, and Stephen Tu, "On the sample complexity of the linear quadratic regulator," Foundations of Computational Mathematics, vol. 20, no. 4, pp. 633–679, 2020.

[36] Brian D. O. Anderson and John B. Moore, Optimal Control: Linear Quadratic Methods. Courier Corporation, 2007.

[37] Jan C. Willems, Paolo Rapisarda, Ivan Markovsky, and Bart L. M. De Moor, "A note on persistency of excitation," Systems & Control Letters, vol. 54, no. 4, pp. 325–329, 2005.

[38] Henk J. Van Waarde, Jaap Eising, M. Kanat Camlibel, and Harry L. Trentelman, "The informativity approach: To data-driven analysis and control," IEEE Control Systems Magazine, vol. 43, no. 6, pp. 32–66, 2023.

[39] Jeremy Coulson, Henk J. Van Waarde, John Lygeros, and Florian Dörfler, "A quantitative notion of persistency of excitation and the robust fundamental lemma," IEEE Control Systems Letters, vol. 7, pp. 1243–1248, 2022.

[40] Jingjing Bu, Afshin Mesbahi, Maryam Fazel, and Mehran Mesbahi, "LQR," arXiv preprint arXiv:1907.08921, 2019.

[41] Xiong Zeng, Laurent Bako, and Necmiye Ozay, "Noise sensitivity of the semidefinite programs for direct data-driven LQR," arXiv preprint arXiv:2412.19705, 2024.
