
    Reference tracking

    Overview

    The controllers we have designed so far make the state converge to zero — that is, they make

    $$\lim_{t \to \infty} x(t) = 0.$$

    Suppose we want the state to converge to something else — that is, to make

    $$\lim_{t \to \infty} x(t) = x_\text{des}$$

    for some desired state $x_\text{des} \neq 0$.

    We will see that, under certain conditions, this is easy to do.

    Reference tracking with full state feedback

    Consider the dynamic model

    $$\dot{m} = f(m, n)$$

    where $m \in \mathbb{R}^{n_x}$ is the state, $n \in \mathbb{R}^{n_u}$ is the input, and $f$ is the function that describes the dynamics. Suppose we linearize this model about some equilibrium point $(m_e, n_e)$ to produce the state-space model

    $$\dot{x} = Ax + Bu$$

    where

    $$x = m - m_e, \qquad u = n - n_e, \qquad A = \frac{\partial f}{\partial m}\bigg|_{(m_e, n_e)}, \qquad B = \frac{\partial f}{\partial n}\bigg|_{(m_e, n_e)}.$$

    Suppose we design linear state feedback

    $$u = -Kx$$

    that would make the closed-loop system

    $$\dot{x} = (A - BK)x$$

    asymptotically stable — that is, that would make

    $$\lim_{t \to \infty} x(t) = 0.$$

    Denote the standard basis for $\mathbb{R}^{n_x}$ by

    $$e_1, \; e_2, \; \dotsc, \; e_{n_x},$$

    where $e_j$ is the vector whose $j$'th element is one and whose other elements are zero. Suppose there is some index $i$ that satisfies

    $$f(m + e_i \delta, n) = f(m, n) \qquad \text{for all } m, n, \text{ and } \delta.$$

    That is, suppose the function $f$ is constant in — or, does not vary with — the $i$'th element of $m$. Then, the following three things are true:


    Invariance of equilibrium point. Since

    $$0 = f(m_e, n_e) = f(m_e + e_i \delta, n_e),$$

    then $(m_e + e_i \delta, n_e)$ is also an equilibrium point for any $\delta$.


    Invariance of error in approximation of the dynamic model. The linear model

    $$\dot{x} = Ax + Bu$$

    is an approximation to the nonlinear model

    $$\dot{m} = f(m, n).$$

    The amount of error in this approximation is

    $$f(m, n) - \big( A(m - m_e) + B(n - n_e) \big).$$

    We will show that this approximation error is constant in — or, does not vary with — the $i$'th element of $m$. Before we do so, we will prove that

    $$A e_i = 0.$$

    First, we show that the $i$'th column of $\partial f / \partial m$ is zero:

    $$\frac{\partial f}{\partial m_i} = \lim_{\delta \to 0} \frac{f(m + e_i \delta, n) - f(m, n)}{\delta} = \lim_{\delta \to 0} \frac{0}{\delta} = 0.$$

    Next, denote the columns of $A$ by

    $$a_1, \; a_2, \; \dotsc, \; a_{n_x}.$$

    Then, we compute

    $$A e_i = \begin{bmatrix} a_1 & a_2 & \cdots & a_{n_x} \end{bmatrix} e_i = a_i = \frac{\partial f}{\partial m_i}\bigg|_{(m_e, n_e)} = 0.$$

    Now, for the approximation error:

    $$\begin{aligned} f(m + e_i \delta, n) - \big( A(m + e_i \delta - m_e) + B(n - n_e) \big) &= f(m, n) - \big( A(m - m_e) + A e_i \delta + B(n - n_e) \big) \\ &= f(m, n) - \big( A(m - m_e) + B(n - n_e) \big). \end{aligned}$$

    What this means is that our state-space model is just as accurate near $(m_e + e_i \delta, n_e)$ as it is near the equilibrium point $(m_e, n_e)$.


    Invariance of control. Suppose we implement linear state feedback with reference tracking:

    $$u = -K(x - x_\text{des})$$

    where

    $$x_\text{des} = e_i \delta$$

    for any $\delta$. Let's assume (for now) that $\delta$ is constant, and so $x_\text{des}$ is also constant. What will $x$ converge to in this case? Let's find out. First, we define the error

    $$x_\text{err} = x - x_\text{des}$$

    and note that

    $$\dot{x}_\text{err} = \dot{x}.$$

    Second, we derive an expression for the closed-loop system in terms of this error:

    $$\begin{aligned} \dot{x}_\text{err} = \dot{x} &= Ax + Bu \\ &= A(x_\text{err} + x_\text{des}) - BK x_\text{err} \\ &= (A - BK) x_\text{err} + A e_i \delta \\ &= (A - BK) x_\text{err}, \end{aligned}$$

    where the last step uses the fact, shown above, that $A e_i = 0$. This means that

    $$\lim_{t \to \infty} x_\text{err}(t) = 0,$$

    or equivalently that

    $$\lim_{t \to \infty} x(t) = x_\text{des},$$

    so long as all eigenvalues of $A - BK$ have negative real part — exactly the same condition under which the closed-loop system without reference tracking would have been asymptotically stable.
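
    For concreteness, here is a hypothetical example of a system that satisfies this condition (it is an illustration only, not a system discussed above). Consider a double integrator, i.e., a unit point mass with position $m_1$, velocity $m_2$, and applied force $n$:

    $$\dot{m} = f(m, n) = \begin{bmatrix} m_2 \\ n \end{bmatrix}.$$

    Since $f$ does not depend on $m_1$, the condition holds with $i = 1$, so linear state feedback with reference tracking can make the position converge to any desired value.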

    Reference tracking with full state feedback

    Consider a system

    $$\dot{m} = f(m, n)$$

    that satisfies

    $$f(m + e_i \delta, n) = f(m, n) \qquad \text{for all } m, n, \text{ and } \delta$$

    for some index $i$. Linearize this system about an equilibrium point $(m_e, n_e)$ to produce the state-space model

    $$\dot{x} = Ax + Bu$$

    where

    $$x = m - m_e, \qquad u = n - n_e, \qquad A = \frac{\partial f}{\partial m}\bigg|_{(m_e, n_e)}, \qquad B = \frac{\partial f}{\partial n}\bigg|_{(m_e, n_e)}.$$

    Apply linear state feedback with reference tracking as

    $$u = -K(x - x_\text{des})$$

    where

    $$x_\text{des} = e_i \delta$$

    for any constant $\delta$. Then,

    $$\lim_{t \to \infty} m(t) = m_e + e_i \delta$$

    if and only if all eigenvalues of $A - BK$ have negative real part.
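
    As a quick numerical check of this result, here is a minimal simulation sketch in Python. It is an illustration only: the double-integrator system, the pole locations, the desired value, and the use of NumPy/SciPy are assumptions, not part of the text above.

        import numpy as np
        from scipy.signal import place_poles

        # Hypothetical system (the double integrator from the example above):
        #   d(m1)/dt = m2,  d(m2)/dt = n,
        # so the dynamics do not depend on m1 and we can track m1.
        A = np.array([[0.0, 1.0],
                      [0.0, 0.0]])
        B = np.array([[0.0],
                      [1.0]])

        # Choose K so that all eigenvalues of A - B K have negative real part.
        K = place_poles(A, B, [-2.0, -3.0]).gain_matrix

        # Desired state x_des = e_1 * delta (track the first element of the state).
        delta = 5.0
        x_des = np.array([delta, 0.0])

        # Simulate dx/dt = A x + B u with u = -K (x - x_des) by Euler integration.
        x = np.array([0.0, 0.0])
        dt = 0.001
        for _ in range(10_000):
            u = -K @ (x - x_des)
            x = x + dt * (A @ x + B @ u)

        print(x)  # expect something close to x_des = [5, 0]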

    Reference tracking with partial state feedback

    Consider the system

    $$\dot{m} = f(m, n), \qquad o = g(m, n)$$

    where $m$ is the state, $n$ is the input, $o$ is the output, $f$ is the function that describes the dynamics, and $g$ is the function that describes the sensor model. Suppose we linearize this model about some equilibrium point $(m_e, n_e)$ to produce the state-space model

    $$\dot{x} = Ax + Bu, \qquad y = Cx + Du$$

    where

    $$x = m - m_e, \qquad u = n - n_e, \qquad y = o - g(m_e, n_e),$$

    $$A = \frac{\partial f}{\partial m}\bigg|_{(m_e, n_e)}, \qquad B = \frac{\partial f}{\partial n}\bigg|_{(m_e, n_e)}, \qquad C = \frac{\partial g}{\partial m}\bigg|_{(m_e, n_e)}, \qquad D = \frac{\partial g}{\partial n}\bigg|_{(m_e, n_e)}.$$

    Suppose we design a controller

    $$u = -K\hat{x}$$

    and observer

    $$\dot{\hat{x}} = A\hat{x} + Bu - L(C\hat{x} + Du - y)$$

    that would make the closed-loop system

    $$\begin{bmatrix} \dot{x} \\ \dot{x}_\text{obs} \end{bmatrix} = \begin{bmatrix} A - BK & -BK \\ 0 & A - LC \end{bmatrix} \begin{bmatrix} x \\ x_\text{obs} \end{bmatrix}$$

    asymptotically stable — that is, that would make

    $$\lim_{t \to \infty} x(t) = 0 \qquad \text{and} \qquad \lim_{t \to \infty} x_\text{obs}(t) = 0,$$

    where

    $$x_\text{obs} = \hat{x} - x$$

    is the error in the state estimate. Suppose, as for reference tracking with full state feedback, that there is some index $i$ for which

    $$f(m + e_i \delta, n) = f(m, n) \qquad \text{for all } m, n, \text{ and } \delta.$$

    This implies invariance of equilibrium point and invariance of error in approximation of the dynamic model, just like before. Suppose it is also true that, for some constant vector $c$, the sensor model satisfies

    $$g(m + e_i \delta, n) = g(m, n) + c \delta \qquad \text{for all } m, n, \text{ and } \delta.$$

    Then, the following two more things are true:


    Invariance of error in approximation of the sensor model. The linear model

    $$y = Cx + Du$$

    is an approximation to the nonlinear model

    $$o = g(m, n).$$

    The amount of error in this approximation is

    $$g(m, n) - \big( g(m_e, n_e) + C(m - m_e) + D(n - n_e) \big).$$

    We will show that this approximation error is constant in — or, does not vary with — the $i$'th element of $m$. First, we show that the $i$'th column of $\partial g / \partial m$ is $c$:

    $$\frac{\partial g}{\partial m_i} = \lim_{\delta \to 0} \frac{g(m + e_i \delta, n) - g(m, n)}{\delta} = \lim_{\delta \to 0} \frac{c \delta}{\delta} = c.$$

    Next, denote the columns of $C$ by

    $$c_1, \; c_2, \; \dotsc, \; c_{n_x}.$$

    Then, we compute

    $$C e_i = \begin{bmatrix} c_1 & c_2 & \cdots & c_{n_x} \end{bmatrix} e_i = c_i = \frac{\partial g}{\partial m_i}\bigg|_{(m_e, n_e)} = c.$$

    Now, for the approximation error:

    $$\begin{aligned} g(m + e_i \delta, n) - \big( g(m_e, n_e) + C(m + e_i \delta - m_e) + D(n - n_e) \big) &= g(m, n) + c \delta - \big( g(m_e, n_e) + C(m - m_e) + C e_i \delta + D(n - n_e) \big) \\ &= g(m, n) - \big( g(m_e, n_e) + C(m - m_e) + D(n - n_e) \big). \end{aligned}$$

    What this means is that our state-space model is just as accurate near $(m_e + e_i \delta, n_e)$ as it is near the equilibrium point $(m_e, n_e)$.


    Invariance of control. Suppose we implement linear state feedback with reference tracking:

    $$u = -K(\hat{x} - x_\text{des})$$

    where

    $$x_\text{des} = e_i \delta$$

    for any $\delta$. Let's assume (for now) that $\delta$ is constant, and so $x_\text{des}$ is also constant. What will $x$ converge to in this case? Let's find out. First, we define the state error

    $$x_\text{err} = x - x_\text{des}$$

    and the state estimate error

    $$x_\text{obs} = \hat{x} - x.$$

    Second, we derive an expression for the closed-loop system in terms of these errors. Let's start with the state error:

    $$\begin{aligned} \dot{x}_\text{err} = \dot{x} &= Ax + Bu \\ &= A(x_\text{err} + x_\text{des}) - BK(\hat{x} - x_\text{des}) \\ &= (A - BK) x_\text{err} - BK x_\text{obs} + A e_i \delta \\ &= (A - BK) x_\text{err} - BK x_\text{obs}, \end{aligned}$$

    where we have used $\hat{x} - x_\text{des} = x_\text{err} + x_\text{obs}$ and, in the last step, the fact that $A e_i = 0$. Now, for the state estimate error:

    $$\begin{aligned} \dot{x}_\text{obs} = \dot{\hat{x}} - \dot{x} &= \big( A\hat{x} + Bu - L(C\hat{x} + Du - y) \big) - \big( Ax + Bu \big) \\ &= A x_\text{obs} - LC(\hat{x} - x) \\ &= (A - LC) x_\text{obs}. \end{aligned}$$

    Putting these together, we have

    $$\begin{bmatrix} \dot{x}_\text{err} \\ \dot{x}_\text{obs} \end{bmatrix} = \begin{bmatrix} A - BK & -BK \\ 0 & A - LC \end{bmatrix} \begin{bmatrix} x_\text{err} \\ x_\text{obs} \end{bmatrix}.$$

    This means that

    $$\lim_{t \to \infty} x_\text{err}(t) = 0 \qquad \text{and} \qquad \lim_{t \to \infty} x_\text{obs}(t) = 0,$$

    or equivalently that

    $$\lim_{t \to \infty} x(t) = x_\text{des},$$

    so long as all eigenvalues of $A - BK$ and all eigenvalues of $A - LC$ have negative real part — exactly the same conditions under which the closed-loop system without reference tracking would have been asymptotically stable.

    Reference tracking with partial state feedback

    Consider a system

    $$\dot{m} = f(m, n), \qquad o = g(m, n)$$

    that satisfies

    $$f(m + e_i \delta, n) = f(m, n) \qquad \text{for all } m, n, \text{ and } \delta$$

    and

    $$g(m + e_i \delta, n) = g(m, n) + c \delta \qquad \text{for all } m, n, \text{ and } \delta$$

    for some index $i$ and some constant vector $c$. Linearize this system about some equilibrium point $(m_e, n_e)$ to produce the state-space model

    $$\dot{x} = Ax + Bu, \qquad y = Cx + Du$$

    where

    $$x = m - m_e, \qquad u = n - n_e, \qquad y = o - g(m_e, n_e),$$

    $$A = \frac{\partial f}{\partial m}\bigg|_{(m_e, n_e)}, \qquad B = \frac{\partial f}{\partial n}\bigg|_{(m_e, n_e)}, \qquad C = \frac{\partial g}{\partial m}\bigg|_{(m_e, n_e)}, \qquad D = \frac{\partial g}{\partial n}\bigg|_{(m_e, n_e)}.$$

    Apply the observer

    $$\dot{\hat{x}} = A\hat{x} + Bu - L(C\hat{x} + Du - y)$$

    and the controller (with reference tracking)

    $$u = -K(\hat{x} - x_\text{des})$$

    where

    $$x_\text{des} = e_i \delta$$

    for any constant $\delta$. Then,

    $$\lim_{t \to \infty} m(t) = m_e + e_i \delta$$

    if and only if all eigenvalues of $A - BK$ and all eigenvalues of $A - LC$ have negative real part.
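
    Again as a numerical check, here is a minimal simulation sketch in Python for the partial state feedback case. As before, it is an illustration only: the double-integrator system with a position-only measurement, the pole locations, the desired value, and the use of NumPy/SciPy are assumptions, not part of the text above.

        import numpy as np
        from scipy.signal import place_poles

        # Hypothetical system: a double integrator with a position measurement,
        #   d(m1)/dt = m2,  d(m2)/dt = n,  o = m1,
        # so f does not depend on m1 and g(m + e1 * delta, n) = g(m, n) + delta.
        A = np.array([[0.0, 1.0],
                      [0.0, 0.0]])
        B = np.array([[0.0],
                      [1.0]])
        C = np.array([[1.0, 0.0]])

        # Controller and observer gains by pole placement (arbitrary stable poles).
        K = place_poles(A, B, [-2.0, -3.0]).gain_matrix
        L = place_poles(A.T, C.T, [-4.0, -5.0]).gain_matrix.T

        # Desired state x_des = e_1 * delta.
        x_des = np.array([5.0, 0.0])

        # Simulate the closed loop with u = -K (xhat - x_des) by Euler integration.
        x = np.array([0.0, 0.0])      # true state
        xhat = np.array([0.0, 0.0])   # state estimate
        dt = 0.001
        for _ in range(10_000):
            y = C @ x                                                    # measurement
            u = -K @ (xhat - x_des)                                      # controller
            xhat = xhat + dt * (A @ xhat + B @ u - L @ (C @ xhat - y))   # observer
            x = x + dt * (A @ x + B @ u)                                 # dynamics

        print(x)  # expect something close to x_des = [5, 0]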

    Tracking more than one element of the state

    Our discussion of reference tracking with full state feedback and with partial state feedback has assumed that we want to track desired values of exactly one element of the nonlinear state $m$. All of this generalizes immediately to the case where we want to track desired values of more than one element of $m$.

    In particular, suppose there are two indices $i_1$ and $i_2$ that satisfy

    $$f(m + e_{i_1} \delta_1, n) = f(m, n) \qquad \text{and} \qquad f(m + e_{i_2} \delta_2, n) = f(m, n) \qquad \text{for all } m, n, \delta_1, \text{ and } \delta_2$$

    and, in the case of partial state feedback, that also satisfy

    $$g(m + e_{i_1} \delta_1, n) = g(m, n) + c_{i_1} \delta_1 \qquad \text{and} \qquad g(m + e_{i_2} \delta_2, n) = g(m, n) + c_{i_2} \delta_2$$

    for constant vectors $c_{i_1}$ and $c_{i_2}$. Then, choosing

    $$x_\text{des} = e_{i_1} \delta_1 + e_{i_2} \delta_2$$

    would produce the same results that were derived previously.
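
    The same construction extends beyond two indices. As a small extrapolation of the statement above (not spelled out in the text), if indices $i_1, \dotsc, i_k$ each satisfy these conditions, then choosing

    $$x_\text{des} = e_{i_1} \delta_1 + e_{i_2} \delta_2 + \dotsb + e_{i_k} \delta_k$$

    again produces the same results.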

    Choosing the desired state to keep errors small

    Our proof that tracking “works” relies largely on having shown that our state-space model is just as accurate near $(m_e + e_i \delta, n_e)$ as it is near the equilibrium point $(m_e, n_e)$. Equivalently, it relies on having shown that this model is just as accurate near $x = x_\text{des}$ as it is near $x = 0$.

    Despite this fact, it is still important to keep the state error

    $$x - x_\text{des}$$

    small. The reason is that the input is proportional to the error — for full state feedback as

    $$u = -K(x - x_\text{des})$$

    and for partial state feedback as

    $$u = -K(\hat{x} - x_\text{des}).$$

    So, if the error is large, the input may exceed bounds (e.g., limits on actuator torque). Since our state-space model does not include these bounds, it may be inaccurate when inputs are large.

    As a consequence, it is important in practice to choose $x_\text{des}$ so that the state error

    $$x - x_\text{des}$$

    remains small. Here is one common way to do this, both in the case of full state feedback and in the case of partial state feedback.

    Choosing the desired state in the case of full state feedback

    Suppose $x_\text{goal}$ is the state you actually want to achieve. Suppose $\epsilon > 0$ is an upper bound on the state error that you are willing to tolerate. Then, choose $x_\text{des}$ as in the sketch below.
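
    One common choice, given here only as a sketch consistent with the description above (the exact formula and the names $x_\text{goal}$ and $\epsilon$ are assumptions, not taken from the text), is to step from the current state toward the goal by at most $\epsilon$:

    $$x_\text{des} = x + \min\left( 1, \; \frac{\epsilon}{\left\| x_\text{goal} - x \right\|} \right) \left( x_\text{goal} - x \right).$$

    With this choice, $\| x - x_\text{des} \| \leq \epsilon$ at the moment $x_\text{des}$ is updated, and $x_\text{des} = x_\text{goal}$ whenever the goal is already within the tolerated error.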

    Choosing the desired state in the case of partial state feedback

    Suppose $x_\text{goal}$ is the state you actually want to achieve. Suppose $\epsilon > 0$ is an upper bound on the state error that you are willing to tolerate. Then, choose $x_\text{des}$ as in the sketch below.
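
    Again as a sketch rather than the original formula, the same rule can be applied with the state estimate $\hat{x}$ in place of the state $x$, since $x$ itself is not directly available in the case of partial state feedback:

    $$x_\text{des} = \hat{x} + \min\left( 1, \; \frac{\epsilon}{\left\| x_\text{goal} - \hat{x} \right\|} \right) \left( x_\text{goal} - \hat{x} \right).$$

    A small helper like the following (hypothetical, for illustration) implements this rule for either case:

        import numpy as np

        def choose_x_des(x_or_xhat, x_goal, eps):
            """Step from the current state (or state estimate) toward x_goal
            by at most eps, so that the state error stays small."""
            step = x_goal - x_or_xhat
            distance = np.linalg.norm(step)
            if distance <= eps:
                return x_goal.copy()
            return x_or_xhat + (eps / distance) * step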