
    Eigenvalue placement

    Overview

    Apply the input

    $$u = -Kx$$

    to the open-loop system

    $$\dot{x} = Ax + Bu$$

    and you get the closed-loop system

    $$\dot{x} = (A - BK)x.$$

    Suppose we want to choose $K$ to put the eigenvalues of the closed-loop system, i.e., the eigenvalues of the matrix $A - BK$, at given locations. We will derive a formula called Ackermann’s method that allows us to do this when possible, and will show how to decide when doing so is impossible.
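    For example, here is a quick numerical check of this, with matrices chosen only for illustration:

    ```python
    import numpy as np

    # Matrices chosen only for illustration.
    A = np.array([[0., 1.],
                  [2., 3.]])
    B = np.array([[0.],
                  [1.]])
    K = np.array([[8., 7.]])   # state feedback gains, u = -K x

    # Eigenvalues of the closed-loop system x_dot = (A - B K) x.
    print(np.linalg.eigvals(A - B @ K))
    ```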

    Heads up!

    This entire discussion will be based on the assumption that there is exactly one input (i.e., that $u$ is a vector of length $1$, or that $m = 1$, where $m$ is the number of inputs). Ackermann’s method, in particular, cannot be used for eigenvalue placement when there is more than one input.

    However, other similar methods — for example, the method of Tits and Yang (“Globally convergent algorithms for robust pole assignment by state feedback,” IEEE Transactions on Automatic Control, 41:1432-1452, 1996), as implemented by scipy.signal.place_poles in python — can be used when there are multiple inputs. Our result about when eigenvalue placement is possible — i.e., about when a system is “controllable” — also generalizes to systems with multiple inputs, although it becomes harder to prove.
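    As a quick illustration, here is a minimal sketch of how scipy.signal.place_poles could be called for a system with three states and two inputs (the matrices and eigenvalue locations are made up):

    ```python
    import numpy as np
    from scipy.signal import place_poles

    # A system with n = 3 states and m = 2 inputs (chosen only for illustration).
    A = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [1., 2., 3.]])
    B = np.array([[0., 0.],
                  [1., 0.],
                  [0., 1.]])

    # Desired closed-loop eigenvalue locations.
    desired_eigenvalues = [-1., -2., -3.]

    result = place_poles(A, B, desired_eigenvalues)
    K = result.gain_matrix

    # Check: the eigenvalues of A - B K should be (close to) the desired locations.
    print(np.linalg.eigvals(A - B @ K))
    ```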

    Eigenvalues are invariant to coordinate transformation

    Consider the system

    $$\dot{x} = Ax + Bu.$$

    Suppose we define a new state variable $z$ so that

    $$x = Vz$$

    for some invertible matrix $V$, and so

    $$\dot{x} = V\dot{z}$$

    by differentiation. We have called this process “coordinate transformation” — it is exactly the same process we used for diagonalization when establishing our result about asymptotic stability. Plug these two things into our original state-space model and we get

    $$V\dot{z} = AVz + Bu.$$

    Solve for $\dot{z}$ and we get the equivalent state-space model

    $$\dot{z} = V^{-1}AVz + V^{-1}Bu.$$

    Finding a solution $z(t)$ to this transformed system allows us to recover a solution

    $$x(t) = Vz(t)$$

    to the original system. We would like to know if these two solutions “behave” the same way. In particular, we would like to know if the eigenvalues of $A$ are the same as the eigenvalues of $V^{-1}AV$.

    First, let’s look at the eigenvalues of $A$. We know that they are the roots of

    $$\det(sI - A) = 0.$$

    Second, let’s look at the eigenvalues of $V^{-1}AV$. We know that they are the roots of

    $$\det\left(sI - V^{-1}AV\right) = 0.$$

    We can play a trick. Notice that

    $$sI = sV^{-1}V = V^{-1}(sI)V$$

    and so

    $$\det\left(sI - V^{-1}AV\right) = \det\left(V^{-1}(sI)V - V^{-1}AV\right) = \det\left(V^{-1}(sI - A)V\right).$$

    It is a fact that

    $$\det(MN) = \det(M)\det(N)$$

    for any square matrices $M$ and $N$. Applying this fact, we find

    $$\det\left(V^{-1}(sI - A)V\right) = \det\left(V^{-1}\right)\det(sI - A)\det(V).$$

    It is another fact that

    $$\det\left(V^{-1}\right) = \frac{1}{\det(V)}.$$

    Applying this other fact, we find

    $$\det\left(V^{-1}\right)\det(sI - A)\det(V) = \frac{\det(sI - A)\det(V)}{\det(V)} = \det(sI - A).$$

    In summary, we have established that

    $$\det\left(sI - V^{-1}AV\right) = \det(sI - A)$$

    and so the eigenvalues of $A$ and $V^{-1}AV$ are the same. The consequence is, if you design state feedback for the transformed system, you’ll recover the behavior you want for the original system. In particular, suppose you apply the input

    $$u = -Lz$$

    to the transformed system and choose $L$ to place the eigenvalues of $V^{-1}AV - V^{-1}BL$ in given locations. Applying the input

    $$u = -Kx$$

    to the original system, i.e., choosing

    $$K = LV^{-1},$$

    will result in placing the eigenvalues of $A - BK$ at these same locations. The reason this is important is that it is often easier to choose $L$ than to choose $K$. The process of diagonalization was important for a similar reason.
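    Here is a small numerical sanity check of this invariance (the matrices are random and just for illustration):

    ```python
    import numpy as np

    # Random matrices, chosen only to illustrate that a coordinate transformation
    # does not change eigenvalues.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))
    V = rng.standard_normal((3, 3))   # a random matrix is (almost surely) invertible

    A_transformed = np.linalg.inv(V) @ A @ V

    # The two sets of eigenvalues agree up to round-off error and ordering.
    print(np.sort_complex(np.linalg.eigvals(A)))
    print(np.sort_complex(np.linalg.eigvals(A_transformed)))
    ```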

    Controllable canonical form

    In the previous section, we showed that eigenvalues are invariant to coordinate transformation. The next question is what coordinates are useful for control design. The answer to that question turns out to be something called controllable canonical form.

    A system with $n$ states $x$ and $1$ input $u$ is in controllable canonical form if it looks like

    $$\dot{x} = A_\text{ccf}x + B_\text{ccf}u$$

    where

    $$A_\text{ccf} = \begin{bmatrix} -a_1 & -a_2 & \cdots & -a_{n-1} & -a_n \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \qquad\qquad B_\text{ccf} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$

    Notice that

    • $A_\text{ccf}$ is a matrix of size $n \times n$,
    • $B_\text{ccf}$ is a matrix of size $n \times 1$,
    • $x$ is a vector of length $n$, and
    • $u$ is a vector of length $1$.

    It is a fact that the characteristic equation of this system is given by

    $$\det\left(sI - A_\text{ccf}\right) = s^n + a_1s^{n-1} + \cdots + a_{n-1}s + a_n = 0.$$

    It is easy to see that this formula is true for $n = 2$ and $n = 3$. In particular:

    • If $n = 2$, then: $\det\left(sI - A_\text{ccf}\right) = \det\begin{bmatrix} s + a_1 & a_2 \\ -1 & s \end{bmatrix} = s(s + a_1) + a_2 = s^2 + a_1s + a_2$.
    • If $n = 3$, then: $\det\left(sI - A_\text{ccf}\right) = \det\begin{bmatrix} s + a_1 & a_2 & a_3 \\ -1 & s & 0 \\ 0 & -1 & s \end{bmatrix} = s^3 + a_1s^2 + a_2s + a_3$.

    There are a variety of ways to prove that this same formula is true in general. Applying cofactor expansion along the first row to compute the matrix determinant, for example, we would find:

    $$\det\left(sI - A_\text{ccf}\right) = (s + a_1)\det(M_1) - a_2\det(M_2) + a_3\det(M_3) - \cdots$$

    where each matrix $M_j$ (the minor obtained by deleting the first row and the $j$th column of $sI - A_\text{ccf}$) is block diagonal with triangular blocks, having $-1$ in $j - 1$ of its diagonal entries and $s$ in the remaining $n - j$ diagonal entries. Since the determinant of such a matrix is the product of its diagonal entries, we have

    $$\det(M_j) = (-1)^{j-1}s^{n-j}.$$

    Plug this in (the signs from the cofactor expansion cancel the signs from each $\det(M_j)$), and our result follows. Now, the reason that controllable canonical form is useful is that if we choose the input

    $$u = -K_\text{ccf}x$$

    for some choice of gains

    $$K_\text{ccf} = \begin{bmatrix} k_1 & k_2 & \cdots & k_n \end{bmatrix}$$

    then the “$A$ matrix” of the closed-loop system is

    $$A_\text{ccf} - B_\text{ccf}K_\text{ccf} = \begin{bmatrix} -(a_1 + k_1) & -(a_2 + k_2) & \cdots & -(a_{n-1} + k_{n-1}) & -(a_n + k_n) \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix}.$$

    The characteristic equation of this closed-loop system, computed in the same way as for $A_\text{ccf}$, is

    $$s^n + (a_1 + k_1)s^{n-1} + \cdots + (a_{n-1} + k_{n-1})s + (a_n + k_n) = 0.$$

    If you want this characteristic equation to look like

    $$s^n + r_1s^{n-1} + \cdots + r_{n-1}s + r_n = 0$$

    then it’s obvious what gains you should choose:

    $$k_1 = r_1 - a_1 \qquad k_2 = r_2 - a_2 \qquad \cdots \qquad k_n = r_n - a_n.$$

    So, if you have a system in controllable canonical form, then it is easy to choose gains that make the characteristic equation of the closed-loop system look like anything you want (i.e., to put the closed-loop eigenvalues anywhere you want). In other words, it is easy to do control design.
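    To make this concrete, here is a sketch that builds $A_\text{ccf}$ and $B_\text{ccf}$ from a made-up set of coefficients, checks the characteristic polynomial numerically, and chooses the gains $k_i = r_i - a_i$ (the helper ccf_matrices and all the numbers are ours, for illustration only):

    ```python
    import numpy as np

    def ccf_matrices(a):
        """Build (A_ccf, B_ccf) from characteristic polynomial coefficients a_1, ..., a_n."""
        n = len(a)
        A_ccf = np.zeros((n, n))
        A_ccf[0, :] = -np.asarray(a)          # top row holds -a_1, ..., -a_n
        A_ccf[1:, :-1] = np.eye(n - 1)        # ones on the subdiagonal
        B_ccf = np.zeros((n, 1))
        B_ccf[0, 0] = 1.
        return A_ccf, B_ccf

    # Coefficients of s^3 + a_1 s^2 + a_2 s + a_3 (made up for illustration).
    a = np.array([6., 11., 6.])
    A_ccf, B_ccf = ccf_matrices(a)

    # Check the characteristic polynomial: np.poly returns [1, a_1, ..., a_n].
    print(np.poly(A_ccf))                     # -> approximately [1, 6, 11, 6]

    # Desired characteristic polynomial s^3 + r_1 s^2 + r_2 s + r_3 with roots -2, -3, -4.
    r = np.poly([-2., -3., -4.])[1:]
    K_ccf = (r - a).reshape(1, -1)            # k_i = r_i - a_i

    print(np.linalg.eigvals(A_ccf - B_ccf @ K_ccf))   # -> eigenvalues -2, -3, -4
    ```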

    Putting a system in controllable canonical form

    We have seen that controllable canonical form is useful. Now we’ll see how to put a system in this form. Suppose we have a system

    $$\dot{x} = Ax + Bu$$

    and we want to choose an invertible matrix $V$ so that if we define a new state variable by

    $$x_\text{ccf} = V^{-1}x$$

    then we can rewrite the system as

    $$\dot{x}_\text{ccf} = A_\text{ccf}x_\text{ccf} + B_\text{ccf}u$$

    where

    $$A_\text{ccf} = V^{-1}AV \qquad\qquad B_\text{ccf} = V^{-1}B$$

    are in controllable canonical form. The trick is to look at the so-called controllability matrix that is associated with the transformed system:

    $$W_\text{ccf} = \begin{bmatrix} B_\text{ccf} & A_\text{ccf}B_\text{ccf} & A_\text{ccf}^2B_\text{ccf} & \cdots & A_\text{ccf}^{n-1}B_\text{ccf} \end{bmatrix}.$$

    We will talk more later about the controllability matrix — for now, notice that

    $$\begin{aligned} B_\text{ccf} &= V^{-1}B \\ A_\text{ccf}B_\text{ccf} &= \left(V^{-1}AV\right)\left(V^{-1}B\right) = V^{-1}AB \\ A_\text{ccf}^2B_\text{ccf} &= \left(V^{-1}AV\right)\left(V^{-1}AV\right)\left(V^{-1}B\right) = V^{-1}A^2B. \end{aligned}$$

    You see the pattern here, I’m sure. The result is:

    $$W_\text{ccf} = \begin{bmatrix} V^{-1}B & V^{-1}AB & \cdots & V^{-1}A^{n-1}B \end{bmatrix} = V^{-1}\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} = V^{-1}W$$

    where

    $$W = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}$$

    is the controllability matrix associated with the original system.

    There are three things to note:

    • $A$ and $B$ are things that you know — you have a description of the original system, as always — so you can compute $W$.

    • $A_\text{ccf}$ and $B_\text{ccf}$ are also things that you know — the values in the top row of $A_\text{ccf}$ are, up to a sign, the coefficients of the characteristic polynomial of the matrix $A$ — so you can compute $W_\text{ccf}$.

    • $W$ is a square matrix — it has columns $B$, $AB$, and so forth, all of which have size $n \times 1$. So, if $W$ has non-zero determinant, then you can find its inverse.

    As a consequence, you can solve for the matrix $V$:

    $$V = WW_\text{ccf}^{-1} \qquad\text{or, equivalently,}\qquad V^{-1} = W_\text{ccf}W^{-1}.$$

    Now, suppose you design a control policy for the transformed system:

    $$u = -K_\text{ccf}x_\text{ccf}.$$

    Remember, you can do this easily, because the transformed system is in controllable canonical form. We can compute the equivalent control policy that would be applied to the original system:

    $$u = -K_\text{ccf}x_\text{ccf} = -K_\text{ccf}V^{-1}x.$$

    In particular, if we choose

    $$K = K_\text{ccf}V^{-1}$$

    then we get the behavior that we want. Again, we emphasize that this only works if $W$ is invertible, and that $W$ is only invertible if $\det(W) \neq 0$.
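    Here is a sketch of this construction for a made-up single-input system: it computes $W$ and $W_\text{ccf}$, forms $V = WW_\text{ccf}^{-1}$, and checks that $V^{-1}AV$ and $V^{-1}B$ really are in controllable canonical form (all matrices are chosen only for illustration):

    ```python
    import numpy as np

    # A made-up single-input system (n = 3 states).
    A = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [2., -3., 4.]])
    B = np.array([[0.],
                  [1.],
                  [1.]])
    n = A.shape[0]

    # Coefficients a_1, ..., a_n of det(sI - A) = s^n + a_1 s^(n-1) + ... + a_n.
    a = np.poly(A)[1:]

    # Controllable canonical form built from those coefficients.
    A_ccf = np.zeros((n, n))
    A_ccf[0, :] = -a
    A_ccf[1:, :-1] = np.eye(n - 1)
    B_ccf = np.zeros((n, 1))
    B_ccf[0, 0] = 1.

    # Controllability matrices W and W_ccf.
    W = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    W_ccf = np.hstack([np.linalg.matrix_power(A_ccf, i) @ B_ccf for i in range(n)])

    # Coordinate transformation V (this requires W to be invertible).
    V = W @ np.linalg.inv(W_ccf)

    # These should reproduce A_ccf and B_ccf (up to round-off error).
    print(np.linalg.inv(V) @ A @ V)
    print(np.linalg.inv(V) @ B)
    ```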

    A systematic process for control design

    Apply the input

    $$u = -Kx$$

    to the open-loop system

    $$\dot{x} = Ax + Bu$$

    and you get the closed-loop system

    $$\dot{x} = (A - BK)x.$$

    Suppose we want to choose $K$ to put the eigenvalues of the closed-loop system at

    $$p_1, \dots, p_n.$$

    Using the results of the previous sections, we know we can do this as follows:

    • Compute the characteristic equation that we want:

      $$(s - p_1)(s - p_2)\cdots(s - p_n) = s^n + r_1s^{n-1} + \cdots + r_{n-1}s + r_n = 0$$

    • Compute the characteristic equation that we have:

      $$\det(sI - A) = s^n + a_1s^{n-1} + \cdots + a_{n-1}s + a_n = 0$$

    • Compute the controllability matrix of the original system (and check that $\det(W) \neq 0$):

      $$W = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}$$

    • Compute the controllability matrix of the transformed system:

      $$W_\text{ccf} = \begin{bmatrix} B_\text{ccf} & A_\text{ccf}B_\text{ccf} & \cdots & A_\text{ccf}^{n-1}B_\text{ccf} \end{bmatrix}$$

      where

      $$A_\text{ccf} = \begin{bmatrix} -a_1 & -a_2 & \cdots & -a_{n-1} & -a_n \\ 1 & 0 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix} \qquad\qquad B_\text{ccf} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

    • Compute the gains for the transformed system:

      $$K_\text{ccf} = \begin{bmatrix} r_1 - a_1 & r_2 - a_2 & \cdots & r_n - a_n \end{bmatrix}$$

    • Compute the gains for the original system:

      $$K = K_\text{ccf}W_\text{ccf}W^{-1}$$

    And we’re done! This process is easy to implement, without any symbolic computation. Remember, although this method only works for systems with exactly one input (i.e., when $m = 1$), similar methods work for systems with multiple inputs — in python, use scipy.signal.place_poles.
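    Here is one way this process might be implemented in python; the function name eigenvalue_placement and the example system are ours, chosen only to illustrate the steps above:

    ```python
    import numpy as np

    def eigenvalue_placement(A, B, desired_eigenvalues):
        """Gain K such that eig(A - B K) = desired_eigenvalues, for a single-input system."""
        A = np.atleast_2d(A)
        B = np.reshape(B, (A.shape[0], 1))
        n = A.shape[0]

        # Characteristic equation we want: s^n + r_1 s^(n-1) + ... + r_n = 0.
        r = np.poly(desired_eigenvalues)[1:]

        # Characteristic equation we have: s^n + a_1 s^(n-1) + ... + a_n = 0.
        a = np.poly(A)[1:]

        # Controllable canonical form built from the coefficients a_1, ..., a_n.
        A_ccf = np.zeros((n, n))
        A_ccf[0, :] = -a
        A_ccf[1:, :-1] = np.eye(n - 1)
        B_ccf = np.zeros((n, 1))
        B_ccf[0, 0] = 1.

        # Controllability matrices of the original and transformed systems.
        W = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
        W_ccf = np.hstack([np.linalg.matrix_power(A_ccf, i) @ B_ccf for i in range(n)])
        if np.isclose(np.linalg.det(W), 0.):
            raise ValueError('the system is not controllable')

        # Gains for the transformed system, then for the original system.
        K_ccf = (r - a).reshape(1, n)
        return K_ccf @ W_ccf @ np.linalg.inv(W)

    # Example (system and eigenvalue locations made up for illustration).
    A = np.array([[0., 1.], [2., 3.]])
    B = np.array([[0.], [1.]])
    K = eigenvalue_placement(A, B, [-1., -2.])
    print(K)
    print(np.linalg.eigvals(A - B @ K))   # -> approximately -1 and -2
    ```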

    How to decide when eigenvalue placement is possible

    We say that a system is controllable if eigenvalue placement is possible. We have seen that eigenvalue placement with Ackermann’s method (for the special case when $m = 1$) is only possible when the controllability matrix $W$ is invertible. Here is a generalization of that same result to any system:

    Controllability

    The system

    $$\dot{x} = Ax + Bu$$

    is controllable if and only if the controllability matrix

    $$W = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}$$

    is full rank, where $n$ is the number of states.

    Let’s break this statement down.

    First, suppose there is only one input, so $m = 1$. In that case, $B$ has size $n \times 1$ and each block $A^kB$ has size $n \times 1$. Therefore, $W$ is a square matrix of size $n \times n$, so being full rank simply means that it is invertible (i.e., that $\det(W) \neq 0$).

    Now, suppose there is more than one input, so $m > 1$. Then, $B$ has size $n \times m$. The matrix $W$ then has size $n \times nm$, and is no longer square. Although we can’t invert a non-square matrix, we can still find its rank. In particular, it is easy to find the rank of a non-square matrix in python using numpy.linalg.matrix_rank. We say that $W$ is full rank if and only if its rank is $n$.

    If $W$ is full rank, the system is controllable, and eigenvalue placement will work. If $W$ is not full rank, the system is not controllable, and eigenvalue placement will not work.
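    For example, here is how the rank test might be carried out in code (the system is made up; with two inputs, $W$ here is $2 \times 4$):

    ```python
    import numpy as np

    # A system with n = 2 states and m = 2 inputs (made up for illustration).
    A = np.array([[0., 1.],
                  [0., 0.]])
    B = np.array([[0., 1.],
                  [1., 0.]])
    n = A.shape[0]

    # Controllability matrix W = [B, AB, ..., A^(n-1) B], of size n x (n m).
    W = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

    # The system is controllable if and only if W is full rank (rank n).
    print(W.shape)                          # -> (2, 4)
    print(np.linalg.matrix_rank(W) == n)    # -> True
    ```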

    We can actually say a little more than this. It turns out that if the controllability matrix $W$ is not full rank, then although we cannot place all of the system’s eigenvalues, we can place some of these eigenvalues, while the others will remain in the same location no matter what gain matrix we choose. The eigenvalues we can place are often called “controllable eigenvalues” or “controllable modes,” while the eigenvalues we cannot place are often called “uncontrollable eigenvalues” or “uncontrollable modes.” If the rank of $W$ is $n - 1$, then we can place $n - 1$ eigenvalues, and $1$ eigenvalue is uncontrollable. If the rank of $W$ is $n - 2$, then we can place $n - 2$ eigenvalues, and $2$ eigenvalues are uncontrollable. And so forth.
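    As a small illustration (with a system made up for the purpose), here the input never affects the second state, so the eigenvalue at $2$ is an uncontrollable mode that stays put no matter what gains we choose:

    ```python
    import numpy as np

    # A made-up system with an uncontrollable mode: the input never reaches
    # the second state, so its eigenvalue (2) cannot be moved by feedback.
    A = np.array([[1., 0.],
                  [0., 2.]])
    B = np.array([[1.],
                  [0.]])
    n = A.shape[0]

    W = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    print(np.linalg.matrix_rank(W))          # -> 1, not full rank

    # Try any gains you like: the eigenvalue at 2 never moves.
    for K in (np.array([[3., 0.]]), np.array([[10., -5.]])):
        print(np.linalg.eigvals(A - B @ K))
    ```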