
(Non-)Holonomic systems (constraints)

The Frobenius Theorem

posted Nov 10, 2014, 9:47 PM by Javad Taghia   [ updated Nov 10, 2014, 9:47 PM ]

The Lie bracket is the only tool needed to determine whether a system is completely integrable (holonomic) or nonholonomic (not integrable). Suppose that a system of the form (15.53) is given. Using the $ m$ system vector fields $ h_1, \ldots, h_m$, there are $ \binom{m}{2}$ Lie brackets of the form $ [h_i,h_j]$ for $ i < j$ that can be formed. A distribution $ {\triangle }$ is called involutive [133] if for each of these brackets there exist $ m$ coefficients $ c_k \in {\mathbb{R}}$ such that

$\displaystyle [h_i,h_j] = \sum_{k = 1}^m c_k h_k .$(15.86)

In other words, every Lie bracket can be expressed as a linear combination of the system vector fields, and therefore it already belongs to $ {\triangle }$. The Lie brackets are unable to escape $ {\triangle }$ and generate new directions of motion. We did not need to consider all $ m^2$ possible Lie brackets of the system vector fields because it turns out that $ [h_i,h_j]=-[h_j,h_i]$ and consequently $ [h_i,h_i] = 0$. Therefore, the definition of involutive is not altered by looking only at the $ \binom{m}{2}$ pairs with $ i < j$.

If the system is smooth and the distribution is nonsingular, then the Frobenius theorem immediately characterizes integrability:

A system is completely integrable if and only if it is involutive.
Proofs of the Frobenius theorem appear in numerous differential geometry and control theory books [133,156,478,846]. There also exist versions that do not require the distribution to be nonsingular.

Determining integrability involves performing Lie brackets and determining whether (15.86) is satisfied. The search for the coefficients can fortunately be avoided by using linear algebra tests for linear independence. The $ n \times m$ matrix $ H(x)$, which was defined in (15.56), can be augmented into an $ n \times (m+1)$ matrix $ H'(x)$ by adding $ [h_i,h_j]$ as a new column. If the rank of $ H'(x)$ is $ m+1$ for some pair $ h_i$ and $ h_j$, then it is immediately known that the system is nonholonomic. If the rank of $ H'(x)$ is $ m$ for all Lie brackets, then the system is completely integrable. Driftless linear systems, which are expressed as $ {\dot x}= B u$ for a fixed matrix $ B$, are completely integrable because all Lie brackets are zero.

Example 15.11 (The Differential Drive Is Nonholonomic)   For the differential drive model in (15.54), the Lie bracket $ [f,g]$ was determined in Example 15.9 to be $ [ \sin\theta \;\; -\cos\theta \;\; 0]^T$. The matrix $ H'(q)$, in which $ q = (x,y,\theta)$, is
$\displaystyle H'(q) = \begin{pmatrix} \cos \theta & 0 & \sin\theta \\ \sin \theta & 0 & -\cos\theta \\ 0 & 1 & 0 \end{pmatrix} .$(15.87)

The rank of $ H'(q)$ is $ 3$ for all $ q \in {\cal C}$ (the determinant of $ H'(q)$ is $ 1$). Therefore, by the Frobenius theorem, the system is nonholonomic. $ \blacksquare$ 
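As a quick check of this example, the determinant of (15.87) can be computed symbolically. The sketch below assumes the SymPy library is available; it is not part of the original text.

```python
import sympy as sp

# H'(q) from (15.87): columns are f, g, and [f,g] for the differential drive.
theta = sp.symbols('theta')
H_prime = sp.Matrix([
    [sp.cos(theta), 0,  sp.sin(theta)],
    [sp.sin(theta), 0, -sp.cos(theta)],
    [0,             1,  0],
])
# Determinant simplifies to 1, so rank H' = 3 for every theta: nonholonomic.
print(sp.simplify(H_prime.det()))
```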
Example 15.12 (The Nonholonomic Integrator Is Nonholonomic)   We would hope that the nonholonomic integrator is nonholonomic. In Example 15.10, the Lie bracket was determined to be $ [ 0 \;\; 0 \;\; 2]^T$. The matrix $ H'(q)$ is
$\displaystyle H'(q) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -x_2 & x_1 & 2 \end{pmatrix} ,$(15.88)

which clearly has full rank for all $ q \in {\cal C}$. $ \blacksquare$ 
Example 15.13 (Trapped on a Sphere)   Suppose that the following system is given:
$\displaystyle \begin{pmatrix} {\dot x}_1 \\ {\dot x}_2 \\ {\dot x}_3 \end{pmatrix} = \begin{pmatrix} x_2 \\ -x_1 \\ 0 \end{pmatrix} u_1 + \begin{pmatrix} x_3 \\ 0 \\ -x_1 \end{pmatrix} u_2,$(15.89)

for which $ X = {\mathbb{R}}^3$ and $ U = {\mathbb{R}}^2$. Since the vector fields are linear, the Jacobians are constant (as in Example 15.10):
$\displaystyle \frac{\partial f}{\partial x} = \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \mbox{\;\;\; and \;\;\;} \frac{\partial g}{\partial x} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix} .$(15.90)

Using (15.80),
$\displaystyle \frac{\partial g}{\partial x} f - \frac{\partial f}{\partial x} g = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_2 \\ -x_1 \\ 0 \end{pmatrix} - \begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} x_3 \\ 0 \\ -x_1 \end{pmatrix} = \begin{pmatrix} 0 \\ x_3 \\ -x_2 \end{pmatrix} .$(15.91)

This yields the matrix
$\displaystyle H'(x) = \begin{pmatrix} x_2 & -x_1 & 0 \\ x_3 & 0 & -x_1 \\ 0 & x_3 & -x_2 \end{pmatrix} .$(15.92)

The determinant is zero for all $ x \in {\mathbb{R}}^3$, which means that $ [f,g]$ is never linearly independent of $ f$ and $ g$. Therefore, the system is completely integrable.

The system can actually be constructed by differentiating the equation of a sphere. Let

$\displaystyle f(x) = x_1^2 + x_2^2 + x_3^2 - r^2 = 0 ,$(15.93)

and differentiate with respect to time to obtain
$\displaystyle x_1 {\dot x}_1 + x_2 {\dot x}_2 + x_3 {\dot x}_3 = 0,$(15.94)

which is a Pfaffian constraint. A parametric representation of the set of vectors that satisfy (15.94) is given by (15.89). For each $ (u_1,u_2) \in {\mathbb{R}}^2$, (15.89) yields a vector that satisfies (15.94). Thus, this was an example of being trapped on a sphere, which we would expect to be completely integrable. It was difficult, however, to suspect this using only (15.89). $ \blacksquare$ 
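The claims in this example can be checked symbolically. The following sketch (assuming SymPy is available; not part of the original text) verifies that both fields of (15.89) satisfy the Pfaffian constraint (15.94), and that the Lie bracket adds no direction outside the distribution:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([x2, -x1, 0])    # first field of (15.89)
g = sp.Matrix([x3, 0, -x1])    # second field of (15.89)

# x . f = 0 and x . g = 0, so motions stay tangent to the sphere (15.93).
print(sp.expand((x.T * f)[0]), sp.expand((x.T * g)[0]))  # 0 0

# Lie bracket via (15.80), then the augmented matrix as in (15.92).
bracket = g.jacobian(x) * f - f.jacobian(x) * g
print(bracket.T)                           # Matrix([[0, x3, -x2]])
H_prime = sp.Matrix.hstack(f, g, bracket)
print(sp.simplify(H_prime.det()))          # 0: singular everywhere
```

The zero determinant for all $ x$ is exactly the integrability conclusion reached above.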

Steven M LaValle 2012-04-20

Example Unicycle

posted Nov 10, 2014, 6:21 PM by Javad Taghia   [ updated Nov 10, 2014, 6:24 PM ]

$\displaystyle [f,g]_i = \sum_{j=1}^{n} \left( f_j \frac{\partial g_i}{\partial x_j} - g_j \frac{\partial f_i}{\partial x_j} \right) .$
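This component formula can be implemented directly. Below is a sketch in Python with SymPy (an assumption, not part of the original text), applied to the unicycle, i.e., the differential-drive fields of (15.54):

```python
import sympy as sp

def lie_bracket(f, g, xs):
    """[f,g]_i = sum_j (f_j * dg_i/dx_j - g_j * df_i/dx_j)."""
    n = len(xs)
    return sp.Matrix([
        sum(f[j] * sp.diff(g[i], xs[j]) - g[j] * sp.diff(f[i], xs[j])
            for j in range(n))
        for i in range(n)
    ])

# Unicycle / differential-drive fields from (15.54).
x, y, theta = sp.symbols('x y theta')
f = sp.Matrix([sp.cos(theta), sp.sin(theta), 0])
g = sp.Matrix([0, 0, 1])
print(lie_bracket(f, g, [x, y, theta]).T)  # Matrix([[sin(theta), -cos(theta), 0]])
```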

Lie Brackets

posted Nov 10, 2014, 6:06 PM by Javad Taghia   [ updated Nov 10, 2014, 6:11 PM ]

The key to establishing whether a system is nonholonomic is to construct motions that combine the effects of two action variables, which may produce motions in a direction that seems impossible from the system distribution. To motivate the coming ideas, consider the differential-drive model from (15.54). Apply the following piecewise-constant action trajectory over the interval $ [0,4 \Delta t]$:

$\displaystyle u(t) = \left\{ \begin{array}{ll} (1,0) & \mbox{ for $t \in [0,\Delta t]$ } \\ (0,1) & \mbox{ for $t \in [\Delta t, 2 \Delta t]$ } \\ (-1,0) & \mbox{ for $t \in [2 \Delta t, 3 \Delta t]$ } \\ (0,-1) & \mbox{ for $t \in [3 \Delta t,4 \Delta t]$ } . \end{array}\right.$(15.71)

The action trajectory is a sequence of four motion primitives: 1) translate forward, 2) rotate counterclockwise, 3) translate backward, and 4) rotate clockwise.
Figure 15.16: (a) The effect of the first two primitives. (b) The effect of the last two primitives.

The result of all four motion primitives in succession from $ {q_{I}}= (0,0,0)$ is shown in Figure 15.16. It is fun to try this at home with an axle and two wheels (Tinkertoys work well, for example). The result is that the differential drive moves sideways! From the transition equation (15.54) such motions appear impossible. This is a beautiful property of nonlinear systems. The state may wiggle its way in directions that do not seem possible. A more familiar example is parallel parking a car. It is known that a car cannot directly move sideways; however, some wiggling motions can be performed to move it sideways into a tight parking space. The actions we perform while parking resemble the primitives in (15.71).

Algebraically, the motions of (15.71) appear to be checking for commutativity. Recall from Section 4.2.1 that a group $ G$ is called commutative (or Abelian) if $ ab = ba$ for any $ a, b \in G$. A commutator is a group element of the form $ aba^{-1}b^{-1}$. If the group is commutative, then $ aba^{-1}b^{-1} = e$ (the identity element) for any $ a, b \in G$. If a nonidentity element of $ G$ is produced by the commutator, then the group is not commutative. Similarly, if the trajectory arising from (15.71) does not form a loop (by returning to the starting point), then the motion primitives do not commute. Therefore, a sequence of motion primitives in (15.71) will be referred to as the commutator motion. It will turn out that if the commutator motion cannot produce any velocities not allowed by the system distribution, then the system is completely integrable. This means that if we are trapped on a surface, then it is impossible to leave the surface by using commutator motions.
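The sideways wiggle can also be seen numerically. The sketch below (plain Python; the helper names step and commutator_motion are illustrative, not from the text) integrates the four primitives of (15.71) under the differential-drive model (15.54) and produces a net displacement of roughly $\Delta t^2$ in the $-y$ direction:

```python
import math

# Differential-drive kinematics from (15.54):
# xdot = u1*cos(theta), ydot = u1*sin(theta), thetadot = u2.
def step(q, u, dt):
    x, y, th = q
    u1, u2 = u
    return (x + dt * u1 * math.cos(th),
            y + dt * u1 * math.sin(th),
            th + dt * u2)

def commutator_motion(q, Dt, substeps=1000):
    """Apply the four primitives of (15.71), each for duration Dt."""
    dt = Dt / substeps
    for u in [(1, 0), (0, 1), (-1, 0), (0, -1)]:
        for _ in range(substeps):
            q = step(q, u, dt)
    return q

q = commutator_motion((0.0, 0.0, 0.0), Dt=0.1)
print(q)  # roughly (0.0005, -0.00998, 0.0): net y is about -Dt^2
```

The residual $ x$ displacement is of order $ (\Delta t)^3$, while the $ y$ displacement matches $ (\Delta t)^2 [f,g]$ evaluated at the origin, consistent with the limit derived below in (15.80).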

Now generalize the differential drive to any driftless control-affine system that has two action variables:

$\displaystyle {\dot x}= f(x) u_1 +g(x) u_2 .$(15.72)

Using the notation of (15.53), the vector fields would be $ h_1$ and $ h_2$; however, $ f$ and $ g$ are chosen here to allow subscripts to denote the components of the vector field in the coming explanation.
Figure 15.17: The velocity obtained by the Lie bracket can be approximated by a sequence of four motion primitives.

Suppose that the commutator motion (15.71) is applied to (15.72) as shown in Figure 15.17. Determining the resulting motion requires some general computations, as opposed to the simple geometric arguments that could be made for the differential drive. It would be convenient to have an expression for the velocity obtained in the limit as $ \Delta t$ approaches zero. This can be obtained by using Taylor series arguments. These are simplified by the fact that the action history is piecewise constant.

The coming derivation will require an expression for $ {\ddot x}$ under the application of a constant action. For each action, a vector field of the form $ {\dot x}= h(x)$ is obtained. Upon differentiation, this yields

$\displaystyle {\ddot x}= \frac{dh}{dt} = \frac{\partial h}{\partial x} \frac{dx}{dt} = \frac{\partial h}{\partial x} {\dot x} = \frac{\partial h}{\partial x} h(x).$(15.73)

This follows from the chain rule because $ h$ is a function of $ x$, which itself is a function of $ t$. The derivative $ \partial h/\partial x$ is actually an $ n \times n$ Jacobian matrix, which is multiplied by the vector $ {\dot x}$. To further clarify (15.73), each component can be expressed as
$\displaystyle {\ddot x}_i = \frac{d}{dt} h_i(x(t)) = \sum_{j=1}^n \frac{\partial h_i}{\partial x_j} h_j .$(15.74)
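The identity (15.73) can be sanity-checked symbolically. In the sketch below (assuming SymPy is available), the linear field $ h$ and its closed-form solution are illustrative choices, not from the text:

```python
import sympy as sp

t = sp.symbols('t')
x1, x2 = sp.symbols('x1 x2')
h = sp.Matrix([x2, -x1])  # an illustrative linear field (assumption)

# x(t) = (cos t, -sin t) solves xdot = h(x) with x(0) = (1, 0).
xt = sp.Matrix([sp.cos(t), -sp.sin(t)])
subs = {x1: xt[0], x2: xt[1]}

lhs = xt.diff(t, 2)                          # direct second time derivative
rhs = (h.jacobian([x1, x2]) * h).subs(subs)  # (dh/dx) h evaluated along x(t)
print((lhs - rhs).T)  # Matrix([[0, 0]])
```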

Now the state trajectory under the application of (15.71) will be determined using the Taylor series, which was given in (14.17). The state trajectory that results from the first motion primitive $ u = (1,0)$ can be expressed as

\begin{displaymath}\begin{split}x({\Delta t}) & = x(0) + {\Delta t}\; {\dot x}(0) + \frac{1}{2} ({\Delta t})^2 \; {\ddot x}(0) + \cdots \\ & = x(0) + {\Delta t}\; f(x(0)) + \frac{1}{2} ({\Delta t})^2 \; \frac{\partial f}{\partial x}\Big\vert_{x(0)} \; f(x(0)) + \cdots , \end{split}\end{displaymath}(15.75)

which makes use of (15.73) in the second line. The Taylor series expansion for the second primitive is
$\displaystyle x(2 {\Delta t}) = x({\Delta t}) + {\Delta t}\; g(x({\Delta t})) + \frac{1}{2} ({\Delta t})^2 \; \frac{\partial g}{\partial x} \Big\vert_{x({\Delta t})} \; g(x({\Delta t})) + \cdots .$(15.76)

An expression for $ g(x({\Delta t}))$ can be obtained by using the Taylor series expansion in (15.75) to express $ x(\Delta t)$. The first terms after substitution and simplification are
$\displaystyle x(2 {\Delta t}) = x(0) + {\Delta t}\; (f + g) + ({\Delta t})^2 \left( \frac{1}{2} \frac{\partial f}{\partial x} f + \frac{\partial g}{\partial x} f + \frac{1}{2} \frac{\partial g}{\partial x} g \right) + \cdots .$(15.77)

To simplify the expression, the evaluation at $ x(0)$ has been dropped from every occurrence of $ f$ and $ g$ and their derivatives.

The idea of substituting previous Taylor series expansions as they are needed can be repeated for the remaining two motion primitives. The Taylor series expansion for the result after the third primitive is

$\displaystyle x(3 {\Delta t}) = x(0) + {\Delta t}\; g + ({\Delta t})^2 \left( \frac{\partial g}{\partial x} f - \frac{\partial f}{\partial x} g + \frac{1}{2}\frac{\partial g}{\partial x} g \right) + \cdots .$(15.78)

Finally, the Taylor series expansion after all four primitives have been applied is
$\displaystyle x(4 {\Delta t}) = x(0) + ({\Delta t})^2 \left( \frac{\partial g}{\partial x} f - \frac{\partial f}{\partial x} g \right) + \cdots .$(15.79)

Taking the limit yields
$\displaystyle \lim_{\Delta t \rightarrow 0} \frac{x(4 {\Delta t}) - x(0)}{({\Delta t})^2} = \frac{\partial g}{\partial x} f - \frac{\partial f}{\partial x} g ,$(15.80)

which is called the Lie bracket of $ f$ and $ g$ and is denoted by $ [f,g]$. Similar to (15.74), the $ i$th component can be expressed as
$\displaystyle [f,g]_i = \sum_{j=1}^{n} \left( f_j \frac{\partial g_i}{\partial x_j} - g_j \frac{\partial f_i}{\partial x_j} \right) .$(15.81)

The Lie bracket is an important operation in many subjects, and is related to the Poisson and Jacobi brackets that arise in physics and mathematics.

Example 15.9 (Lie Bracket for the Differential Drive)   The Lie bracket should indicate that sideways motions are possible for the differential drive. Consider taking the Lie bracket of the two vector fields used in (15.54). Rename $ h_1$ and $ h_2$ to $ f$ and $ g$ to allow subscripts to denote the components of a vector field: let $ f = [ \cos\theta \;\; \sin\theta \;\; 0]^T$ and $ g = [0 \;\; 0 \;\; 1]^T$.

By applying (15.81), the Lie bracket $ [f,g]$ is

\begin{displaymath}\begin{split}[f,g]_1 & = f_1 \frac{\partial g_1}{\partial x} + f_2 \frac{\partial g_1}{\partial y} + f_3 \frac{\partial g_1}{\partial \theta} - g_1 \frac{\partial f_1}{\partial x} - g_2 \frac{\partial f_1}{\partial y} - g_3 \frac{\partial f_1}{\partial \theta} = \sin\theta \\ [f,g]_2 & = f_1 \frac{\partial g_2}{\partial x} + f_2 \frac{\partial g_2}{\partial y} + f_3 \frac{\partial g_2}{\partial \theta} - g_1 \frac{\partial f_2}{\partial x} - g_2 \frac{\partial f_2}{\partial y} - g_3 \frac{\partial f_2}{\partial \theta} = -\cos\theta \\ [f,g]_3 & = f_1 \frac{\partial g_3}{\partial x} + f_2 \frac{\partial g_3}{\partial y} + f_3 \frac{\partial g_3}{\partial \theta} - g_1 \frac{\partial f_3}{\partial x} - g_2 \frac{\partial f_3}{\partial y} - g_3 \frac{\partial f_3}{\partial \theta} = 0 . \end{split}\end{displaymath}(15.82)

The resulting vector field is $ [f,g] = [ \sin\theta \;\; -\cos\theta \;\; 0]^T$, which indicates the sideways motion, as desired. When evaluated at $ q = (0,0,0)$, the vector $ [0\;\;-1\;\;0]^T$ is obtained. This means that performing short commutator motions wiggles the differential drive sideways in the $ -y$ direction, which we already knew from Figure 15.16. $ \blacksquare$ 

Example 15.10 (Lie Bracket of Linear Vector Fields)   Suppose that each vector field is a linear function of $ x$. The $ n \times n$ Jacobians $ \partial f/\partial x$ and $ \partial g/\partial x$ are constant.

As a simple example, recall the nonholonomic integrator (13.43). In the linear-algebra form, the system is

$\displaystyle \begin{pmatrix} {\dot x}_1 \\ {\dot x}_2 \\ {\dot x}_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ -x_2 \end{pmatrix} u_1 + \begin{pmatrix} 0 \\ 1 \\ x_1 \end{pmatrix} u_2.$(15.83)

Let $ f = h_1$ and $ g = h_2$. The Jacobian matrices are
$\displaystyle \frac{\partial f}{\partial x} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & -1 & 0 \end{pmatrix} \mbox{\;\;\; and \;\;\;} \frac{\partial g}{\partial x} = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} .$(15.84)

Using (15.80),
$\displaystyle \frac{\partial g}{\partial x} f - \frac{\partial f}{\partial x} g = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \\ -x_2 \end{pmatrix} - \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & -1 & 0 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \\ x_1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 2 \end{pmatrix} .$(15.85)

This result can be verified using (15.81).

$ \blacksquare$ 
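As noted in the example, the result can be verified using (15.81); a minimal sketch of that verification (assuming SymPy is available, not part of the original text):

```python
import sympy as sp

# Verify (15.85) with the component formula (15.81).
x1, x2, x3 = sp.symbols('x1 x2 x3')
xs = [x1, x2, x3]
f = [1, 0, -x2]   # h1 from (15.83)
g = [0, 1, x1]    # h2 from (15.83)

bracket = [sum(f[j] * sp.diff(g[i], xs[j]) - g[j] * sp.diff(f[i], xs[j])
               for j in range(3))
           for i in range(3)]
print(bracket)  # [0, 0, 2]
```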

