An Introduction to Optimal Control
Lecture notes from the FLOW-NORDITA Summer School on Advanced Instability Methods for Complex Flows, Stockholm, Sweden, 2013

Author and Article Information
Carlo Cossu

Directeur de Recherche CNRS,
Institut de Mécanique des Fluides de Toulouse,
CNRS—Université de Toulouse,
Allée Camille Soula,
Toulouse 31400, France
e-mail: carlo.cossu@imft.fr

In the following, the notation ∂J/∂q is used to denote the gradient of the function J(q). Similarly, ∂q/∂g is used to denote the Jacobian of the function q(g).

We consider a solution strategy where the state equation is required to be satisfied exactly, not only by the optimal solution but also at every iterate needed to reach it. The count is different, however, if other strategies are used in which F = 0 is not enforced during the iterations.

The derivative of the cost function J(q,g) with respect to q and g, considered as independent variables, will be denoted by ∂J/∂q and ∂J/∂g, respectively. However, the cost function can also be considered as a composite function of g alone, J[q(g),g], where q(g) is obtained from F(q,g) = 0. In this case, the derivative of the cost function with respect to g is called the total derivative DJ/Dg and can be obtained with the chain rule as

DJ/Dg = ∂J/∂g + (∂q/∂g)^T ∂J/∂q
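As a concrete numerical check of the chain rule above, one can compare the total derivative DJ/Dg against a finite-difference approximation of J[q(g),g]. The linear constraint F(q,g) = Aq + Bg = 0 and the quadratic cost below are illustrative assumptions, not data from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n)) + 3.0 * np.eye(n)  # well-conditioned, invertible
B = rng.standard_normal((n, n))
gamma2 = 0.5

def q_of_g(g):
    # state equation F(q, g) = A q + B g = 0  =>  q(g) = -A^{-1} B g
    return np.linalg.solve(A, -B @ g)

def J(q, g):
    # quadratic cost: deviation of the state plus cost of the control
    return 0.5 * q @ q + 0.5 * gamma2 * g @ g

g = rng.standard_normal(n)
q = q_of_g(g)

# chain rule: DJ/Dg = dJ/dg + (dq/dg)^T dJ/dq
dJdq = q                                # gradient w.r.t. q alone
dJdg = gamma2 * g                       # gradient w.r.t. g alone
dqdg = np.linalg.solve(A, -B)           # Jacobian of q(g)
grad_chain = dJdg + dqdg.T @ dJdq

# finite-difference check of the total derivative DJ/Dg
eps = 1e-6
grad_fd = np.empty(n)
for i in range(n):
    e = np.zeros(n)
    e[i] = eps
    grad_fd[i] = (J(q_of_g(g + e), g + e) - J(q_of_g(g - e), g - e)) / (2 * eps)

print(np.max(np.abs(grad_chain - grad_fd)))
```

The two gradients agree to finite-difference accuracy, which is the standard sanity check when implementing adjoint-based gradients.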
This condition of course does not apply when the constrained and the unconstrained optima coincide. In that case, the path goes through the unconstrained optimum. Formally, one can think of the path in this case as being tangent to a level curve of zero diameter.

We use the usual dot-product notation a·F = a_i F_i (with summation over the repeated index) for the inner product of the vectors a and F.

The optimization procedure considered here should be regarded as purely pedagogical. Using the standard definition of the norm of linear operators, R = ‖−L⁻¹‖₂, the considered problem is therefore one of computing a vector-induced L2 matrix norm. For matrix norm computations, other highly efficient algorithms already exist; they are, e.g., coded into the norm() functions of Matlab, Octave, and Scilab and are used for comparison in the exercises below.
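A minimal sketch of such a vector-induced L2 norm computation (in Python/NumPy rather than the Matlab/Octave of the exercises; the matrix A and iteration count are illustrative) is power iteration on AᵀA, which mirrors the repeated direct/adjoint applications of the iterative optimization, compared against the built-in norm():

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))   # illustrative stand-in for the operator

# Power iteration on A^T A: the vector-induced L2 norm of A is the
# square root of the largest eigenvalue of A^T A, i.e. the largest
# amplification ||A g|| / ||g|| over all input vectors g.
g = rng.standard_normal(5)
g /= np.linalg.norm(g)
for _ in range(500):
    w = A.T @ (A @ g)             # one "direct + adjoint" application
    g = w / np.linalg.norm(w)
R_iter = np.linalg.norm(A @ g)    # converged amplification

R_ref = np.linalg.norm(A, 2)      # reference: built-in 2-norm
print(R_iter, R_ref)
```

The same comparison works verbatim against norm(A, 2) in Matlab or Octave, as suggested for the exercises.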

This latter technique is called “check-pointing” and is widely used (see, e.g., Ref. [10]).
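The idea can be sketched in a few lines: store the state only at every k-th step of the forward integration, then recompute the intermediate states from the nearest checkpoint when the backward (adjoint) sweep needs them. The scalar update rule step() below is a hypothetical stand-in for the forward solver:

```python
import numpy as np

def step(q):
    # hypothetical explicit forward update (placeholder dynamics)
    return q + 0.01 * (-q + np.sin(q))

N, k = 100, 10    # total steps, check-pointing interval
q0 = 1.0

# forward pass: store only every k-th state
chk = {0: q0}
q = q0
for n in range(N):
    q = step(q)
    if (n + 1) % k == 0:
        chk[n + 1] = q

def state_at(n):
    # recompute q_n from the nearest stored checkpoint <= n
    m = k * (n // k)
    q = chk[m]
    for _ in range(n - m):
        q = step(q)
    return q

# full-storage reference trajectory, for comparison
full = [q0]
for n in range(N):
    full.append(step(full[-1]))

print(abs(state_at(73) - full[73]))   # recomputed state matches stored one
```

Memory drops from N stored states to N/k checkpoints, at the price of at most k − 1 extra forward steps per adjoint step — the standard check-pointing trade-off.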

It is also possible to tune the weight given to each component of the deviation from the target and of the cost of the control, defining cost functions of the type (q(T)−p)·Q_q(q(T)−p) and g·Q_g g with symmetric positive-definite matrices Q_q and Q_g.

Replacing the optimality condition in the state equation, a linear system is found in the variable φ = {q, a}, whose solution is easily found in terms of the initial conditions q0 and a0. The initial condition on the costate, a0, can then be expressed in terms of q0 and p by using a(T) = q(T) − p. This finally gives the complete solution for the state and the costate. The control can then be easily found using the optimality condition.
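This procedure can be sketched for a scalar example (the dynamics dq/dt = λq + g, the terminal-target cost, and all parameter values below are illustrative assumptions, not the data of Ref. [7]):

```python
import numpy as np
from scipy.linalg import expm

# assumed scalar problem: dq/dt = lam*q + g,
# J = (1/2)(q(T) - p)^2 + (gamma2/2) * int_0^T g^2 dt
lam, gamma2, T, p, q0 = -1.0, 0.1, 6.0, 1.0, 0.0

# The optimality condition g = -a/gamma2 turns the state + costate
# equations into one linear system d(phi)/dt = M phi in phi = (q, a):
#   dq/dt = lam*q - a/gamma2,   da/dt = -lam*a
M = np.array([[lam, -1.0 / gamma2],
              [0.0, -lam]])
E = expm(M * T)                  # phi(T) = E @ phi(0)

# terminal condition a(T) = q(T) - p fixes a0 in terms of q0 and p:
#   E[1,1]*a0 = E[0,0]*q0 + E[0,1]*a0 - p
a0 = (E[0, 0] * q0 - p) / (E[1, 1] - E[0, 1])

qT, aT = E @ np.array([q0, a0])
print(qT, aT)                    # a(T) should equal q(T) - p
g0 = -a0 / gamma2                # optimal control at t = 0
```

With a0 known, the optimal control at any time follows from g(t) = −a(t)/γ², with a(t) given by the costate row of the solved linear system.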

This example is taken from Ref. [7], p. 77.

Manuscript received June 29, 2013; final manuscript received October 31, 2013; published online March 19, 2014. Assoc. Editor: Ardeshir Hanifi.

Appl. Mech. Rev 66(2), 024801 (Mar 19, 2014) (15 pages) Paper No: AMR-13-1046; doi: 10.1115/1.4026482

The goal of these lecture notes is to provide an informal introduction to the use of variational techniques for solving constrained optimization problems with equality constraints and full state information. The use of the Lagrangian augmented cost function and the variational techniques by which the adjoint equation and the optimality condition are found are introduced through examples, ranging from steady finite-dimensional problems to unsteady initial-boundary value problems. Gradient methods based on sensitivity and adjoint equation solutions are also mentioned.

Copyright © 2014 by ASME



Fig. 1

Example of constrained optimization in the control-state plane. The level sets of the cost function J(q,g) are the concentric circles (dotted lines), with values increasing outward. Solutions satisfying the constraint F(q,g) = 0 lie on the straight (solid) line. The constrained minimum (small filled circle) is attained at a point where the constraint curve is tangent to one of the level sets of J.

Fig. 2

Example of optimal control to reach the target state p = 1 at T = 6 for different values of the control cost parameter γ². Top panel: optimal control laws g(t). Middle panel: optimal state evolution q(t). Bottom panel: dependence of the final state q(T) on the control cost parameter γ².

Fig. 3

Example of Riccati-based feedback control of a non-normal, linearly stable system supporting transient energy growth. The optimal transient growth G(T) of the uncontrolled system (solid black line) is compared to the Riccati-based feedback controlled cases for decreasing values of the cost parameter γ. For large values of γ the optimal transient growth is reduced, and it is completely suppressed when γ is lowered enough: for γ = 10, e.g., G_max = 1.



