Cleveland State University
Department of Electrical Engineering and Computer Science
EEC 644/744, Optimal Control Systems
- Homework problems that are labeled "Kirk" are taken from the book Optimal Control Theory, by Donald Kirk.
- The www.turnitin.com class id is 10248786, and the password is "optimal".
Wed. August 26
1. Problem 1.2 in Kirk
2. Matlab: Simulate the system of Problem 1.2. Plot the two states. Also plot the analytical solutions that you derived. Compare the analytical results with the simulation results. Hand in your Matlab code.
3. In class: Linear systems quiz
Fri. August 28
Last day to drop with full refund
Wed. September 2
1. Suppose that Q is a non-symmetric matrix, S is the symmetric part of Q (that is, S = (Q + Q^T)/2), and x is an arbitrary vector. Prove the following: x^T * Q * x = x^T * S * x.
2. Show the instructor a copy of Kirk's book with your name written in the front.
3. Prove the four properties in the left column of Table 1-1 in Kirk's book.
4. Find the state transition matrices for the systems of Problem 1-12(a), (b), and (i) in Kirk's book.
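Items 1 and 4 can both be spot-checked numerically before you write the proofs. The sketch below is in Python/NumPy rather than Matlab; the matrix Q is random, and the A matrix is an illustrative double integrator, not one of the Kirk 1-12 systems.

```python
import numpy as np
from scipy.linalg import expm

# --- Item 1: numerical spot-check of x^T Q x = x^T S x ---
# S = (Q + Q^T)/2 is the symmetric part of Q; the values are arbitrary.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 4))      # a non-symmetric matrix
S = 0.5 * (Q + Q.T)                  # symmetric part of Q
x = rng.standard_normal(4)           # an arbitrary vector
assert abs(x @ Q @ x - x @ S @ x) < 1e-12   # antisymmetric part contributes zero

# --- Item 4: state transition matrix Phi(t) = e^{A t} ---
# The A below is a double integrator, not one of the assigned systems.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
t = 2.0
Phi = expm(A * t)                    # for this nilpotent A, Phi = I + A t
assert np.allclose(Phi, [[1.0, t], [0.0, 1.0]])
print("checks passed")
```

A numerical check like this does not replace the proof, but it catches sign and transpose mistakes quickly.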
Wed. September 9
1. For the parameterized cart control problem that we discussed in class with u=k1+k2*t, create a table using pencil-and-paper calculations (with some help from a calculator or Matlab) that shows x(tf) and the integral of the square of the control, for the following values of q/r: 100, 1, 1/100, and 1/1000.
2. Verify your answers to question 1 using a simulation of the cart control problem.
Wed. September 16
1. Kirk problem 3.10
2. Simulate the nonlinear inverted pendulum with a discrete-time LQR. You will need to linearize and discretize the system in order to derive the LQR. Use the system parameters that are in Pendulum.m on the course web site. Use an identity matrix for P(0) and Q.
a. Plot the states between 0 and 6 seconds, and give the magnitude of the largest closed-loop eigenvalue, for R = 0.1, 1, and 10. Explain how and why the value of R affects the system response and the closed-loop eigenvalues.
b. For R = 1, plot the elements of the state feedback matrix as a function of time.
c. Use the steady-state state feedback matrix (for R = 1) in your controller. How does system performance change compared to time-varying feedback control?
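The time-varying gains and the steady-state gain in parts (a)-(c) both come from the backward Riccati recursion. A minimal sketch follows, in Python/NumPy rather than Matlab, with a placeholder discretized double integrator standing in for the linearized pendulum from Pendulum.m:

```python
import numpy as np

# Finite-horizon discrete-time LQR via the backward Riccati recursion:
#   K_k = (R + B^T P_{k+1} B)^{-1} B^T P_{k+1} A
#   P_k = Q + A^T P_{k+1} (A - B K_k)
# (A, B) below are a placeholder discretization (double integrator,
# T = 0.01 s), NOT the linearized pendulum model.
T = 0.01
A = np.array([[1.0, T], [0.0, 1.0]])
B = np.array([[0.5 * T**2], [T]])
Q = np.eye(2)           # identity state weighting, as the problem asks
R = np.array([[1.0]])   # control weighting (sweep 0.1, 1, 10 for part a)

N = 600                 # 6-second horizon at T = 0.01 s
P = np.eye(2)           # identity terminal/initial Riccati matrix
gains = []
for _ in range(N):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
    gains.append(K.copy())

K_ss = gains[-1]        # approximately the steady-state feedback gain
eig_mag = np.max(np.abs(np.linalg.eigvals(A - B @ K_ss)))
print(K_ss, eig_mag)    # eig_mag < 1 for a stabilizing gain
```

Substituting your own (A, B) and sweeping R reproduces the quantities parts (a)-(c) ask for; plotting the rows of `gains` against time gives part (b).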
Wed. September 23
1. Kirk problem 3.23. Don't try to solve it analytically - just solve for K numerically for both parts (a) and (b). Plot K for both parts (a) and (b) and compare the solutions.
2. Consider the system xdot = x + u and the cost function J = x^2(tf) + integral (r u^2) dt.
a. Use the HJB equations to analytically solve for the optimal control. Hint: Assume that J* = s(t)[x(t)]^2, where s(t) is a function to be determined. This should eventually lead you to the equation sdot = s^2/r - 2s. This can be written as ds / (s^2/r - 2s) = dt. If you know s(tf), you can integrate both sides and analytically solve for s(t).
b. Given x(0) = 5, plot x(t) for a couple of different values of r to show how the value of r affects the state trajectory.
c. Find the analytical solution to the steady-state controller as a function of r.
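Before solving part (a) analytically, it can help to see the hinted structure numerically. The sketch below (Python rather than Matlab; r, tf, and the step size are arbitrary choices) integrates sdot = s^2/r - 2s backward from s(tf) = 1 and then simulates the state under the resulting feedback u = -s x / r:

```python
import numpy as np

# Sketch for xdot = x + u, J = x(tf)^2 + \int r u^2 dt, using the
# hinted form J* = s(t) x(t)^2, which gives u* = -s x / r and
# sdot = s^2/r - 2 s with s(tf) = 1.
r, tf, dt = 1.0, 2.0, 1e-4
n = int(tf / dt)

# Integrate s backward in time from the terminal condition s(tf) = 1.
s = np.empty(n + 1)
s[n] = 1.0
for k in range(n, 0, -1):
    s[k - 1] = s[k] - dt * (s[k] ** 2 / r - 2.0 * s[k])

# Simulate the state forward under the optimal feedback.
x = np.empty(n + 1)
x[0] = 5.0
for k in range(n):
    u = -s[k] * x[k] / r
    x[k + 1] = x[k] + dt * (x[k] + u)

print(s[0], x[n])   # s(0) and the final state
```

Comparing this numerical s(t) against your analytical solution from part (a), and rerunning with different r as in part (b), is a quick sanity check.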
Wed. September 30
1. State the fundamental theorem of the calculus of variations.
2. Kirk problem 4.2. Hint: Use proof by contradiction. Show that if h(t) is nonzero, then there exists some continuous delta-x that gives a nonzero integral.
3. Find the extremals of the following functionals.
a. The integral from a to b of [(dy/dx)^2 / x^3] dx
b. The integral from a to b of [y^2 + (dy/dx)^2 + 2*y*exp(x)] dx
4. Plot the solution to the brachistochrone problem for the following final conditions on (x, y): (1, 1), (1, 5), and (5, 1). When you create your plots, use an aspect ratio of 1:1 so that your plot is not skewed; that is, one unit of distance in the x-direction should be equivalent to one unit of distance in the y-direction. Hint: You can use Matlab's fzero function to solve for theta at the final time. Then as x increments from 0 to its final value, you can use fzero at each increment to solve for theta, and given theta, you can solve for y. Hand in your plots and your Matlab code.
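The hinted root-finding step can be sketched as follows, in Python (scipy's brentq playing the role of Matlab's fzero). This assumes the standard cycloid form of the brachistochrone solution, x = a(theta - sin theta), y = a(1 - cos theta), with y measured downward from the start point:

```python
import numpy as np
from scipy.optimize import brentq

# Given a final point (xf, yf), theta_f solves
#   (theta - sin theta) / (1 - cos theta) = xf / yf,
# and then a = yf / (1 - cos theta_f) scales the cycloid.
def cycloid(xf, yf, npts=200):
    g = lambda th: (th - np.sin(th)) / (1.0 - np.cos(th)) - xf / yf
    theta_f = brentq(g, 1e-9, 2.0 * np.pi - 1e-9)   # fzero-style bracketed root
    a = yf / (1.0 - np.cos(theta_f))
    theta = np.linspace(0.0, theta_f, npts)
    return a * (theta - np.sin(theta)), a * (1.0 - np.cos(theta))

x, y = cycloid(1.0, 1.0)
print(x[-1], y[-1])   # reproduces the requested endpoint (1, 1)
```

When plotting in matplotlib, `ax.set_aspect('equal')` gives the 1:1 aspect ratio the problem asks for.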
Wed. October 7
As preparation for your term project, write a project proposal describing a nonlinear system to which you would like to apply the optimal control methods studied in this class. Include the equations that describe the dynamics of the system. Provide a timeline that divides the term project into individual tasks: describe which tasks you need to accomplish and when they will be done. Write your proposal in a formal style with correct formatting, referencing, etc. Your proposal should be between 5 and 10 pages. Submit your proposal to the "Proposal" assignment at www.turnitin.com (class id 10248786, password "optimal") by 11:59 PM.
Wed. October 14
Kirk problems 5.16, 5.17(a), and 5.23. For problem 5.23(d), note that tf and y(tf) are the only constraints at the final time.
Wed. October 21
Consider a 1-ohm resistor and 1-henry inductor in series with a voltage source. The control input of the system is the voltage source, and the state is the current. We want to minimize q[x(1)-2]^2 + integral [u^2]. That is, we want to drive the final current to 2 amps at the final time, which is 1 second, and we want to minimize a combination of the final current error and the control usage. The parameter q defines the penalty on the final state error relative to the control usage. The initial current through the circuit is zero.
1. Write the differential equation that describes the circuit.
2. Use the state transition matrix to write the state as a function of t and u.
3. Write the Hamiltonian.
4. Write the Euler-Lagrange equations.
5. Solve the Euler-Lagrange equations for u as a function of t, q, and x(1).
6. Substitute u(t) from part (5) into the expression for the state from part (2) to find x(1) as a function of q.
7. Find u as a function of t and q.
8. Find x as a function of t and q.
9. Use Matlab to simulate the RL circuit using the optimal u(t). Hand in your Matlab code, and plots of u(t) and x(t) for q=1, q=10, and q=100.
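For part 9, a simulation scaffold like the following can be adapted; it is in Python rather than Matlab, and the control below is only a placeholder exponential shape (your answers to parts 5-7 determine the actual constant in terms of q):

```python
import numpy as np

# RL circuit with R = 1 ohm, L = 1 H: xdot = -x + u, x(0) = 0,
# horizon 1 s.  u(t) = c * e^t is a placeholder; solving parts 5-7
# gives the optimal c as a function of q.
def simulate(u_of_t, dt=1e-4, tf=1.0):
    n = int(tf / dt)
    x = 0.0
    cost_u = 0.0
    for k in range(n):
        t = k * dt
        u = u_of_t(t)
        cost_u += u * u * dt          # accumulates \int u^2 dt
        x += dt * (-x + u)            # forward-Euler step of xdot = -x + u
    return x, cost_u

q = 10.0
c = 1.0                               # placeholder constant, not the optimum
x1, Ju = simulate(lambda t: c * np.exp(t))
total = q * (x1 - 2.0) ** 2 + Ju      # total cost for this control
print(x1, total)
```

Swapping in your optimal u(t) and looping over q = 1, 10, 100 produces the plots the problem asks for, and comparing the simulated x(1) against your analytical x(1) from part 6 validates the derivation.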
Mon. October 26
Wed. October 28
1. Kirk problem 5.24(a)
2. Kirk problem 5.25
a. For part (a), find the eigenvalues of the A matrix.
b. For part (b), first find the optimal control in terms of the costate. Then suppose that the optimal control has a magnitude that is not equal to 1, and find the corresponding costate value. Then find the corresponding Hamiltonian value. Then show that this Hamiltonian value violates the optimality conditions.
c. For part (c), first find the general form for the optimal costate. Then show that this general form could change sign an unlimited number of times, and that this fact implies that the optimal control could also change sign an unlimited number of times.
Wed. November 4
1. Hand in an updated progress report for your term project (due 11:59 PM at turnitin.com).
2. Come to class prepared to give a 10-minute presentation on the status of your project.
Mon. November 9
1. Kirk problem 6.34. Hand in your source code, plots showing the convergence of gradient descent, and plots of the optimal control as a function of time.
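The forward/backward structure of a gradient-descent (steepest-descent) solver for an optimal control problem can be sketched as follows. This is Python rather than Matlab, and it uses a made-up scalar example, not the system from problem 6.34: xdot = x + u, J = integral of (x^2 + u^2) over [0, 1], x(0) = 1.

```python
import numpy as np

# Each iteration: forward pass simulates the state and accumulates the
# cost; backward pass integrates the costate p (with p(1) = 0, since
# there is no terminal cost); dJ/du is proportional to 2u + p, which
# gives the steepest-descent direction for the control history.
dt, N, x0 = 0.01, 100, 1.0
u = np.zeros(N)                 # initial guess for the control history
alpha = 0.05                    # step size, chosen small for stability
costs = []

for it in range(300):
    # Forward pass: simulate x and accumulate the cost.
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + dt * (x[k] + u[k])
    J = dt * np.sum(x[:N] ** 2 + u ** 2)
    costs.append(J)

    # Backward pass: discrete adjoint of the Euler-discretized dynamics.
    p = np.zeros(N + 1)
    for k in range(N - 1, -1, -1):
        p[k] = p[k + 1] + dt * (p[k + 1] + 2.0 * x[k])

    # Steepest-descent update on the control history.
    u -= alpha * (2.0 * u + p[1:])

print(costs[0], costs[-1])      # the cost should decrease across iterations
```

Plotting `costs` against the iteration number gives a convergence plot of the kind the assignment asks for.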
Mon. November 23
1. We found E(J) for a stochastic LQR problem with cost function J. Define an LQR problem (maybe an RC circuit, or a two-state Newtonian system, or a linearized version of your project problem). Implement stochastic LQR control. Run your system many times (maybe 100 times or so) and verify that the numerical value for E(J) matches the analytical expression for E(J).
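The Monte Carlo comparison can be sketched as follows, in Python rather than Matlab, for a made-up scalar first-order system (not a specific circuit); the E(J) formula used is the standard additive-noise result, E(J) = P(0) x0^2 plus the sum of P(k+1) times the noise variance:

```python
import numpy as np

# Scalar stochastic LQR: x_{k+1} = a x_k + b u_k + w_k, w ~ N(0, W),
# J = sum_{k=0}^{N-1} (q x_k^2 + r u_k^2) + qf x_N^2.
a, b, q, r, qf, W = 1.0, 1.0, 1.0, 1.0, 1.0, 0.01
x0, N = 1.0, 20

# Backward Riccati recursion; P[k] is the cost-to-go weight at step k.
P = np.empty(N + 1)
P[N] = qf
K = np.empty(N)
for k in range(N - 1, -1, -1):
    K[k] = b * P[k + 1] * a / (r + b * P[k + 1] * b)
    P[k] = q + a * P[k + 1] * (a - b * K[k])

# Analytical expected cost with additive noise of variance W.
EJ = P[0] * x0**2 + np.sum(P[1:]) * W

# Monte Carlo: run the closed-loop system many times and average J.
rng = np.random.default_rng(1)
trials = 20000
costs = np.empty(trials)
for i in range(trials):
    x, J = x0, 0.0
    for k in range(N):
        u = -K[k] * x
        J += q * x * x + r * u * u
        x = a * x + b * u + rng.normal(0.0, np.sqrt(W))
    costs[i] = J + qf * x * x
print(costs.mean(), EJ)   # the two should agree closely
```

Replacing the scalar (a, b) with your own system (RC circuit, two-state Newtonian system, or a linearized project model) gives the verification the problem asks for.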
November 30 & December 2
Oral project reports, 10 minutes per student
Comprehensive final exam
Written term project due at www.turnitin.com
Last Revised: December 1, 2015