Introduction: Definition of the control problem (regulation and tracking).
Possible configurations for solving the control problem (cascade
compensation, feedforward control). Specifications and performance criteria (in
the time and frequency domains). Feedback theory: Feedback for improving
stability properties, feedback for rejection of external disturbances, feedback
for robustness against parameter variations, and the effect of measurement noise.
Sensitivity functions at zero frequency and at other frequencies.
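The role of the sensitivity function at zero and nonzero frequencies can be illustrated numerically. A minimal sketch, assuming an illustrative loop transfer function L(s) = 10 / (s (s + 1)) (not taken from the course material): the integrator in L drives S(0) to zero, so constant disturbances are rejected, while far above crossover |S| returns to 1 and feedback has no effect.

```python
# Hedged sketch: sensitivity function S(jw) = 1 / (1 + L(jw)) for an
# assumed loop transfer function L(s) = 10 / (s (s + 1)).
import numpy as np

def L(s):
    # illustrative loop transfer function with one pure integrator
    return 10.0 / (s * (s + 1.0))

def S(w):
    # sensitivity evaluated on the imaginary axis
    return 1.0 / (1.0 + L(1j * w))

S_low = abs(S(1e-3))    # near zero frequency: almost perfect disturbance rejection
S_high = abs(S(1e3))    # far above crossover: feedback no longer helps

print(f"|S| at w = 1e-3 rad/s: {S_low:.2e}")   # close to 0
print(f"|S| at w = 1e+3 rad/s: {S_high:.4f}")  # close to 1
```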
Linearization and linear control: Linearization around trajectories and equilibrium points.
Stability and local control (including physical examples).
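A classic physical example of linearization around equilibrium points is the damped pendulum. A minimal sketch, assuming the dynamics th'' = -(g/l) sin(th) - c th' with illustrative parameter values: the Jacobian at the hanging equilibrium has all eigenvalues in the left half-plane (locally stable), while at the inverted equilibrium one eigenvalue moves to the right half-plane (locally unstable).

```python
# Hedged sketch: linearizing an assumed damped pendulum around its two
# equilibria; states x = [theta, theta'].
import numpy as np

g_over_l, c = 9.81, 0.5   # illustrative parameter values

def jacobian(theta_eq):
    """Jacobian A = df/dx of the pendulum dynamics at (theta_eq, 0)."""
    return np.array([[0.0, 1.0],
                     [-g_over_l * np.cos(theta_eq), -c]])

A_down = jacobian(0.0)     # hanging equilibrium
A_up = jacobian(np.pi)     # inverted equilibrium

eig_down = np.linalg.eigvals(A_down)
eig_up = np.linalg.eigvals(A_up)

print("eigenvalues, hanging :", eig_down)   # all real parts negative
print("eigenvalues, inverted:", eig_up)     # one real part positive
```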
Graphical techniques for control system analysis: Use of the Nyquist
criterion, Bode plots, and root locus plots; output feedback and compensation networks.
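Stability margins can be read directly off frequency-response (Bode) data. A minimal sketch, assuming an illustrative loop L(s) = 1 / (s (s + 1)): find the gain crossover frequency where |L| drops through 1, then take the phase margin as 180 degrees plus the phase there.

```python
# Hedged sketch: phase margin from frequency-response data for an
# assumed loop transfer function L(s) = 1 / (s (s + 1)).
import numpy as np

w = np.logspace(-2, 2, 20000)               # frequency grid, rad/s
Ljw = 1.0 / (1j * w * (1j * w + 1.0))       # loop frequency response

mag = np.abs(Ljw)
i = np.argmax(mag < 1.0)                    # |L| is decreasing: first index below 1
w_gc = w[i]                                 # gain crossover frequency
phase_deg = np.degrees(np.angle(Ljw[i]))
phase_margin = 180.0 + phase_deg

print(f"gain crossover ~ {w_gc:.3f} rad/s, phase margin ~ {phase_margin:.1f} deg")
```

For this particular loop the margin works out to roughly 52 degrees, consistent with the analytic crossover at w^2 = (sqrt(5) - 1) / 2.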
Design techniques of control systems using state equations: Pole shifting by
state feedback, design of a state observer and observer pole placement, closed
loops with the observer and pole-shifting controller in the loop.
State-space extension for inclusion of pure integrators in the loop.
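Pole shifting and observer design can be sketched in a few lines. A minimal example, assuming an illustrative double-integrator plant x' = Ax + Bu, y = Cx (not taken from the course material): the controller gain places the poles of A - BK, and the observer gain is obtained from the dual problem (A^T, C^T), with observer poles chosen faster than the controller poles.

```python
# Hedged sketch: state-feedback pole shifting and observer design by
# duality, using scipy.signal.place_poles on an assumed double integrator.
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# controller: place the closed-loop poles of A - B K
K = signal.place_poles(A, B, [-2.0, -3.0]).gain_matrix

# observer: place the poles of A - L C via the dual pair (A^T, C^T),
# chosen faster than the controller poles
L = signal.place_poles(A.T, C.T, [-8.0, -10.0]).gain_matrix.T

ctrl_poles = np.linalg.eigvals(A - B @ K)
obs_poles = np.linalg.eigvals(A - L @ C)
print("controller poles:", np.sort(ctrl_poles.real))  # [-3, -2]
print("observer poles  :", np.sort(obs_poles.real))   # [-10, -8]
```

In the combined closed loop, the separation principle lets the controller and observer poles be assigned independently, which is why the two placements above can be done one at a time.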
Introduction to quadratic criteria as a tool for the design of state feedback and state estimators.
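Designing state feedback from a quadratic criterion J = integral of (x'Qx + u'Ru) dt reduces to solving an algebraic Riccati equation. A minimal sketch, assuming the same illustrative double-integrator plant and unit weights Q = I, R = 1 (all assumptions, not course data):

```python
# Hedged sketch: state feedback from a quadratic criterion via the
# continuous-time algebraic Riccati equation (CARE).
import numpy as np
from scipy import linalg

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weight (illustrative)
R = np.array([[1.0]])  # input weight (illustrative)

# solve A'P + PA - P B R^-1 B' P + Q = 0 for P
P = linalg.solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # optimal gain K = R^-1 B' P

cl_poles = np.linalg.eigvals(A - B @ K)
print("LQR gain K:", K)                # [[1, sqrt(3)]] for this plant
print("closed-loop poles:", cl_poles)  # stable by construction
```

For this plant and these weights the Riccati equation can be solved by hand, giving K = [1, sqrt(3)], which is a useful check on the numerical result.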