Merge pull request #90 from JuliaControl/current_form
added: support for the current form for all the state estimators
franckgaga authored Aug 8, 2024
2 parents 092cf30 + 15f270a commit d10a731
Showing 22 changed files with 837 additions and 416 deletions.
2 changes: 1 addition & 1 deletion Project.toml
@@ -1,7 +1,7 @@
name = "ModelPredictiveControl"
uuid = "61f9bdb8-6ae4-484a-811f-bbf86720c31c"
authors = ["Francis Gagnon"]
version = "0.22.1"
version = "0.23.0"

[deps]
ControlSystemsBase = "aaaaaaaa-a6ca-5380-bf3e-84a91bcd477e"
2 changes: 1 addition & 1 deletion docs/src/internals/predictive_control.md
@@ -26,7 +26,7 @@ ModelPredictiveControl.init_matconstraint_mpc
## Update Quadratic Optimization

```@docs
ModelPredictiveControl.initpred!(::PredictiveController, ::LinModel, ::Any, ::Any, ::Any, ::Any, ::Any)
ModelPredictiveControl.initpred!(::PredictiveController, ::LinModel, ::Any, ::Any, ::Any, ::Any)
ModelPredictiveControl.linconstraint!(::PredictiveController, ::LinModel)
```

6 changes: 6 additions & 0 deletions docs/src/internals/state_estim.md
@@ -52,6 +52,12 @@ ModelPredictiveControl.remove_op!
ModelPredictiveControl.init_estimate!
```

## Correct Estimate

```@docs
ModelPredictiveControl.correct_estimate!
```

## Update Estimate

!!! info
36 changes: 21 additions & 15 deletions docs/src/manual/linmpc.md
@@ -62,18 +62,18 @@ plant simulator to test the design. Its sampling time is 2 s thus the control pe
## Linear Model Predictive Controller

A linear model predictive controller (MPC) will control both the water level ``y_L`` and
temperature ``y_T`` in the tank. The tank level should also never fall below 45:
temperature ``y_T`` in the tank. The tank level should also never fall below 48:

```math
y_L ≥ 45
y_L ≥ 48
```

We design our [`LinMPC`](@ref) controllers by including the linear level constraint with
[`setconstraint!`](@ref) (`±Inf` values should be used when there is no bound):

```@example 1
mpc = LinMPC(model, Hp=10, Hc=2, Mwt=[1, 1], Nwt=[0.1, 0.1])
mpc = setconstraint!(mpc, ymin=[45, -Inf])
mpc = setconstraint!(mpc, ymin=[48, -Inf])
```

in which `Hp` and `Hc` keyword arguments are respectively the predictive and control
@@ -116,10 +116,11 @@ function test_mpc(mpc, model)
i == 101 && (ry = [54, 30])
i == 151 && (ul = -20)
y = model() # simulated measurements
preparestate!(mpc, y) # prepare mpc state estimate for current iteration
u = mpc(ry) # or equivalently : u = moveinput!(mpc, ry)
u_data[:,i], y_data[:,i], ry_data[:,i] = u, y, ry
updatestate!(mpc, u, y) # update mpc state estimate
updatestate!(model, u + [0; ul]) # update simulator with the load disturbance
updatestate!(mpc, u, y) # update mpc state estimate for next iteration
updatestate!(model, u + [0; ul]) # update simulator with load disturbance
end
return u_data, y_data, ry_data
end
@@ -131,10 +132,10 @@ nothing # hide
The [`LinMPC`](@ref) objects are also callable as an alternative syntax for
[`moveinput!`](@ref). It is worth mentioning that additional information like the optimal
output predictions ``\mathbf{Ŷ}`` can be retrieved by calling [`getinfo`](@ref) after
solving the problem. Also, calling [`updatestate!`](@ref) on the `mpc` object updates its
internal state for the *NEXT* control period (this is by design, see
[Functions: State Estimators](@ref) for justifications). That is why the call is done at the
end of the `for` loop. The same logic applies for `model`.
solving the problem. Also, calling [`preparestate!`](@ref) on the `mpc` object prepares the
estimates for the current control period, and [`updatestate!`](@ref) updates them for the
next one (the same logic applies for `model`). This is why [`preparestate!`](@ref) is called
before the controller, and [`updatestate!`](@ref), after.
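
To summarize, each iteration of the closed-loop simulation follows the same pattern. The
skeleton below is only a sketch of the calling order, reusing the `mpc` and `model` objects
defined above:

```julia
y = model()              # get the current (simulated or real) measurements
preparestate!(mpc, y)    # correct the state estimate with the newest measurement
u = mpc(ry)              # or moveinput!(mpc, ry): solve the MPC problem and get u
updatestate!(mpc, u, y)  # predict the state estimate for the next control period
updatestate!(model, u)   # advance the plant simulator
```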

Lastly, we plot the closed-loop test with the `Plots` package:

@@ -143,7 +144,7 @@ using Plots
function plot_data(t_data, u_data, y_data, ry_data)
p1 = plot(t_data, y_data[1,:], label="meas.", ylabel="level")
plot!(p1, t_data, ry_data[1,:], label="setpoint", linestyle=:dash, linetype=:steppost)
plot!(p1, t_data, fill(45,size(t_data)), label="min", linestyle=:dot, linewidth=1.5)
plot!(p1, t_data, fill(48,size(t_data)), label="min", linestyle=:dot, linewidth=1.5)
p2 = plot(t_data, y_data[2,:], label="meas.", legend=:topleft, ylabel="temp.")
plot!(p2, t_data, ry_data[2,:],label="setpoint", linestyle=:dash, linetype=:steppost)
p3 = plot(t_data,u_data[1,:],label="cold", linetype=:steppost, ylabel="flow rate")
@@ -162,11 +163,11 @@ real-life control problems. Constructing a [`LinMPC`](@ref) with input integrato

```@example 1
mpc2 = LinMPC(model, Hp=10, Hc=2, Mwt=[1, 1], Nwt=[0.1, 0.1], nint_u=[1, 1])
mpc2 = setconstraint!(mpc2, ymin=[45, -Inf])
mpc2 = setconstraint!(mpc2, ymin=[48, -Inf])
```

does accelerate the rejection of the load disturbance and eliminates the level constraint
violation:
does accelerate the rejection of the load disturbance and almost eliminates the level
constraint violation:

```@example 1
setstate!(model, zeros(model.nx))
@@ -202,7 +203,7 @@ mpc_mhe = LinMPC(estim, Hp=10, Hc=2, Mwt=[1, 1], Nwt=[0.1, 0.1])
mpc_mhe = setconstraint!(mpc_mhe, ymin=[45, -Inf])
```

The rejection is slightly improved:
The rejection is not improved here:

```@example 1
setstate!(model, zeros(model.nx))
Expand All @@ -215,6 +216,10 @@ savefig("plot3_LinMPC.svg"); nothing # hide

![plot3_LinMPC](plot3_LinMPC.svg)

This is because the more performant `direct=true` version of the [`MovingHorizonEstimator`](@ref)
is not implemented yet. The rejection will improve once this version is available
(coming soon).

## Adding Feedforward Compensation

Suppose that the load disturbance ``u_l`` of the last section is in fact caused by a
@@ -246,7 +251,7 @@ A [`LinMPC`](@ref) controller is constructed on this model:

```@example 1
mpc_d = LinMPC(model_d, Hp=10, Hc=2, Mwt=[1, 1], Nwt=[0.1, 0.1])
mpc_d = setconstraint!(mpc_d, ymin=[45, -Inf])
mpc_d = setconstraint!(mpc_d, ymin=[48, -Inf])
```

A new test function that feeds the measured disturbance ``\mathbf{d}`` to the controller is
@@ -264,6 +269,7 @@ function test_mpc_d(mpc_d, model)
i == 151 && (ul = -20)
d = ul .+ dop # simulated measured disturbance
y = model() # simulated measurements
preparestate!(mpc_d, y, d) # prepare estimate with the measured disturbance d
u = mpc_d(ry, d) # also feed the measured disturbance d to the controller
u_data[:,i], y_data[:,i], ry_data[:,i] = u, y, ry
updatestate!(mpc_d, u, y, d) # update estimate with the measured disturbance d
24 changes: 14 additions & 10 deletions docs/src/manual/nonlinmpc.md
@@ -89,11 +89,11 @@ method.
An [`UnscentedKalmanFilter`](@ref) estimates the plant state :

```@example 1
α=0.01; σQ=[0.1, 0.5]; σR=[0.5]; nint_u=[1]; σQint_u=[0.1]
α=0.01; σQ=[0.1, 1.0]; σR=[5.0]; nint_u=[1]; σQint_u=[0.1]
estim = UnscentedKalmanFilter(model; α, σQ, σR, nint_u, σQint_u)
```

The vectors `σQ` and σR `σR` are the standard deviations of the process and sensor noises,
The vectors `σQ` and `σR` are the standard deviations of the process and sensor noises,
respectively. The value for the velocity ``ω`` is higher here (`σQ` second value) since
the ``\dot{ω}(t)`` equation includes an uncertain parameter: the friction coefficient ``K``.
Also, the argument `nint_u` explicitly adds one integrating state at the model input, the
@@ -237,7 +238,8 @@ savefig("plot6_NonLinMPC.svg"); nothing # hide

![plot6_NonLinMPC](plot6_NonLinMPC.svg)

the new controller is able to recuperate more energy from the pendulum (i.e. negative work):
the new controller is able to recuperate a little more energy from the pendulum (i.e.
negative work):

```@example 1
Dict(:W_nmpc => calcW(res_yd), :W_empc => calcW(res2_yd))
@@ -262,12 +263,13 @@ linmodel = linearize(model, x=[π, 0], u=[0])
A [`SteadyKalmanFilter`](@ref) and a [`LinMPC`](@ref) are designed from `linmodel`:

```@example 1
skf = SteadyKalmanFilter(linmodel; σQ, σR, nint_u, σQint_u)
mpc = LinMPC(skf; Hp, Hc, Mwt, Nwt, Cwt=Inf)
mpc = setconstraint!(mpc, umin=[-1.5], umax=[+1.5])
```

The linear controller has difficulties to reject the 10° step disturbance:
The linear controller satisfactorily rejects the 10° step disturbance:

```@example 1
res_lin = sim!(mpc, N, [180.0]; plant, x_0=[π, 0], y_step=[10])
@@ -278,9 +280,10 @@ savefig("plot7_NonLinMPC.svg"); nothing # hide
![plot7_NonLinMPC](plot7_NonLinMPC.svg)

Solving the optimization problem of `mpc` with [`DAQP`](https://darnstrom.github.io/daqp/)
optimizer instead of the default `OSQP` solver can help here. It is indeed documented that
`DAQP` can perform better on small/medium dense matrices and unstable poles[^1], which is
obviously the case here (absolute value of unstable poles are greater than one):
optimizer instead of the default `OSQP` solver can improve the performance here. It is
indeed documented that `DAQP` can perform better on small/medium dense matrices and unstable
poles[^1], which is obviously the case here (the absolute values of the unstable poles are
greater than one):

[^1]: Arnström, D., Bemporad, A., and Axehill, D. (2022). A dual active-set solver for
embedded quadratic programming using recursive LDLᵀ updates. IEEE Trans. Autom. Contr.,
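
For reference, the `daqp` object passed through the `optim` keyword below can be built as a
regular `JuMP` model; this is only a sketch, and the construction used in the documentation
may include additional options:

```julia
using JuMP, DAQP
daqp = JuMP.Model(DAQP.Optimizer)  # JuMP model backed by the DAQP solver
```
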
@@ -305,7 +308,7 @@ mpc2 = LinMPC(skf; Hp, Hc, Mwt, Nwt, Cwt=Inf, optim=daqp)
mpc2 = setconstraint!(mpc2; umin, umax)
```

does improve the rejection of the step disturbance:
does slightly improve the rejection of the step disturbance:

```@example 1
res_lin2 = sim!(mpc2, N, [180.0]; plant, x_0=[π, 0], y_step=[10])
@@ -359,12 +362,13 @@ function sim_adapt!(mpc, nonlinmodel, N, ry, plant, x_0, x̂_0, y_step=[0])
x̂ = x̂_0
for i = 1:N
y = plant() + y_step
x̂ = preparestate!(mpc, y)
u = moveinput!(mpc, ry)
linmodel = linearize(nonlinmodel; u, x=x̂[1:2])
setmodel!(mpc, linmodel)
U_data[:,i], Y_data[:,i], Ry_data[:,i] = u, y, ry
x̂ = updatestate!(mpc, u, y) # update mpc state estimate
updatestate!(plant, u) # update plant simulator
updatestate!(mpc, u, y) # update mpc state estimate
updatestate!(plant, u) # update plant simulator
end
res = SimResult(mpc, U_data, Y_data; Ry_data)
return res
6 changes: 6 additions & 0 deletions docs/src/public/generic_func.md
@@ -19,6 +19,12 @@ setconstraint!
evaloutput
```

## Prepare State x

```@docs
preparestate!
```
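
A typical call is sketched below, assuming `estim` is a state estimator (or a predictive
controller) and `ym` holds the current measured outputs; see the docstring above for the
exact signature:

```julia
x̂ = preparestate!(estim, ym)  # corrected state estimate x̂_k(k) for the current period
```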

## Update State x

```@docs
22 changes: 9 additions & 13 deletions docs/src/public/state_estim.md
@@ -17,19 +17,15 @@ error with closed-loop control (offset-free tracking).
integral action is not necessarily desired. The options `nint_u=0` and `nint_ym=0`
disable it.

The estimators are all implemented in the predictor form (a.k.a. observer form), that is,
they all estimates at each discrete time ``k`` the states of the next period
``\mathbf{x̂}_k(k+1)``[^1]. In contrast, the filter form that estimates ``\mathbf{x̂}_k(k)``
is sometimes slightly more accurate.

[^1]: also denoted ``\mathbf{x̂}_{k+1|k}`` [elsewhere](https://en.wikipedia.org/wiki/Kalman_filter).

The predictor form comes in handy for control applications since the estimations come after
the controller computations, without introducing any additional delays. Moreover, the
[`moveinput!`](@ref) method of the predictive controllers does not automatically update the
estimates with [`updatestate!`](@ref). This allows applying the calculated inputs on the
real plant before starting the potentially expensive estimator computations (see
[Manual](@ref man_lin) for examples).
The estimators are all implemented in the current form (a.k.a. filter form) by default
to improve accuracy and robustness, that is, they all estimate at each discrete time ``k``
the states of the current period ``\mathbf{x̂}_k(k)``[^1] (using the newest measurements, see
[Manual](@ref man_lin) for examples). The predictor form (a.k.a. delayed form) is also
available with the option `direct=false`. This allows moving the estimator computations after
solving the MPC problem with [`moveinput!`](@ref), for when the estimations are expensive
(for instance, with the [`MovingHorizonEstimator`](@ref)).

[^1]: also denoted ``\mathbf{x̂}_{k|k}`` [elsewhere](https://en.wikipedia.org/wiki/Kalman_filter).
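
The sketch below illustrates where the estimator computations occur in a control loop for
each form; the variable names are illustrative and the exact calls required by each form may
differ slightly (see the [Manual](@ref man_lin) for complete examples):

```julia
# Current form (default, direct=true): correct the estimate with the newest
# measurement ym before solving the MPC problem.
preparestate!(mpc, ym)
u = moveinput!(mpc, ry)
updatestate!(mpc, u, ym)

# Predictor form (direct=false): the estimator computations can be deferred
# until after moveinput!, so u is applied to the plant without extra delay.
u = moveinput!(mpc, ry)
updatestate!(mpc, u, ym)
```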

!!! info
All the estimators support measured ``\mathbf{y^m}`` and unmeasured ``\mathbf{y^u}``
2 changes: 1 addition & 1 deletion src/ModelPredictiveControl.jl
@@ -25,7 +25,7 @@ import OSQP, Ipopt
export SimModel, LinModel, NonLinModel
export DiffSolver, RungeKutta
export setop!, setname!
export setstate!, setmodel!, updatestate!, evaloutput, linearize, linearize!
export setstate!, setmodel!, preparestate!, updatestate!, evaloutput, linearize, linearize!
export StateEstimator, InternalModel, Luenberger
export SteadyKalmanFilter, KalmanFilter, UnscentedKalmanFilter, ExtendedKalmanFilter
export MovingHorizonEstimator
28 changes: 15 additions & 13 deletions src/controller/construct.jl
@@ -44,7 +44,7 @@ The predictive controllers support both soft and hard constraints, defined by:
\mathbf{u_{min} - c_{u_{min}}} ϵ ≤&&\ \mathbf{u}(k+j) &≤ \mathbf{u_{max} + c_{u_{max}}} ϵ &&\qquad j = 0, 1 ,..., H_p - 1 \\
\mathbf{Δu_{min} - c_{Δu_{min}}} ϵ ≤&&\ \mathbf{Δu}(k+j) &≤ \mathbf{Δu_{max} + c_{Δu_{max}}} ϵ &&\qquad j = 0, 1 ,..., H_c - 1 \\
\mathbf{y_{min} - c_{y_{min}}} ϵ ≤&&\ \mathbf{ŷ}(k+j) &≤ \mathbf{y_{max} + c_{y_{max}}} ϵ &&\qquad j = 1, 2 ,..., H_p \\
\mathbf{x̂_{min} - c_{x̂_{min}}} ϵ ≤&&\ \mathbf{x̂}_{k-1}(k+j) &≤ \mathbf{x̂_{max} + c_{x̂_{max}}} ϵ &&\qquad j = H_p
\mathbf{x̂_{min} - c_{x̂_{min}}} ϵ ≤&&\ \mathbf{x̂}_{i}(k+j) &≤ \mathbf{x̂_{max} + c_{x̂_{max}}} ϵ &&\qquad j = H_p
\end{alignat*}
```
and also ``ϵ ≥ 0``. The last line is the terminal constraints applied on the states at the
@@ -94,8 +94,10 @@ LinMPC controller with a sample time Ts = 4.0 s, OSQP optimizer, SteadyKalmanFil
Terminal constraints provide closed-loop stability guarantees on the nominal plant model.
However, they can render the problem infeasible. In practice, a sufficiently large
prediction horizon ``H_p`` without terminal constraints is typically enough for
stability. Note that terminal constraints are applied on the augmented state vector
``\mathbf{x̂}`` (see [`SteadyKalmanFilter`](@ref) for details on augmentation).
stability. If `mpc.estim.direct==true`, the state is estimated at ``i = k`` (the current
time step), otherwise at ``i = k - 1``. Note that terminal constraints are applied on the
augmented state vector ``\mathbf{x̂}`` (see [`SteadyKalmanFilter`](@ref) for details on
augmentation).
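
For example, terminal-state bounds could be specified as sketched below; the `x̂min`/`x̂max`
keyword names and the `mpc.estim.nx̂` field are assumptions here, so check the
[`setconstraint!`](@ref) docstring for the exact interface:

```julia
nx̂ = mpc.estim.nx̂  # augmented state dimension (assumed field name)
mpc = setconstraint!(mpc, x̂min=fill(-100.0, nx̂), x̂max=fill(+100.0, nx̂))
```
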
For variable constraints, the bounds can be modified after calling [`moveinput!`](@ref),
that is, at runtime, but not the softness parameters ``\mathbf{c}``. It is not possible
@@ -433,16 +435,16 @@ The model predictions are evaluated from the deviation vectors (see [`setop!`](@
&= \mathbf{E ΔU} + \mathbf{F}
\end{aligned}
```
in which ``\mathbf{x̂_0}(k) = \mathbf{x̂}_{k-1}(k) - \mathbf{x̂_{op}}`` and
``\mathbf{x̂}_{k-1}(k)`` is the state estimated at the last control period. The predicted
outputs ``\mathbf{Ŷ_0}`` and measured disturbances ``\mathbf{D̂_0}`` respectively include
``\mathbf{ŷ_0}(k+j)`` and ``\mathbf{d̂_0}(k+j)`` values with ``j=1`` to ``H_p``, and input
increments ``\mathbf{ΔU}``, ``\mathbf{Δu}(k+j)`` from ``j=0`` to ``H_c-1``. The vector
``\mathbf{B}`` contains the contribution for non-zero state ``\mathbf{x̂_{op}}`` and state
update ``\mathbf{f̂_{op}}`` operating points (for linearization at non-equilibrium point, see
[`linearize`](@ref)). The stochastic predictions ``\mathbf{Ŷ_s=0}`` if `estim` is not a
[`InternalModel`](@ref), see [`init_stochpred`](@ref). The method also computes similar
matrices for the predicted terminal states at ``k+H_p``:
in which ``\mathbf{x̂_0}(k) = \mathbf{x̂}_{i}(k) - \mathbf{x̂_{op}}``, with ``i = k`` if
`estim.direct==true`, otherwise ``i = k - 1``. The predicted outputs ``\mathbf{Ŷ_0}`` and
measured disturbances ``\mathbf{D̂_0}`` respectively include ``\mathbf{ŷ_0}(k+j)`` and
``\mathbf{d̂_0}(k+j)`` values with ``j=1`` to ``H_p``, and input increments ``\mathbf{ΔU}``,
``\mathbf{Δu}(k+j)`` from ``j=0`` to ``H_c-1``. The vector ``\mathbf{B}`` contains the
contribution for non-zero state ``\mathbf{x̂_{op}}`` and state update ``\mathbf{f̂_{op}}``
operating points (for linearization at non-equilibrium point, see [`linearize`](@ref)). The
stochastic predictions ``\mathbf{Ŷ_s=0}`` if `estim` is not an [`InternalModel`](@ref), see
[`init_stochpred`](@ref). The method also computes similar matrices for the predicted
terminal states at ``k+H_p``:
```math
\begin{aligned}
\mathbf{x̂_0}(k+H_p) &= \mathbf{e_x̂ ΔU} + \mathbf{g_x̂ d_0}(k) + \mathbf{j_x̂ D̂_0}

2 comments on commit d10a731

@franckgaga
Member Author


@JuliaRegistrator register

Release notes:

BREAKING CHANGES

  • preparestate! should be called before solving the MPC optimization with moveinput! (see the migration sketch at the end of this comment)
  • current form by default for all StateEstimator
  • ym keyword argument of moveinput! removed (was only for InternalModel, I now use the new preparestate! method)
  • MovingHorizonEstimator advanced constructor (with complete covariance matrices): covestim and optim are now keyword arguments instead of positional arguments.

Changelog

  • added: current form for all StateEstimator except MovingHorizonEstimator, to improve accuracy and closed-loop robustness
  • reduce allocations for SimModel and StateEstimator instances by using internal buffers
  • multiple doc corrections
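
For users upgrading, the main code change is the extra call before the optimization. A
before/after sketch (the loop variables are illustrative):

```julia
# before (v0.22.1, predictor form)
u = moveinput!(mpc, ry)
updatestate!(mpc, u, y)

# after (v0.23.0, current form by default)
preparestate!(mpc, y)
u = moveinput!(mpc, ry)
updatestate!(mpc, u, y)
```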

@JuliaRegistrator


Registration pull request created: JuliaRegistries/General/112700

Tagging

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the github interface, or via:

git tag -a v0.23.0 -m "<description of version>" d10a731a53c3836d3dca25ec60326529eb07173b
git push origin v0.23.0
