ssm package

Subpackages

Submodules

ssm.inference module

class ssm.inference.OnlineKalmanFilter(B, Q, m0, V0, Z, R)[source]

Bases: object

Class implementing the online Kalman filter algorithm for the following linear dynamical system:

\[ \begin{align}\begin{aligned}x_n &= B\ x_{n-1} + w_n,\ \textrm{where}\ w_n\sim\mathcal{N}(w_n|0, Q)\ \textrm{and}\ x_n\in\Re^M\\y_n &= Z\ x_n + v_n,\ \textrm{where}\ v_n\sim\mathcal{N}(v_n|0, R)\ \textrm{and}\ y_n\in\Re^N\\x_0&\sim\mathcal{N}(x_0|m_0, V_0)\end{aligned}\end{align} \]

Example use:

online_kf = OnlineKalmanFilter(B, Q, m0, V0, Z, R)
x_pred, P_pred = online_kf.predict()

for y in ys:
    x_filt, P_filt = online_kf.update(y)
    x_pred, P_pred = online_kf.predict()

A script using OnlineKalmanFilter for tracking the position of a mouse can be found here

Note 1:

Invocations of predict() and update(y) should alternate. That is, each invocation of update(y) should be preceded by an invocation of predict(), and each invocation of predict() (except the first one) should be preceded by an invocation of update(y).

Note 2:

Observations \(y_n\) should be uniformly sampled in time (i.e., acquired at regular intervals).
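The predict/update recursions that OnlineKalmanFilter implements can be sketched in plain NumPy. This is an illustrative re-derivation, not the library's code; the helper names `kf_predict` and `kf_update` are hypothetical:

```python
import numpy as np

# Sketch of the Kalman recursions behind OnlineKalmanFilter (illustrative only).
def kf_predict(x, P, B, Q):
    # x_{n|n-1} = B x_{n-1|n-1},  P_{n|n-1} = B P B^T + Q
    return B @ x, B @ P @ B.T + Q

def kf_update(y, x, P, Z, R):
    innov = y - Z @ x                       # innovation
    S = Z @ P @ Z.T + R                     # innovation covariance
    K = P @ Z.T @ np.linalg.inv(S)          # Kalman gain
    return x + K @ innov, P - K @ Z @ P

# One-dimensional random walk observed with noise
B = np.array([[1.0]]); Q = np.array([[0.01]])
Z = np.array([[1.0]]); R = np.array([[0.1]])
x, P = np.zeros(1), np.eye(1)               # m0, V0

# predict() and update(y) alternate, with predict() first, as in the class docs
for y in [0.9, 1.1, 1.0]:
    x, P = kf_predict(x, P, B, Q)
    x, P = kf_update(np.array([y]), x, P, Z, R)

assert P[0, 0] < 1.0                        # uncertainty shrinks after updates
```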

property B
property P
property Q
property R
property V0
property Z
property m0
predict()[source]

Predicts the next state.

Returns:

(state, covariance): tuple containing the predicted state vector and covariance matrix.

update(y)[source]

Updates the current state and covariance.

Parameters:

y – observation \(\in\Re^N\)

Returns:

(state, covariance): tuple containing the updated state vector and covariance matrix.

property x
class ssm.inference.TimeVaryingOnlineKalmanFilter[source]

Bases: object

Class implementing the time-varying and online Kalman filter algorithm for the following linear dynamical system:

\[ \begin{align}\begin{aligned}x_n &= B_n\ x_{n-1} + w_n,\ \textrm{where}\ w_n\sim\mathcal{N}(w_n|0, Q_n)\ \textrm{and}\ x_n\in\Re^M\\y_n &= Z_n\ x_n + v_n,\ \textrm{where}\ v_n\sim\mathcal{N}(v_n|0, R_n)\ \textrm{and}\ y_n\in\Re^N\\x_0&\sim\mathcal{N}(x_0|m_0, V_0)\end{aligned}\end{align} \]

Example use:

online_kf = TimeVaryingOnlineKalmanFilter()
x_pred, P_pred = online_kf.predict(x=m0, P=V0, B=B, Q=Q)

for y in ys:
    x_filt, P_filt = online_kf.update(y=y, x=x_pred, P=P_pred, Z=Z, R=R)
    x_pred, P_pred = online_kf.predict(x=x_filt, P=P_filt, B=B, Q=Q)

A script using OnlineKalmanFilter for tracking the position of a mouse can be found here

Note 1:

Invocations of predict() and update(y) should alternate. That is, each invocation of update(y) should be preceded by an invocation of predict(), and each invocation of predict() (except the first one) should be preceded by an invocation of update(y).

Note 2:

Observations \(y_n\) should be uniformly sampled in time (i.e., acquired at regular intervals).
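Because this class is stateless, the caller threads the state and the per-step matrices \(B_n, Q_n, Z_n, R_n\) through every call. A minimal NumPy sketch of that pattern (illustrative only; `tv_predict`/`tv_update` mirror but are not the library's methods):

```python
import numpy as np

# Sketch of the stateless, time-varying Kalman recursions (illustrative only).
def tv_predict(x, P, B, Q):
    return B @ x, B @ P @ B.T + Q

def tv_update(y, x, P, Z, R):
    S = Z @ P @ Z.T + R
    K = P @ Z.T @ np.linalg.inv(S)
    return x + K @ (y - Z @ x), P - K @ Z @ P

x, P = np.zeros(1), np.eye(1)                     # m0, V0
ys = [np.array([0.5]), np.array([0.7])]
for n, y in enumerate(ys):
    B_n = np.array([[1.0]]); Q_n = np.array([[0.01 * (n + 1)]])  # Q varies with n
    Z_n = np.array([[1.0]]); R_n = np.array([[0.1]])
    x, P = tv_predict(x, P, B_n, Q_n)
    x, P = tv_update(y, x, P, Z_n, R_n)

assert 0.0 < P[0, 0] < 1.0
```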

predict(x, P, B, Q)[source]

Predicts the next state.

Returns:

(state, covariance): tuple containing the predicted state vector and covariance matrix.

update(y, x, P, Z, R)[source]

Updates the current state and covariance.

Parameters:

y – observation \(\in\Re^N\)

Returns:

(state, covariance): tuple containing the updated state vector and covariance matrix.

ssm.inference.ekf_forecast(xnn, Pnn, B, Bdot, Q, h)[source]

Forecasts the mean and covariance of an EKF filter state at horizon h.

N: dimensionality of observations

M: dimensionality of state

T: number of observations

Parameters:

xnn – filtered state means; \(\Re^{M\times 1\times T}\)

Pnn – filtered state covariances; \(\Re^{M\times M\times T}\)

B – state transition function; \(\Re^M\rightarrow\Re^M\)

Bdot – Jacobian of state transition function; \(\Re^M\rightarrow\Re^{M\times M}\)

Q – state noise covariance function; \(\Re^M\rightarrow\Re^{M\times M}\)

h – forecasting horizon; int

Returns:

(x_pred, P_pred): tuple containing the forecasted mean and covariance at time n+h

Return type:

tuple
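An h-step EKF forecast propagates the mean through the (possibly nonlinear) transition B and the covariance through its Jacobian Bdot. A single-state sketch under those assumptions (illustrative only; the library's ekf_forecast operates on batched arrays):

```python
import numpy as np

# Sketch of an h-step EKF forecast (illustrative only).
def ekf_forecast_sketch(x, P, B, Bdot, Q, h):
    for _ in range(h):
        F = Bdot(x)                    # linearize around the current mean
        x = B(x)                       # mean: x_{n+1} = B(x_n)
        P = F @ P @ F.T + Q(x)         # covariance: P_{n+1} = F P F^T + Q
    return x, P

# Scalar tanh dynamics as a toy example
B = lambda x: np.tanh(x)
Bdot = lambda x: np.diag(1.0 / np.cosh(x) ** 2)
Q = lambda x: 0.01 * np.eye(1)

x_pred, P_pred = ekf_forecast_sketch(np.array([0.5]), np.eye(1), B, Bdot, Q, h=3)
assert x_pred.shape == (1,) and P_pred[0, 0] > 0
```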

ssm.inference.ekf_forecast_batch(xnn, Pnn, B, Bdot, Q, m0, V0, h)[source]
ssm.inference.filterEKF_withMissingValues_torch(y, B, Bdot, Q, m0, V0, Z, Zdot, R, Sn_reg=1e-05)[source]

Extended Kalman filter implementation of the algorithm described in Chapter 10 of Durbin and Koopman 2012.

N: dimensionality of observations

M: dimensionality of state

T: number of observations

Parameters:

y – time series to be smoothed; \(\Re^{N\times T}\)

B – state transition function; \(\Re^M\rightarrow\Re^M\)

Bdot – Jacobian of state transition function; \(\Re^M\rightarrow\Re^{M\times M}\)

Q – state noise covariance function; \(\Re^M\rightarrow\Re^{M\times M}\)

m0 – initial state mean; \(\Re^{M}\)

V0 – initial state covariance; \(\Re^{M\times M}\)

Z – state-to-observation function; \(\Re^M\rightarrow\Re^N\)

Zdot – Jacobian of state-to-observation function; \(\Re^M\rightarrow\Re^{N\times M}\)

R – observations covariance function; \(\Re^M\rightarrow\Re^{N\times N}\)

Returns:

{xnn1, Pnn1, xnn, Pnn, innov, K, Sn, logLike}: xnn1 and Pnn1 (predicted means, MxT, and covariances, MxMxT), xnn and Pnn (filtered means, MxT, and covariances, MxMxT), innov (innovations, NxT), K (Kalman gain matrices, MxNxT), Sn (innovations covariance matrices, NxNxT), logLike (data loglikelihood, float).

Return type:

dictionary
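A single EKF step with missing-value handling can be sketched as follows: when an observation is entirely NaN, the update is skipped and the predicted moments are carried forward. This is an assumption-laden illustration, not the library's implementation (which also returns gains, innovations, and the log-likelihood):

```python
import numpy as np

# Sketch of one EKF predict+update step with NaN-coded missing observations.
def ekf_step(y, x, P, B, Bdot, Q, Z, Zdot, R, Sn_reg=1e-5):
    F = Bdot(x)                        # linearize the transition at x
    x_pred = B(x)
    P_pred = F @ P @ F.T + Q(x)
    if np.all(np.isnan(y)):            # missing observation: no update
        return x_pred, P_pred
    H = Zdot(x_pred)                   # linearize the observation at x_pred
    S = H @ P_pred @ H.T + R(x_pred) + Sn_reg * np.eye(len(y))
    K = P_pred @ H.T @ np.linalg.inv(S)
    return x_pred + K @ (y - Z(x_pred)), P_pred - K @ H @ P_pred

# With identity functions this reduces to the ordinary linear Kalman step
I = np.eye(1)
f = lambda x: x; J = lambda x: I; noise = lambda x: 0.1 * I
x, P = ekf_step(np.array([np.nan]), np.zeros(1), I, f, J, noise, f, J, noise)
assert P[0, 0] > 1.0                   # covariance grows when y is missing
x2, P2 = ekf_step(np.array([0.2]), np.zeros(1), I, f, J, noise, f, J, noise)
assert P2[0, 0] < 1.1                  # covariance shrinks when y is observed
```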

ssm.inference.filterLDS_SS_withMissingValues_np(y, B, Q, m0, V0, Z, R)[source]

Kalman filter implementation of the algorithm described in Shumway and Stoffer 2006.

Parameters:

y – time series to be smoothed; numpy array (NxT)

B – state transition matrix; numpy matrix (MxM)

Q – state noise covariance matrix; numpy matrix (MxM)

m0 – initial state mean; one-dimensional numpy array (M)

V0 – initial state covariance; numpy matrix (MxM)

Z – state-to-observation matrix; numpy matrix (NxM)

R – observations covariance matrix; numpy matrix (NxN)

Returns:

{xnn1, Pnn1, xnn, Pnn, innov, K, Sn, logLike}: xnn1 and Pnn1 (predicted means, MxT, and covariances, MxMxT), xnn and Pnn (filtered means, MxT, and covariances, MxMxT), innov (innovations, NxT), K (Kalman gain matrices, MxNxT), Sn (innovations covariance matrices, NxNxT), logLike (data loglikelihood, float).

Return type:

dictionary
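The forward pass this function performs can be sketched compactly: NaN-coded samples skip the update, and the innovations-form log-likelihood is accumulated as in Shumway and Stoffer. The sketch below is illustrative only; the names, the dictionary keys it returns, and its shapes are simplified relative to the library:

```python
import numpy as np

# Sketch of a forward Kalman pass with missing values (illustrative only).
def kalman_filter_sketch(y, B, Q, m0, V0, Z, R):
    M, T = B.shape[0], y.shape[1]
    x, P, logLike = m0.copy(), V0.copy(), 0.0
    xnn = np.empty((M, T)); Pnn = np.empty((M, M, T))
    for n in range(T):
        x, P = B @ x, B @ P @ B.T + Q                  # predict
        if not np.any(np.isnan(y[:, n])):              # update only if observed
            innov = y[:, n] - Z @ x
            S = Z @ P @ Z.T + R
            K = P @ Z.T @ np.linalg.inv(S)
            logLike += -0.5 * (np.log(np.linalg.det(S))
                               + innov @ np.linalg.solve(S, innov)
                               + len(innov) * np.log(2 * np.pi))
            x, P = x + K @ innov, P - K @ Z @ P
        xnn[:, n], Pnn[:, :, n] = x, P
    return {"xnn": xnn, "Pnn": Pnn, "logLike": logLike}

y = np.array([[1.0, np.nan, 1.2]])                     # one missing sample
res = kalman_filter_sketch(y, np.eye(1), 0.01 * np.eye(1),
                           np.zeros(1), np.eye(1), np.eye(1), 0.1 * np.eye(1))
assert res["xnn"].shape == (1, 3) and np.isfinite(res["logLike"])
```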

ssm.inference.filterLDS_SS_withMissingValues_torch(y, B, Q, m0, V0, Z, R)[source]

Kalman filter implementation of the algorithm described in Shumway and Stoffer 2006.

Parameters:

y – time series to be smoothed; numpy array (NxT)

B – state transition matrix; numpy matrix (MxM)

Q – state noise covariance matrix; numpy matrix (MxM)

m0 – initial state mean; one-dimensional numpy array (M)

V0 – initial state covariance; numpy matrix (MxM)

Z – state-to-observation matrix; numpy matrix (NxM)

R – observations covariance matrix; numpy matrix (NxN)

Returns:

{xnn1, Pnn1, xnn, Pnn, innov, K, Sn, logLike}: xnn1 and Pnn1 (predicted means, MxT, and covariances, MxMxT), xnn and Pnn (filtered means, MxT, and covariances, MxMxT), innov (innovations, NxT), K (Kalman gain matrices, MxNxT), Sn (innovations covariance matrices, NxNxT), logLike (data loglikelihood, float).

Return type:

dictionary

ssm.inference.lds_forecast(xnn, Pnn, B, Q, h)[source]

Forecasts the mean and covariance of a Kalman filter state at horizon h.

Args:

xnn (numpy array or torch tensor): filtered state mean at time n, shape (state_dim,)

Pnn (numpy array or torch tensor): filtered state covariance at time n, shape (state_dim, state_dim)

B (numpy or torch matrix): state transition matrix, shape (state_dim, state_dim)

Q (numpy or torch matrix): process noise covariance matrix, shape (state_dim, state_dim)

h (int): forecast horizon

Returns:

x_pred (Tensor): forecasted mean at time n+h

P_pred (Tensor): forecasted covariance at time n+h

ssm.inference.lds_forecast_batch(xnn, Pnn, B, Q, m0, V0, h)[source]

Forecasts the mean and covariance of a batch of Kalman filter states at horizon h. The first forecasted sample corresponds to sample time h-1 and the last forecasted sample corresponds to sample time N+h-1; thus this function returns N+1 samples.

Parameters:

xnn – filtered state mean at time n; numpy array or torch tensor, shape (state_dim,)

Pnn – filtered state covariance at time n; numpy array or torch tensor, shape (state_dim, state_dim)

B – state transition matrix; numpy or torch matrix, shape (state_dim, state_dim)

Q – process noise covariance matrix; numpy or torch matrix, shape (state_dim, state_dim)

h – forecast horizon; int

Returns:

(x_pred, P_pred): tuple containing the forecasted means and covariances

Return type:

tuple

ssm.inference.logLikeEKF_withMissingValues_torch(y, B, Bdot, Q, m0, V0, Z, Zdot, R, Sn_reg=1e-05)[source]

Calculation of the log-likelihood for the extended Kalman filter model.

N: dimensionality of observations

M: dimensionality of state

T: number of observations

Parameters:

y – time series to be smoothed; \(\Re^{N\times T}\)

B – state transition function; \(\Re^M\rightarrow\Re^M\)

Bdot – Jacobian of state transition function; \(\Re^M\rightarrow\Re^{M\times M}\)

Q – state noise covariance function; \(\Re^M\rightarrow\Re^{M\times M}\)

m0 – initial state mean; \(\Re^{M}\)

V0 – initial state covariance; \(\Re^{M\times M}\)

Z – state-to-observation function; \(\Re^M\rightarrow\Re^N\)

Zdot – Jacobian of state-to-observation function; \(\Re^M\rightarrow\Re^{N\times M}\)

R – observations covariance function; \(\Re^M\rightarrow\Re^{N\times N}\)

Returns:

{xnn1, Pnn1, xnn, Pnn, innov, K, Sn, logLike}: xnn1 and Pnn1 (predicted means, MxT, and covariances, MxMxT), xnn and Pnn (filtered means, MxT, and covariances, MxMxT), innov (innovations, NxT), K (Kalman gain matrices, MxNxT), Sn (innovations covariance matrices, NxNxT), logLike (data loglikelihood, float).

Return type:

dictionary

ssm.inference.logLikeLDS_withMissingValues_torch(y, B, Q, m0, V0, Z, R)[source]

Computes the log-likelihood of the data under the linear dynamical system, using the Kalman filter algorithm described in Shumway and Stoffer 2006.

N: dimensionality of observations

M: dimensionality of state

T: number of observations

Parameters:

y – time series to be smoothed; numpy array (NxT)

B – state transition matrix; numpy matrix (MxM)

Q – state noise covariance matrix; numpy matrix (MxM)

m0 – initial state mean; one-dimensional numpy array (M)

V0 – initial state covariance; numpy matrix (MxM)

Z – state-to-observation matrix; numpy matrix (NxM)

R – observations covariance matrix; numpy matrix (NxN)

Returns:

logLike: data log-likelihood

Return type:

float

ssm.inference.log_like_observations_given_forecasts_ekf(h, y, x_pred, P_pred, Z, Zdot, R)[source]
ssm.inference.log_like_observations_given_forecasts_lds(h, y, x_pred, P_pred, Z, R)[source]
ssm.inference.smoothLDS_SS(B, xnn, Pnn, xnn1, Pnn1, m0, V0)[source]

Kalman smoother implementation

Parameters:

B – state transition matrix; numpy matrix (MxM)

xnn – filtered means (from Kalman filter); numpy array (MxT)

Pnn – filtered covariances (from Kalman filter); numpy array (MxMxT)

xnn1 – predicted means (from Kalman filter); numpy array (MxT)

Pnn1 – predicted covariances (from Kalman filter); numpy array (MxMxT)

m0 – initial state mean; one-dimensional numpy array (M)

V0 – initial state covariance; numpy matrix (MxM)

Returns:

{xnN, PnN, Jn, x0N, V0N, J0}: xnN and PnN (smoothed means, MxT, and covariances, MxMxT), Jn (smoothing gain matrices, MxMxT), x0N and V0N (smoothed initial state mean, M, and covariance, MxM), J0 (initial smoothing gain matrix, MxM).
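The backward (Rauch-Tung-Striebel) recursion this smoother performs runs over the filtered and predicted moments produced by the filter. A scalar-state sketch of the recursion (illustrative only; the library version also returns the smoothing gains and initial-state quantities):

```python
import numpy as np

# Sketch of the RTS smoothing recursion (illustrative only).
def rts_smoother_sketch(B, xnn, Pnn, xnn1, Pnn1):
    M, T = xnn.shape
    xnN, PnN = xnn.copy(), Pnn.copy()            # last smoothed = last filtered
    for n in range(T - 2, -1, -1):
        # Smoothing gain: J_n = P_{n|n} B^T P_{n+1|n}^{-1}
        J = Pnn[:, :, n] @ B.T @ np.linalg.inv(Pnn1[:, :, n + 1])
        xnN[:, n] = xnn[:, n] + J @ (xnN[:, n + 1] - xnn1[:, n + 1])
        PnN[:, :, n] = Pnn[:, :, n] + J @ (PnN[:, :, n + 1] - Pnn1[:, :, n + 1]) @ J.T
    return xnN, PnN

# Tiny two-step example with a scalar state
B = np.eye(1)
xnn = np.array([[0.8, 1.0]]); xnn1 = np.array([[0.0, 0.8]])
Pnn = np.stack([0.5 * np.eye(1), 0.4 * np.eye(1)], axis=2)
Pnn1 = np.stack([1.0 * np.eye(1), 0.6 * np.eye(1)], axis=2)
xnN, PnN = rts_smoother_sketch(B, xnn, Pnn, xnn1, Pnn1)
assert PnN[0, 0, 0] <= Pnn[0, 0, 0]   # smoothing cannot increase uncertainty
```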

ssm.learning module

ssm.learning.em_SS_LDS(y, B0, Q0, Z0, R0, m0_0, V0_0, max_iter=50, tol=0.0001, vars_to_estimate={'B': True, 'Q': True, 'R': True, 'V0': True, 'Z': True, 'm0': True})[source]
ssm.learning.em_SS_tracking(y, B, sigma_a0, Qe, Z, R_0, m0_0, V0_0, vars_to_estimate={'R': True, 'V0': True, 'm0': True, 'sigma_a': True}, max_iter=50, tolerance_change=1e-09)[source]
ssm.learning.lag1CovSmootherLDS_SS(Z, KN, B, Pnn, Jn, J0)[source]
ssm.learning.posteriorCorrelationMatrices(Z, B, KN, Pnn, xnN, PnN, x0N, V0N, Jn, J0)[source]
ssm.learning.scipy_optimize_SS_tracking_diagV0(y, B, sigma_ax0, sigma_ay0, Qe, Z, sqrt_diag_R_0, m0_0, sqrt_diag_V0_0, max_iter=50, disp=True)[source]
ssm.learning.scipy_optimize_SS_tracking_fullV0(y, B, sigma_a0, Qe, Z, diag_R_0, m0_0, V0_0, max_iter, disp=True)[source]
ssm.learning.torch_adam_optimize_SS_tracking_diagV0(y, B, sigma_a0, Qe, Z, sqrt_diag_R_0, m0_0, sqrt_diag_V0_0, max_iter=50, lr=0.001, eps=1e-08, vars_to_estimate={'R': True, 'V0': True, 'm0': True, 'sigma_a': True})[source]
ssm.learning.torch_lbfgs_optimize_SS_tracking_diagV0(y, B, Qe, Z, sigma_a0, pos_x_R_std0, pos_y_R_std0, m0_0, sqrt_diag_V0_0, max_iter=20, lr=1.0, tolerance_grad=1e-07, tolerance_change=1e-09, n_epochs=100, tol=1e-06, line_search_fn='strong_wolfe', vars_to_estimate={'m0': True, 'pos_x_R_std': True, 'pos_y_R_std': True, 'sigma_a': True, 'sqrt_diag_V0': True})[source]
ssm.learning.torch_lbfgs_optimize_kinematicsHO_logLikeEKF_diagV0(dt, y, sigma_a0, cos_theta_Q_std0, sin_theta_Q_std0, omega_Q_std0, pos_x_R_std0, pos_y_R_std0, cos_theta_R_std0, sin_theta_R_std0, alpha0, m0_kinematics_0, m0_HO_0, sqrt_diag_V0_kinematics_0, sqrt_diag_V0_HO_0, max_iter=20, lr=1.0, tolerance_grad=1e-07, tolerance_change=1e-09, line_search_fn='strong_wolfe', n_epochs=100, tol=1e-06, vars_to_estimate={'alpha': True, 'cos_theta_Q_std': True, 'm0_HO': True, 'm0_kinematics': False, 'omega_Q_std': True, 'sigma_a': False, 'sin_theta_Q_std': True, 'sqrt_cos_theta_R_std': True, 'sqrt_diag_V0_HO': True, 'sqrt_diag_V0_kinematics': False, 'sqrt_pos_x_R_std': False, 'sqrt_pos_y_R_std': False, 'sqrt_sin_theta_R_std': True}, disp=True)[source]

ssm.simulation module

ssm.simulation.simulateLDS(T, B, Q, m0, V0, Z, R)[source]

Simulation of linear dynamical system

Parameters:

T – number of observations; int

B – state transition matrix; numpy matrix (MxM)

Q – state noise covariance matrix; numpy matrix (MxM)

m0 – initial state mean; one-dimensional numpy array (M)

V0 – initial state covariance; numpy matrix (MxM)

Z – state-to-observation matrix; numpy matrix (NxM)

R – observations covariance matrix; numpy matrix (NxN)

Returns:

(initial state, states, observations)

Return type:

tuple(numpy.array[M], numpy.array[M, T], numpy.array[N, T])
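Simulating the system amounts to sampling \(x_0\sim\mathcal{N}(m_0, V_0)\) and then iterating the state and observation equations with Gaussian noise. A self-contained sketch (illustrative only; `simulate_lds_sketch` is a hypothetical name and, unlike simulateLDS, it does not return the initial state separately):

```python
import numpy as np

# Sketch of simulating the linear dynamical system (illustrative only).
def simulate_lds_sketch(T, B, Q, m0, V0, Z, R, rng=None):
    rng = rng or np.random.default_rng(0)
    M, N = B.shape[0], Z.shape[0]
    x = rng.multivariate_normal(m0, V0)                # x_0 ~ N(m0, V0)
    xs, ys = np.empty((M, T)), np.empty((N, T))
    for n in range(T):
        x = B @ x + rng.multivariate_normal(np.zeros(M), Q)         # w_n
        xs[:, n] = x
        ys[:, n] = Z @ x + rng.multivariate_normal(np.zeros(N), R)  # v_n
    return xs, ys

xs, ys = simulate_lds_sketch(5, np.eye(2), 0.01 * np.eye(2),
                             np.zeros(2), np.eye(2), np.eye(2), 0.1 * np.eye(2))
assert xs.shape == (2, 5) and ys.shape == (2, 5)
```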

ssm.simulation.simulateNDSgaussianNoise(T, B, Q, m0, V0, Z, R)[source]

Simulation of nonlinear dynamical system with Gaussian noise

Parameters:

T – number of observations; int

B – state transition function; \(\Re^M\rightarrow\Re^M\)

Q – state noise covariance function; \(\Re^M\rightarrow\Re^{M\times M}\)

m0 – initial state mean; one-dimensional numpy array (M)

V0 – initial state covariance; numpy matrix (MxM)

Z – state-to-observation function; \(\Re^M\rightarrow\Re^N\)

R – observations covariance function; \(\Re^M\rightarrow\Re^{N\times N}\)

Returns:

(initial state, states, observations)

Return type:

tuple(numpy.array[M], numpy.array[M, T], numpy.array[N, T])
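The nonlinear analogue replaces the fixed matrices with functions of the current state. A sketch under that assumption (illustrative only; `simulate_nds_sketch` is hypothetical and returns only the state and observation sequences):

```python
import numpy as np

# Sketch of simulating a nonlinear dynamical system with Gaussian noise:
# B, Q, Z, R are functions of the current state (illustrative only).
def simulate_nds_sketch(T, B, Q, m0, V0, Z, R, rng=None):
    rng = rng or np.random.default_rng(0)
    x = rng.multivariate_normal(m0, V0)            # x_0 ~ N(m0, V0)
    xs, ys = [], []
    for _ in range(T):
        x = B(x) + rng.multivariate_normal(np.zeros(len(x)), Q(x))
        y = Z(x) + rng.multivariate_normal(np.zeros(len(Z(x))), R(x))
        xs.append(x); ys.append(y)
    return np.stack(xs, axis=1), np.stack(ys, axis=1)

xs, ys = simulate_nds_sketch(
    4, B=np.tanh, Q=lambda x: 0.01 * np.eye(1),
    m0=np.zeros(1), V0=np.eye(1),
    Z=lambda x: x, R=lambda x: 0.1 * np.eye(1))
assert xs.shape == (1, 4) and ys.shape == (1, 4)
```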

Module contents