lds package
Submodules
lds.inference module
- class lds.inference.OnlineKalmanFilter(B, Q, m0, V0, Z, R)[source]
Bases:
object
Class implementing the online Kalman filter algorithm for the following linear dynamical system:
\[ \begin{align}\begin{aligned}x_n &= B\ x_{n-1} + w_n,\ \textrm{where}\ w_n\sim\mathcal{N}(w_n|0, Q)\ \textrm{and}\ x_n\in\Re^M\\y_n &= Z\ x_n + v_n,\ \textrm{where}\ v_n\sim\mathcal{N}(v_n|0, R)\ \textrm{and}\ y_n\in\Re^N\\x_0&\sim\mathcal{N}(x_0|m_0, V_0)\end{aligned}\end{align} \]
Example use:
online_kf = OnlineKalmanFilter(B, Q, m0, V0, Z, R)
x_pred, P_pred = online_kf.predict()
for y in ys:
    x_filt, P_filt = online_kf.update(y)
    x_pred, P_pred = online_kf.predict()
A script using OnlineKalmanFilter for tracking the position of a mouse can be found here
- Note 1:
predict() and update(y) should be invoked in alternation: each invocation of update(y) should be preceded by an invocation of predict(), and each invocation of predict() (except the first one) should be preceded by an invocation of update(y).
- Note 2:
observations \(y_n\) should be sampled uniformly in time, since the model parameters are constant across time steps.
- property B
- property P
- property Q
- property R
- property V0
- property Z
- property m0
- predict()[source]
Predicts the next state.
- Returns:
(state, covariance): tuple containing the predicted state vector and covariance matrix.
- update(y)[source]
Updates the current state and covariance.
- Parameters:
y – observation \(\in\Re^N\)
- Returns:
(state, covariance): tuple containing the updated state vector and covariance matrix.
- property x
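The predict/update cycle above can be sketched end to end in plain NumPy. The snippet below re-implements the standard Kalman recursion for a scalar random walk rather than importing lds, so it illustrates the semantics of predict() and update(y) without depending on the library:

```python
import numpy as np

# Standalone sketch of the predict/update cycle for a scalar random walk:
# x_n = x_{n-1} + w_n, y_n = x_n + v_n (B = Z = 1).
rng = np.random.default_rng(0)
B, Q, Z, R = 1.0, 0.01, 1.0, 0.25  # scalar model for clarity
m0, V0 = 0.0, 1.0

T = 200
x_true = np.cumsum(rng.normal(scale=np.sqrt(Q), size=T))  # latent state
ys = x_true + rng.normal(scale=np.sqrt(R), size=T)        # noisy observations

x, P = m0, V0
x_filt = np.empty(T)
for n, y in enumerate(ys):
    # predict: x_{n|n-1} = B x_{n-1|n-1},  P_{n|n-1} = B P B + Q
    x, P = B * x, B * P * B + Q
    # update: K = P Z / (Z P Z + R)
    K = P * Z / (Z * P * Z + R)
    x, P = x + K * (y - Z * x), (1.0 - K * Z) * P
    x_filt[n] = x

mse_filt = np.mean((x_filt - x_true) ** 2)
mse_obs = np.mean((ys - x_true) ** 2)
```

With the slow state dynamics (Q = 0.01) and comparatively noisy observations (R = 0.25) used here, the filtered estimates track the latent state much more closely than the raw observations do (mse_filt well below mse_obs).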
- class lds.inference.TimeVaryingOnlineKalmanFilter[source]
Bases:
object
Class implementing the time-varying and online Kalman filter algorithm for the following linear dynamical system:
\[ \begin{align}\begin{aligned}x_n &= B_n\ x_{n-1} + w_n,\ \textrm{where}\ w_n\sim\mathcal{N}(w_n|0, Q_n)\ \textrm{and}\ x_n\in\Re^M\\y_n &= Z_n\ x_n + v_n,\ \textrm{where}\ v_n\sim\mathcal{N}(v_n|0, R_n)\ \textrm{and}\ y_n\in\Re^N\\x_0&\sim\mathcal{N}(x_0|m_0, V_0)\end{aligned}\end{align} \]
Example use:
online_kf = TimeVaryingOnlineKalmanFilter()
x, P = m0, V0
for y in ys:
    x, P = online_kf.predict(x=x, P=P, B=B, Q=Q)
    x, P = online_kf.update(y=y, x=x, P=P, Z=Z, R=R)
A script using TimeVaryingOnlineKalmanFilter for tracking the position of a mouse can be found here
- Note 1:
predict() and update(y) should be invoked in alternation: each invocation of update(y) should be preceded by an invocation of predict(), and each invocation of predict() (except the first one) should be preceded by an invocation of update(y).
- Note 2:
observations \(y_n\) should be sampled uniformly in time, since the per-step model parameters are assumed to describe equal time intervals.
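The per-step mathematics is the same as in the time-invariant case, except that the matrices B, Q, Z, R may change at every step and are therefore supplied to each call. Below is a minimal standalone NumPy sketch of one such step (a re-implementation of the equations above, not the library's own method signatures):

```python
import numpy as np

def kalman_step(x, P, B, Q, Z, R, y):
    """One predict+update cycle with step-specific matrices B, Q, Z, R."""
    # predict: x_{n|n-1} = B_n x_{n-1|n-1},  P_{n|n-1} = B_n P B_n' + Q_n
    x = B @ x
    P = B @ P @ B.T + Q
    # update: K_n = P Z_n' (Z_n P Z_n' + R_n)^{-1}
    S = Z @ P @ Z.T + R
    K = P @ Z.T @ np.linalg.inv(S)
    x = x + K @ (y - Z @ x)
    P = P - K @ Z @ P
    return x, P

# constant-velocity model in 1D: state = (position, velocity)
dt = 1.0
B = np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
Z = np.array([[1.0, 0.0]])   # only position is observed
R = np.array([[0.25]])

x, P = np.zeros(2), np.eye(2)
for y in ([0.1], [0.3], [0.5]):
    x, P = kalman_step(x, P, B, Q, Z, R, np.array(y))
```

Because the increasing position observations imply motion, the filter's velocity estimate (the second state component) becomes positive after a few steps even though velocity is never observed directly.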
- lds.inference.filterLDS_SS_withMissingValues_np(y, B, Q, m0, V0, Z, R)[source]
Kalman filter implementation of the algorithm described in Shumway and Stoffer 2006.
- Param:
y: time series to be filtered
- Type:
y: numpy array (NxT)
- Param:
B: state transition matrix
- Type:
B: numpy matrix (MxM)
- Param:
Q: state noise covariance matrix
- Type:
Q: numpy matrix (MxM)
- Param:
m0: initial state mean
- Type:
m0: one-dimensional numpy array (M)
- Param:
V0: initial state covariance
- Type:
V0: numpy matrix (MxM)
- Param:
Z: state to observation matrix
- Type:
Z: numpy matrix (NxM)
- Param:
R: observations covariance matrix
- Type:
R: numpy matrix (NxN)
- Returns:
{xnn1, Vnn1, xnn, Vnn, innov, K, Sn, logLike}: xnn1 and Vnn1 (predicted means, MxT, and covariances, MxMxT), xnn and Vnn (filtered means, MxT, and covariances, MxMxT), innov (innovations, NxT), K (Kalman gain matrices, MxNxT), Sn (innovations covariance matrices, NxNxT), logLike (data loglikelihood, float).
- Return type:
dictionary
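The "withMissingValues" behavior can be sketched in plain NumPy. The snippet below is a standalone re-implementation of the filtering recursion, under the assumed (and common, but not confirmed here) convention that a NaN column of y marks a missing sample, for which the update step is skipped and the predicted moments are carried forward:

```python
import numpy as np

def filter_with_missing(y, B, Q, m0, V0, Z, R):
    """Standalone sketch of Kalman filtering with missing samples.

    A column of y containing NaNs is treated as missing (an assumed
    convention; the library's exact handling may differ): the update
    step is skipped and the predicted moments are carried forward.
    """
    N, T = y.shape
    M = B.shape[0]
    xnn = np.empty((M, T))
    Vnn = np.empty((M, M, T))
    x, V = np.asarray(m0, dtype=float), np.asarray(V0, dtype=float)
    for n in range(T):
        # predict
        x = B @ x
        V = B @ V @ B.T + Q
        yn = y[:, n]
        if not np.any(np.isnan(yn)):
            # update (only when the sample is observed)
            S = Z @ V @ Z.T + R
            K = V @ Z.T @ np.linalg.inv(S)
            x = x + K @ (yn - Z @ x)
            V = V - K @ Z @ V
        xnn[:, n] = x
        Vnn[:, :, n] = V
    return xnn, Vnn

# toy run: 1D state observed directly, third sample missing
B = np.eye(1); Q = 0.1 * np.eye(1); Z = np.eye(1); R = 0.2 * np.eye(1)
y = np.array([[1.0, 1.2, np.nan, 1.1]])
xnn, Vnn = filter_with_missing(y, B, Q, np.zeros(1), np.eye(1), Z, R)
```

Note that the filtered covariance grows across the missing sample (the predict step adds Q with no compensating update), which is exactly the behavior one wants from missing-value handling.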
- lds.inference.filterLDS_SS_withMissingValues_torch(y, B, Q, m0, V0, Z, R)[source]
Kalman filter implementation of the algorithm described in Shumway and Stoffer 2006.
- Param:
y: time series to be filtered
- Type:
y: torch tensor (NxT)
- Param:
B: state transition matrix
- Type:
B: torch tensor (MxM)
- Param:
Q: state noise covariance matrix
- Type:
Q: torch tensor (MxM)
- Param:
m0: initial state mean
- Type:
m0: one-dimensional torch tensor (M)
- Param:
V0: initial state covariance
- Type:
V0: torch tensor (MxM)
- Param:
Z: state to observation matrix
- Type:
Z: torch tensor (NxM)
- Param:
R: observations covariance matrix
- Type:
R: torch tensor (NxN)
- Returns:
{xnn1, Vnn1, xnn, Vnn, innov, K, Sn, logLike}: xnn1 and Vnn1 (predicted means, MxT, and covariances, MxMxT), xnn and Vnn (filtered means, MxT, and covariances, MxMxT), innov (innovations, NxT), K (Kalman gain matrices, MxNxT), Sn (innovations covariance matrices, NxNxT), logLike (data loglikelihood, float).
- Return type:
dictionary
- lds.inference.logLikeLDS_SS_withMissingValues_torch(y, B, Q, m0, V0, Z, R)[source]
Log-likelihood computation based on the Kalman filter algorithm described in Shumway and Stoffer 2006.
- Param:
y: time series whose log likelihood is computed
- Type:
y: torch tensor (NxT)
- Param:
B: state transition matrix
- Type:
B: torch tensor (MxM)
- Param:
Q: state noise covariance matrix
- Type:
Q: torch tensor (MxM)
- Param:
m0: initial state mean
- Type:
m0: one-dimensional torch tensor (M)
- Param:
V0: initial state covariance
- Type:
V0: torch tensor (MxM)
- Param:
Z: state to observation matrix
- Type:
Z: torch tensor (NxM)
- Param:
R: observations covariance matrix
- Type:
R: torch tensor (NxN)
- Returns:
logLike: data log likelihood.
- Return type:
float
- lds.inference.smoothLDS_SS(B, xnn, Vnn, xnn1, Vnn1, m0, V0)[source]
Kalman smoother implementation
- Param:
B: state transition matrix
- Type:
B: numpy matrix (MxM)
- Param:
xnn: filtered means (from Kalman filter)
- Type:
xnn: numpy array (MxT)
- Param:
Vnn: filtered covariances (from Kalman filter)
- Type:
Vnn: numpy array (MxMxT)
- Param:
xnn1: predicted means (from Kalman filter)
- Type:
xnn1: numpy array (MxT)
- Param:
Vnn1: predicted covariances (from Kalman filter)
- Type:
Vnn1: numpy array (MxMxT)
- Param:
m0: initial state mean
- Type:
m0: one-dimensional numpy array (M)
- Param:
V0: initial state covariance
- Type:
V0: numpy matrix (MxM)
- Returns:
{xnN, VnN, Jn, x0N, V0N, J0}: xnN and VnN (smoothed means, MxT, and covariances, MxMxT), Jn (smoothing gain matrices, MxMxT), x0N and V0N (smoothed initial state mean, M, and covariance, MxM), J0 (initial smoothing gain matrix, MxM).
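The backward (Rauch-Tung-Striebel) recursion that consumes these filter outputs can be sketched in a few lines of NumPy. The snippet below is a standalone re-implementation, not the library function: it omits the initial-state smoothing (x0N, V0N, J0) that smoothLDS_SS additionally computes from m0 and V0, and it builds its own toy forward pass to supply the filtered and predicted moments:

```python
import numpy as np

def smooth_lds(B, xnn, Vnn, xnn1, Vnn1):
    """Backward (RTS) smoothing pass; omits initial-state smoothing."""
    M, T = xnn.shape
    xnN = np.empty_like(xnn)
    VnN = np.empty_like(Vnn)
    xnN[:, -1] = xnn[:, -1]          # last smoothed moment equals the
    VnN[:, :, -1] = Vnn[:, :, -1]    # last filtered moment
    for n in range(T - 2, -1, -1):
        # smoothing gain: J_n = V_{n|n} B' V_{n+1|n}^{-1}
        J = Vnn[:, :, n] @ B.T @ np.linalg.inv(Vnn1[:, :, n + 1])
        xnN[:, n] = xnn[:, n] + J @ (xnN[:, n + 1] - xnn1[:, n + 1])
        VnN[:, :, n] = Vnn[:, :, n] + J @ (VnN[:, :, n + 1] - Vnn1[:, :, n + 1]) @ J.T
    return xnN, VnN

# toy forward pass (scalar model) to produce the filter outputs
B = np.array([[1.0]]); Q = np.array([[0.1]]); Z = np.array([[1.0]]); R = np.array([[0.2]])
ys = [1.0, 1.2, 0.9, 1.1]
T = len(ys)
xnn = np.empty((1, T)); Vnn = np.empty((1, 1, T))
xnn1 = np.empty((1, T)); Vnn1 = np.empty((1, 1, T))
x, V = np.zeros(1), np.eye(1)
for n, y in enumerate(ys):
    x = B @ x; V = B @ V @ B.T + Q
    xnn1[:, n] = x; Vnn1[:, :, n] = V
    K = V @ Z.T @ np.linalg.inv(Z @ V @ Z.T + R)
    x = x + K @ (np.array([y]) - Z @ x); V = V - K @ Z @ V
    xnn[:, n] = x; Vnn[:, :, n] = V

xnN, VnN = smooth_lds(B, xnn, Vnn, xnn1, Vnn1)
```

Because smoothing conditions each state estimate on the whole series rather than only on past observations, the smoothed covariances are never larger than the filtered ones, and the two coincide at the final time step.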
lds.learning module
- lds.learning.em_SS_LDS(y, B0, Q0, Z0, R0, m0, V0, max_iter=50, tol=0.0001, varsToEstimate={'B': True, 'Q': True, 'R': True, 'V0': True, 'Z': True, 'm0': True})[source]
- lds.learning.em_SS_tracking(y, B, sigma_a0, Qe, Z, R_0, m0_0, V0_0, vars_to_estimate={'R': True, 'V0': True, 'm0': True, 'sigma_a': True}, max_iter=50, tolerance_change=1e-09)[source]
- lds.learning.scipy_optimize_SS_tracking_diagV0(y, B, sigma_ax0, sigma_ay0, Qe, Z, sqrt_diag_R_0, m0_0, sqrt_diag_V0_0, max_iter=50, disp=True)[source]
- lds.learning.scipy_optimize_SS_tracking_fullV0(y, B, sigma_a0, Qe, Z, diag_R_0, m0_0, V0_0, max_iter, disp=True)[source]
- lds.learning.torch_adam_optimize_SS_tracking_diagV0(y, B, sigma_a0, Qe, Z, sqrt_diag_R_0, m0_0, sqrt_diag_V0_0, max_iter=50, lr=0.001, eps=1e-08, vars_to_estimate={'R': True, 'V0': True, 'm0': True, 'sigma_a': True})[source]
- lds.learning.torch_lbfgs_optimize_SS_tracking_diagV0(y, B, sigma_a0, Qe, Z, sqrt_diag_R_0, m0_0, sqrt_diag_V0_0, max_iter=20, lr=1.0, tolerance_grad=1e-07, tolerance_change=1e-09, n_epochs=100, tol=1e-06, line_search_fn='strong_wolfe', vars_to_estimate={'R': True, 'V0': True, 'm0': True, 'sigma_a': True}, disp=True)[source]