1. Data structures
spikes_times
: data structure containing spike times (\(\mathbf{t}_n^{(r)}\) in Eq. 6 of Duncker and Sahani [DS18]). Double list of length n_trials by n_neurons, such that spikes_times[r][n] is a list-like collection of spike times for trial r and neuron n.
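A minimal sketch of how such a double list could be assembled. The trial/neuron counts, trial duration, and Poisson-generated spike times below are purely illustrative, not part of the svGPFA API.

```python
import torch

n_trials, n_neurons = 2, 3
trial_duration = 20.0  # seconds, illustrative

# spikes_times[r][n] is a 1D tensor of spike times for trial r and neuron n.
# Spike times are drawn from a homogeneous Poisson process here only to show
# the expected nesting; real spike times would come from recordings.
spikes_times = [
    [torch.sort(torch.rand(int(torch.poisson(torch.tensor(10.0)).item()))
                * trial_duration).values
     for _ in range(n_neurons)]
    for _ in range(n_trials)
]

print(len(spikes_times), len(spikes_times[0]))  # n_trials, n_neurons
print(spikes_times[0][0][:5])                   # first spike times of trial 0, neuron 0
```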
ind_points_locs
: data structure containing inducing point locations (\(\mathbf{z}_k^{(r)}\) in Eq. 3 of Duncker and Sahani [DS18]). List of length n_latents of PyTorch tensors of size (n_trials, n_ind_points, 1), such that ind_points_locs[k][r, :, 0] gives the inducing point locations for trial r and latent k.
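A minimal sketch of building this list, with inducing points placed equally spaced over each trial; the sizes, per-trial durations, and the uniform placement are illustrative assumptions.

```python
import torch

n_latents, n_trials, n_ind_points = 2, 3, 9
trial_durations = [20.0, 22.0, 18.0]  # hypothetical per-trial lengths (seconds)

# One tensor of shape (n_trials, n_ind_points, 1) per latent, with inducing
# point locations equally spaced in [0, trial_duration].
ind_points_locs = [
    torch.stack([torch.linspace(0.0, T, n_ind_points).unsqueeze(-1)
                 for T in trial_durations])
    for _ in range(n_latents)
]

print(ind_points_locs[0].shape)     # torch.Size([3, 9, 1])
print(ind_points_locs[0][1, :, 0])  # inducing point locations for trial 1, latent 0
```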
leg_quad_points and leg_quad_weights
: data structures containing the Legendre quadrature points and weights, respectively, used for the calculation of the integral in the first term of the expected posterior log-likelihood in Eq. 7 of Duncker and Sahani [DS18]. Both leg_quad_points and leg_quad_weights are tensors of size (n_trials, n_quad_elem, 1), such that leg_quad_points[r, :, 0] and leg_quad_weights[r, :, 0] give the quadrature points and weights, respectively, of trial r.
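A minimal sketch of computing Gauss-Legendre quadrature nodes and weights on each trial interval [0, T_r] with NumPy and packing them into the shapes above; the number of quadrature elements and the trial durations are illustrative.

```python
import numpy as np
import torch

n_trials, n_quad_elem = 3, 80
trial_durations = [20.0, 22.0, 18.0]  # hypothetical

# Gauss-Legendre nodes/weights on [-1, 1], rescaled to [0, T_r] for each trial.
x, w = np.polynomial.legendre.leggauss(n_quad_elem)
points = [0.5 * T * (x + 1.0) for T in trial_durations]  # affine map [-1, 1] -> [0, T]
weights = [0.5 * T * w for T in trial_durations]         # Jacobian of the affine map

leg_quad_points = torch.tensor(np.stack(points)).unsqueeze(-1)   # (n_trials, n_quad_elem, 1)
leg_quad_weights = torch.tensor(np.stack(weights)).unsqueeze(-1)

# The integral of f over [0, T_r] is approximated by sum_i weights[r, i, 0] * f(points[r, i, 0]);
# e.g. with f(t) = t the result should be close to 20**2 / 2 = 200 for trial 0.
print((leg_quad_weights[0, :, 0] * leg_quad_points[0, :, 0]).sum())
```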
var_mean
: mean of the variational distribution \(q(\mathbf{u}_k^{(r)})\) (\(\mathbf{m}_k^{(r)}\) in the paragraph above Eq. 4 of Duncker and Sahani [DS18]). List of length n_latents of PyTorch tensors of size (n_trials, n_ind_points, 1), such that var_mean[k][r, :, 0] gives the variational mean for trial r and latent k.
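A minimal sketch of allocating this structure; the sizes are illustrative and the all-zeros initial value is only an assumption for the example.

```python
import torch

n_latents, n_trials, n_ind_points = 2, 3, 9

# One (n_trials, n_ind_points, 1) tensor of variational means per latent.
var_mean = [torch.zeros(n_trials, n_ind_points, 1) for _ in range(n_latents)]
print(var_mean[0][1, :, 0])  # variational mean for trial 1, latent 0
```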
var_chol
: vectorized Cholesky factor of the covariance of the variational distribution \(q(\mathbf{u}_k^{(r)})\) (\(S_k^{(r)}\) in the paragraph above Eq. 4 of Duncker and Sahani [DS18]). List of length n_latents of PyTorch tensors of size (n_trials, n_ind_points * (n_ind_points + 1)/2, 1), such that var_chol[k][r, :, 0] gives the vectorized Cholesky factor of the variational covariance for trial r and latent k.
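A minimal sketch of how a lower-triangular Cholesky factor can be flattened to, and recovered from, a column of length n_ind_points * (n_ind_points + 1)/2; the row-major ordering of the lower triangle used here is an assumption, not necessarily the ordering used inside svGPFA.

```python
import torch

n_ind_points = 4
tril_rows, tril_cols = torch.tril_indices(n_ind_points, n_ind_points)

# A valid lower-triangular Cholesky factor (the identity) for one trial and latent.
L = torch.eye(n_ind_points)

# Flatten the lower triangle into a (n_ind_points * (n_ind_points + 1)/2, 1) column ...
chol_vec = L[tril_rows, tril_cols].unsqueeze(-1)
print(chol_vec.shape)  # torch.Size([10, 1])

# ... and rebuild the factor and the covariance S = L L^T from it.
L_rebuilt = torch.zeros(n_ind_points, n_ind_points)
L_rebuilt[tril_rows, tril_cols] = chol_vec[:, 0]
S = L_rebuilt @ L_rebuilt.T
print(torch.allclose(L, L_rebuilt), torch.allclose(S, torch.eye(n_ind_points)))
```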
emb_post_mean_quad
: embedding posterior mean (\(\nu_n^{(r)}(t)\) in Eq. 5 of Duncker and Sahani [DS18]) evaluated at the Legendre quadrature points. List of length n_latents of PyTorch tensors of size (n_trials, n_quad_elem, 1), such that emb_post_mean_quad[k][r, :, 0] gives the embedding posterior mean for trial r and latent k, evaluated at leg_quad_points[r, :, 0].
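For shape bookkeeping only, a hedged sketch of the per-latent sparse-GP predictive mean \(K_{tz}K_{zz}^{-1}\mathbf{m}\) at the quadrature points, which has the layout described above; all inputs are random stand-ins and this is not the exact computation performed by svGPFA.

```python
import torch

n_trials, n_quad_elem, n_ind_points = 3, 80, 9

# Hypothetical inputs with the shapes documented in this section.
Ktz_k = torch.rand(n_trials, n_quad_elem, n_ind_points)   # kappa_k(t, z)
Kzz_k = torch.eye(n_ind_points).repeat(n_trials, 1, 1)    # kappa_k(z, z), identity for simplicity
var_mean_k = torch.rand(n_trials, n_ind_points, 1)        # m_k^{(r)}

# Per-latent predictive mean at the quadrature points: Ktz @ Kzz^{-1} @ m.
emb_post_mean_quad_k = Ktz_k @ torch.linalg.solve(Kzz_k, var_mean_k)
print(emb_post_mean_quad_k.shape)  # torch.Size([3, 80, 1])
```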
emb_post_var_quad
: embedding posterior variance (\(\sigma_n^{(r)}(t, t)\) in Eq. 5 of Duncker and Sahani [DS18]) evaluated at the Legendre quadrature points. List of length n_latents of PyTorch tensors of size (n_trials, n_quad_elem, 1), such that emb_post_var_quad[k][r, :, 0] gives the embedding posterior variance for trial r and latent k, evaluated at leg_quad_points[r, :, 0].
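In the same spirit, a hedged sketch of the per-latent sparse-GP predictive variance at the quadrature points, \(\kappa_k(t, t) - \kappa_k(t, \mathbf{z})K_{zz}^{-1}(K_{zz} - S)K_{zz}^{-1}\kappa_k(\mathbf{z}, t)\); only the output shape is meant to match the description above, and the inputs are random stand-ins.

```python
import torch

n_trials, n_quad_elem, n_ind_points = 3, 80, 9

Ktt_k = torch.ones(n_trials, n_quad_elem)                      # kappa_k(t, t), unit-variance kernel
Ktz_k = 0.1 * torch.rand(n_trials, n_quad_elem, n_ind_points)  # kappa_k(t, z)
Kzz_k = torch.eye(n_ind_points).repeat(n_trials, 1, 1)         # identity for simplicity
S_k = 0.5 * torch.eye(n_ind_points).repeat(n_trials, 1, 1)     # variational covariance

# Predictive variance kept per quadrature point (diagonal only).
A = torch.linalg.solve(Kzz_k, Ktz_k.transpose(1, 2))           # Kzz^{-1} kappa_k(z, t)
emb_post_var_quad_k = (Ktt_k - ((Kzz_k - S_k) @ A * A).sum(dim=1)).unsqueeze(-1)
print(emb_post_var_quad_k.shape)  # torch.Size([3, 80, 1])
```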
Kzz
: kernel covariance function evaluated at the inducing point locations (\(K_{zz}^{(kr)}\) in Eq. 5 of Duncker and Sahani [DS18]). List of length n_latents of PyTorch tensors of size (n_trials, n_ind_points, n_ind_points), such that Kzz[k][r, i, j] is the kernel covariance function for latent k evaluated at the ith and jth inducing point locations for latent k and trial r (Kzz[k][r, i, j] = \(\kappa_k(\mathbf{z}_k^{(r)}[i], \mathbf{z}_k^{(r)}[j])\)).
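A minimal sketch of evaluating an exponential quadratic kernel at hypothetical inducing point locations to obtain tensors with this layout; the kernel choice and its unit lengthscale are illustrative assumptions.

```python
import torch

def expquad_kernel(x, y, lengthscale=1.0):
    # x: (..., N, 1), y: (..., M, 1) -> (..., N, M)
    d = x - y.transpose(-2, -1)
    return torch.exp(-0.5 * (d / lengthscale) ** 2)

n_latents, n_trials, n_ind_points = 2, 3, 9
ind_points_locs = [torch.rand(n_trials, n_ind_points, 1) * 20.0 for _ in range(n_latents)]

# Kzz[k][r, i, j] = kappa_k(z_k^{(r)}[i], z_k^{(r)}[j])
Kzz = [expquad_kernel(z, z) for z in ind_points_locs]
print(Kzz[0].shape)  # torch.Size([3, 9, 9])
```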
Kzz_inv
: lower-triangular Cholesky factors (\(L^{(kr)}\)) of the kernel covariance matrices evaluated at the inducing point locations (\(K_{zz}^{(kr)}=L^{(kr)}\left(L^{(kr)}\right)^\intercal\)). List of length n_latents of PyTorch tensors of size (n_trials, n_ind_points, n_ind_points), such that Kzz_inv[k][r, :, :] is the lower-triangular Cholesky factor \(L^{(kr)}\).
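A minimal sketch of computing such lower-triangular Cholesky factors with torch.linalg.cholesky, adding a small diagonal jitter for numerical stability; the kernel, lengthscale, and jitter value are illustrative assumptions.

```python
import torch

n_trials, n_ind_points = 3, 9
z = torch.linspace(0.0, 20.0, n_ind_points).reshape(1, -1, 1).repeat(n_trials, 1, 1)
Kzz_kr = torch.exp(-0.5 * (z - z.transpose(-2, -1)) ** 2)  # exponential quadratic kernel, unit lengthscale

# Lower-triangular factors L^{(kr)} such that Kzz^{(kr)} = L L^T.
jitter = 1e-6 * torch.eye(n_ind_points)
L = torch.linalg.cholesky(Kzz_kr + jitter)
print(L.shape)  # torch.Size([3, 9, 9])
print(torch.allclose(L @ L.transpose(-2, -1), Kzz_kr + jitter, atol=1e-5))
```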
Ktz
: kernel covariance function evaluated at the quadrature points and at the inducing point locations (\(\kappa_k(t, \mathbf{z}_k^{(r)})\) in Eq. 5 of Duncker and Sahani [DS18]). List of length n_latents of PyTorch tensors of size (n_trials, n_quad_elem, n_ind_points), such that Ktz[k][r, i, j] is the kernel covariance function for latent k evaluated at the ith quadrature time of trial r (leg_quad_points[r, i, 0]) and at the jth inducing point location for trial r and latent k (Ktz[k][r, i, j] = \(\kappa_k(\text{leg_quad_points}[r, i, 0], \mathbf{z}_k^{(r)}[j])\)).
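A minimal sketch of evaluating the same hypothetical exponential quadratic kernel between quadrature times and inducing point locations; the stand-ins for leg_quad_points and ind_points_locs[k] are illustrative.

```python
import torch

n_trials, n_quad_elem, n_ind_points = 3, 80, 9
t = torch.linspace(0.0, 20.0, n_quad_elem).reshape(1, -1, 1).repeat(n_trials, 1, 1)   # stand-in for leg_quad_points
z = torch.linspace(0.0, 20.0, n_ind_points).reshape(1, -1, 1).repeat(n_trials, 1, 1)  # stand-in for ind_points_locs[k]

# Ktz[k][r, i, j] = kappa_k(t_i^{(r)}, z_k^{(r)}[j]); exponential quadratic kernel, unit lengthscale.
Ktz_k = torch.exp(-0.5 * (t - z.transpose(-2, -1)) ** 2)
print(Ktz_k.shape)  # torch.Size([3, 80, 9])
```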
Ktt
: kernel covariance function evaluated at the quadrature points (\(\kappa_k(t, t)\) in Eq. 5 of Duncker and Sahani [DS18]). Note: svGPFA does not need to evaluate \(\kappa_k(t, t')\) for \(t\neq t'\). It only needs to evaluate \(\kappa_k(t, t)\) to calculate the variance of the posterior embedding \(\sigma^2_n(t, t)\), which is used to compute \(\mathbb{E}_{q(h_n^{(r)})}\left[g(h_n^{(r)}(t))\right]\). List of length n_latents of PyTorch tensors of size (n_trials, n_quad_elem, n_latents), such that Ktt[k][r, i, k] is the kernel variance function for latent k evaluated at the ith quadrature time of trial r (leg_quad_points[r, i, 0]). That is, Ktt[k][r, i, k] = \(\kappa_k(\text{leg_quad_points[r, i, 0]}, \text{leg_quad_points[r, i, 0]})\).
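Finally, a minimal sketch of the kernel variances at the quadrature points; for a stationary kernel such as the exponential quadratic, \(\kappa_k(t, t)\) is a constant, so the tensor reduces to a constant fill with the layout above. The sizes, kernel, and variance value are illustrative assumptions.

```python
import torch

n_latents, n_trials, n_quad_elem = 2, 3, 80
kernel_variance = 1.0  # kappa_k(t, t) for a stationary, unit-variance kernel

# One (n_trials, n_quad_elem, n_latents) tensor per latent; entry [r, i, k] holds
# kappa_k evaluated at the ith quadrature time of trial r.
Ktt = [torch.full((n_trials, n_quad_elem, n_latents), kernel_variance) for _ in range(n_latents)]
print(Ktt[0].shape)     # torch.Size([3, 80, 2])
print(Ktt[0][1, 0, 0])  # kappa_0 at the first quadrature time of trial 1
```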