3. Low-level interface
The class SVGPFAModelFactory creates an svGPFA model, and an instance of the class SVEM optimises its parameters. Please refer to the svGPFA class and interaction diagrams.
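In outline, one builds a model with SVGPFAModelFactory and then optimises it with SVEM. The sketch below is hypothetical: the method and argument names are assumptions made for illustration and should be checked against the class documentation referenced above.

```python
# Hypothetical outline only; method and argument names are assumptions,
# not verified svGPFA API.
import svGPFA.stats.svGPFAModelFactory
import svGPFA.stats.svEM

kernels = ...      # one kernel per latent (see the Kernel class below)
model = svGPFA.stats.svGPFAModelFactory.SVGPFAModelFactory.buildModelPyTorch(
    kernels=kernels)              # assumed factory method name
model.setParamsAndData(...)       # assumed: initial parameters, spike times

svEM = svGPFA.stats.svEM.SVEM()   # maximises the lower bound of Eq. 4
svEM.maximize(model=model, optimParams=...)  # assumed method/arguments
```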
There is a one-to-one mapping between classes in the svGPFA.stats package and equations in Duncker and Sahani [DS18].
Class SVLowerBound corresponds to the right-hand side of Eq. 4. This class uses the ExpectedLogLikelihood and KLDivergence classes, described next.

The abstract class ExpectedLogLikelihood corresponds to the first term of the right-hand side of Eq. 4.

The abstract subclass PointProcessELL implements the functionality of ExpectedLogLikelihood for point-process observations, and corresponds to Eq. 7. If the link function (i.e., g in Eq. 7) is the exponential function, then the one-dimensional integral in the first term of Eq. 7 can be solved analytically (concrete subclass PointProcessELLExpLink). For other link functions, this integral can be solved using Gaussian quadrature (concrete subclass PointProcessELLQuad).
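The following sketch illustrates the two strategies for the one-dimensional integral \(\mathbb{E}[g(h)]\) with \(h\sim\mathcal{N}(\mu,\sigma^2)\): the closed form available for the exponential link, and Gauss-Hermite quadrature for an arbitrary link. It is a minimal illustration of the underlying calculation, not the PointProcessELL implementation itself.

```python
import numpy as np

def expected_link_exp(mu, sigma2):
    # E[exp(h)] for h ~ N(mu, sigma2) has the closed form exp(mu + sigma2/2),
    # which is what makes the exponential link analytically tractable
    return np.exp(mu + sigma2 / 2.0)

def expected_link_quad(g, mu, sigma2, n_quad=50):
    # E[g(h)] for h ~ N(mu, sigma2) by Gauss-Hermite quadrature:
    # E[g(h)] = (1/sqrt(pi)) * sum_i w_i g(mu + sqrt(2 sigma2) x_i)
    x, w = np.polynomial.hermite.hermgauss(n_quad)
    return (w * g(mu + np.sqrt(2.0 * sigma2) * x)).sum() / np.sqrt(np.pi)

mu, sigma2 = 0.3, 0.5
print(expected_link_exp(mu, sigma2))           # analytic: exp(0.55)
print(expected_link_quad(np.exp, mu, sigma2))  # quadrature agrees closely
```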
Similarly, the subclasses PoissonELL, PoissonELLExpLink and PoissonELLQuad implement the functionality of ExpectedLogLikelihood for Poisson observations.

The concrete class KLDivergence corresponds to the second term of the right-hand side of Eq. 4 and implements the KL divergence between the prior on inducing points, \(p(\mathbf{u}_k^{(r)})\), and the posterior on inducing points, \(q(\mathbf{u}_k^{(r)})\).
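Both distributions are Gaussian, so this KL divergence has a closed form. A minimal sketch, assuming a posterior \(q(\mathbf{u})=\mathcal{N}(\mathbf{m},S)\) and a zero-mean prior \(p(\mathbf{u})=\mathcal{N}(\mathbf{0},K_{zz})\) as in Duncker and Sahani [DS18]; the stand-in matrices below are illustrative only.

```python
import numpy as np

def gaussian_kl(m, S, K):
    # KL( N(m, S) || N(0, K) ) between M-dimensional Gaussians:
    # 0.5 * ( tr(K^{-1} S) + m^T K^{-1} m - M + log|K| - log|S| )
    M = m.shape[0]
    L_K = np.linalg.cholesky(K)
    L_S = np.linalg.cholesky(S)
    trace_term = np.trace(np.linalg.solve(K, S))
    alpha = np.linalg.solve(L_K, m)  # alpha^T alpha = m^T K^{-1} m
    logdet_K = 2.0 * np.log(np.diag(L_K)).sum()
    logdet_S = 2.0 * np.log(np.diag(L_S)).sum()
    return 0.5 * (trace_term + alpha @ alpha - M + logdet_K - logdet_S)

rng = np.random.default_rng(0)
M = 5
A = rng.normal(size=(M, M))
K = A @ A.T + M * np.eye(M)   # stand-in for the prior covariance K_zz
S = 0.5 * np.eye(M)           # stand-in posterior covariance S_k^{(r)}
m = rng.normal(size=M)        # stand-in posterior mean m_k^{(r)}
print(gaussian_kl(m, S, K))   # non-negative; zero iff m = 0 and S = K
```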
ExpectedLogLikelihood uses SVEmbedding, which calculates the mean and variance of the svGPFA embedding (\(h_n^{(r)}\) in Eq. 1), given in Eq. 5. SVEmbedding is an abstract class, which has LinearSVEmbedding as an abstract subclass. Two concrete subclasses of LinearSVEmbedding are provided, which optimise the calculation of the embedding for its two different uses in Eq. 7, described next.
The first term of the right-hand side of Eq. 7 requires the calculation of the embedding at sample times on a grid, which are the same for all neurons. This calculation is implemented in LinearSVEmbeddingAllTimes.

The second term of the right-hand side of Eq. 7 requires the calculation of the embedding at spike times, which are different for each neuron. This calculation is implemented in LinearSVEmbeddingAssocTimes.
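Both subclasses evaluate the same linear mapping of Eq. 5: with a loading matrix C and offset d, the embedding mean is a linear combination of the latent means, and, because the variational posterior factorises across latents, the embedding variance combines the latent variances with squared loadings. A minimal sketch with illustrative stand-in values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_latents, n_times = 20, 3, 100

C = rng.normal(size=(n_neurons, n_latents))  # loading matrix C
d = rng.normal(size=(n_neurons, 1))          # offset vector d
latent_means = rng.normal(size=(n_latents, n_times))            # nu_k(t)
latent_vars = rng.uniform(0.1, 1.0, size=(n_latents, n_times))  # sigma_k^2(t)

# embedding mean: linear combination of latent means plus offset
emb_means = C @ latent_means + d      # shape (n_neurons, n_times)
# embedding variance: latent variances weighted by squared loadings
emb_vars = (C ** 2) @ latent_vars     # shape (n_neurons, n_times)
```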
SVEmbedding uses SVPosteriorOnLatents, which calculates the mean and variance of the latent variables, \(x_k^{(r)}\) in Eq. 1. These means and variances are not described by their own equations in Duncker and Sahani [DS18], but are embedded in Eq. 5. They are

\[\begin{split}\nu_k^{(r)}(t) &= \mathbf{\kappa}_k(t,\mathbf{z}_k)K_{zz}^{(k)^{-1}}\mathbf{m}_k^{(r)}\\
\sigma_k^{(r)}(t) &= \kappa_k(t,t)+\mathbf{\kappa}_k(t,\mathbf{z}_k)\left(K_{zz}^{(k)^{-1}}S_k^{(r)}K_{zz}^{(k)^{-1}}-K_{zz}^{(k)^{-1}}\right)\mathbf{\kappa}_k(\mathbf{z}_k,t)\end{split}\]

SVPosteriorOnLatents is an abstract class. As above, two concrete subclasses are provided: SVPosteriorOnLatentsAllTimes computes the means and variances on a grid of time points, and SVPosteriorOnLatentsAssocTimes calculates these statistics at spike times.

SVPosteriorOnLatents uses KernelsMatricesStore, which stores kernel matrices between inducing points, \(K_{zz}\), between time points, \(K_{tt}\), and between time points and inducing points, \(K_{tz}\). KernelsMatricesStore is an abstract class with two subclasses. IndPointsLocsKMS is a concrete subclass of KernelsMatricesStore that stores kernel matrices between inducing points, along with their Cholesky decompositions. IndPointsLocsAndTimesKMS is an abstract subclass of KernelsMatricesStore that stores covariance matrices between time points and between time points and inducing points. As above, IndPointsLocsAndAllTimesKMS and IndPointsLocsAndAssocTimesKMS are concrete subclasses of IndPointsLocsAndTimesKMS for time points on a grid and for spike times, respectively.

KernelsMatricesStore uses Kernel, an abstract class for constructing kernel matrices. Concrete subclasses construct kernel matrices for specific types of kernels (e.g., ExponentialQuadraticKernel and PeriodicKernel).
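A minimal sketch tying these classes together for one latent and one trial: it builds \(K_{zz}\), \(K_{tz}\) and the diagonal of \(K_{tt}\) with an exponential quadratic kernel, and then evaluates the mean and variance equations above. The variational parameters \(\mathbf{m}_k^{(r)}\) and \(S_k^{(r)}\) are illustrative stand-ins, and linear solves against \(K_{zz}\) replace the explicit inverses (IndPointsLocsKMS stores Cholesky decompositions of these matrices).

```python
import numpy as np

def exp_quad_kernel(s, t, lengthscale=1.0):
    # exponential quadratic (RBF) kernel matrix between time vectors s and t
    return np.exp(-0.5 * ((s[:, None] - t[None, :]) / lengthscale) ** 2)

rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 10)    # inducing-point locations z_k
t = np.linspace(0.0, 1.0, 100)   # sample times

Kzz = exp_quad_kernel(z, z) + 1e-6 * np.eye(len(z))  # jitter for stability
Ktz = exp_quad_kernel(t, z)
ktt_diag = np.ones(len(t))       # kappa_k(t, t) = 1 for this kernel

m = rng.normal(size=len(z))      # stand-in variational mean m_k^{(r)}
A = rng.normal(size=(len(z), len(z)))
S = A @ A.T + np.eye(len(z))     # stand-in variational covariance S_k^{(r)}

# posterior mean: kappa(t, z) Kzz^{-1} m
nu = Ktz @ np.linalg.solve(Kzz, m)
# posterior variance, using Kzz^{-1} S Kzz^{-1} - Kzz^{-1}
#                           = Kzz^{-1} (S - Kzz) Kzz^{-1}
B = np.linalg.solve(Kzz, (S - Kzz) @ np.linalg.solve(Kzz, Ktz.T))
sigma2 = ktt_diag + np.einsum("ij,ji->i", Ktz, B)
```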