[Tutorial with references and code]
The original. Assumes independent GP latents and an LNP (linear-nonlinear-Poisson) observation model. Inference is comparatively easy. Good implementation in Elephant.
[Good PyTorch version]
Chethan Pandarinath, David Sussillo and co.
This is the big one. What's described is not so much one model as an entire family of models around the same theme. The most vanilla version of LFADS is a VAE which compresses the spike data into a single start state, which is then decompressed by a free-running RNN. Other variants add a controller and inferred control inputs for more flexibility. The labels of the data can also be used to guide the latent representation. The most readable introduction is in this tutorial: https://github.com/google-research/computation-thru-dynamics/blob/master/notebooks/LFADS%20Tutorial.ipynb
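A minimal sketch of the vanilla version in PyTorch, to make the shape of the model concrete: a bidirectional encoder RNN compresses the spike trains into a Gaussian posterior over a single start state, and a free-running decoder RNN unrolls from that state to produce Poisson firing rates. Layer sizes and architectural details here are illustrative assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class VanillaLFADS(nn.Module):
    """Sketch of the most vanilla LFADS: encode spikes -> posterior over a
    single initial state g0 -> free-running generator RNN -> Poisson rates.
    Sizes are hypothetical; the real model has more machinery (controller,
    inferred inputs, coordinated dropout, etc.)."""
    def __init__(self, n_neurons, enc_dim=64, gen_dim=64, factor_dim=8, latent_dim=32):
        super().__init__()
        self.encoder = nn.GRU(n_neurons, enc_dim, bidirectional=True, batch_first=True)
        self.to_mu = nn.Linear(2 * enc_dim, latent_dim)
        self.to_logvar = nn.Linear(2 * enc_dim, latent_dim)
        self.g0_map = nn.Linear(latent_dim, gen_dim)
        self.generator = nn.GRUCell(1, gen_dim)        # free-running: dummy input
        self.to_factors = nn.Linear(gen_dim, factor_dim)
        self.to_lograte = nn.Linear(factor_dim, n_neurons)

    def forward(self, spikes):                         # spikes: (batch, T, n_neurons)
        B, T, _ = spikes.shape
        _, h = self.encoder(spikes)                    # h: (2, B, enc_dim)
        h = torch.cat([h[0], h[1]], dim=-1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        g = torch.tanh(self.g0_map(z))                 # initial generator state g0
        dummy = spikes.new_zeros(B, 1)
        rates = []
        for _ in range(T):                             # free-running unroll
            g = self.generator(dummy, g)
            f = self.to_factors(g)                     # low-dim "factors"
            rates.append(torch.exp(self.to_lograte(f)))
        return torch.stack(rates, dim=1), mu, logvar   # Poisson rates + posterior
```

Training would maximize the ELBO: Poisson log-likelihood of the spikes under `rates`, minus the KL between the `(mu, logvar)` posterior and the prior over g0.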
This paper is not about brains, but it's necessary background for pi-VAE. It introduces a volume-preserving variant of RealNVP which uses labels u as side information. Volume-preserving means the determinant of the Jacobian is 1, which is imposed by removing the mean from the log-scaling weights s. Where do the labels u get mixed in? They simply get appended to the inputs x; then the block proceeds as usual, mixing the first half of the inputs with the second half in RealNVP style:
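A sketch of one such conditional, volume-preserving coupling block, under the assumptions above: u is concatenated onto the first half, a small net predicts scale and shift for the second half, and the log-scales are mean-centered so they sum to zero, making the Jacobian determinant exactly 1. The net architecture and sizes are illustrative.

```python
import torch
import torch.nn as nn

class VolumePreservingCoupling(nn.Module):
    """Hypothetical GIN-style conditional coupling block. The label u is
    appended to x1; a subnet predicts log-scales s and shifts t for x2;
    mean-centering s forces sum(s) = 0, hence |det J| = exp(sum(s)) = 1."""
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def forward(self, x, u):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.net(torch.cat([x1, u], dim=-1)).chunk(2, dim=-1)
        s = s - s.mean(dim=-1, keepdim=True)   # zero-mean log-scales -> |det J| = 1
        return torch.cat([x1, x2 * torch.exp(s) + t], dim=-1)

    def inverse(self, y, u):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = self.net(torch.cat([y1, u], dim=-1)).chunk(2, dim=-1)
        s = s - s.mean(dim=-1, keepdim=True)
        return torch.cat([y1, (y2 - t) * torch.exp(-s)], dim=-1)
```

Because x1 passes through untouched and the subnet only ever sees (x1, u), the block is invertible in closed form, as the `inverse` method shows.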
They have to play a few tricks in order to scale this to large images. In particular, the mapping function g^-1: x → z downsamples images early using the standard trick of turning each 2x2 spatial block into a 1x1 block with 4 times the channels, then factoring out 3/4 of the resulting channels directly as latents. The model remains invertible in that the full set of latents, which is the same size as the original images, can be recovered by concatenation (similar to Glow and RealNVP):
To facilitate this process they've created FrEIA, a PyTorch-based software package, since constructing complex invertible models and keeping track of all the loose ends quickly becomes unwieldy.
The likelihood to be optimized is:
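The equation did not survive extraction here, but for a volume-preserving flow the change-of-variables formula simplifies, because the log-determinant term vanishes. A sketch under that assumption, with $g^{-1}: x \to z$ the invertible map and $p_Z(\cdot \mid u)$ the conditional base density over the latents:

```latex
\log p_\theta(x \mid u)
  = \log p_Z\!\big(g^{-1}(x) \mid u\big)
    + \log\left|\det \frac{\partial g^{-1}(x)}{\partial x}\right|
  = \log p_Z\!\big(g^{-1}(x) \mid u\big),
```

since $\left|\det \partial g^{-1}/\partial x\right| = 1$ by construction. Maximizing the likelihood thus reduces to making the mapped latents probable under the label-conditioned base distribution.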