Author: Sina
-
RL Cheatsheet for Olfactory Navigation
Olivia is doing some cool olfactory navigation experiments and I thought collecting the following concepts from RL would be useful for her (and for me).
-
Sequential Sigmoidal Factor Analysis
In Harris et al., the authors study how the decoding weights that read out the identity of the odour stimuli from neural activity change over time. To describe these changes, Mihaly has developed a parsimonious model called the “Drift Decoder”, which models the weights as the sum of an initial value, plus a temporal…
-
A shared code for perception and imagery in ventral temporal cortex
Some of the highlights of the Science paper by Wadia et al. from the Rutishauser and Tsao labs, comparing how ventral temporal cortex represents perceived vs. imagined images.
-
Relating Gain and Crowding in the Diagonal Model
When we linearized the diagonal model we determined the gains relative to unity as $$ \bdelta = (\GG^T \GG + \lambda \II)^{-1} \GG^T \rr,$$ where $\rr$ is the vectorized residual $\SS - \XX^T \XX$. We’d like to not just report these numbers, but explain them. Complexity in the explanation derives from the correlations in the…
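As a minimal NumPy sketch of this solve, assuming the linearized design matrix $\GG$ is given (its construction is specific to the diagonal model and not reproduced here):

```python
import numpy as np

def gain_deltas(G, S, X, lam):
    """Gains relative to unity: delta = (G^T G + lam I)^{-1} G^T r,
    where r = vec(S - X^T X). G is the (assumed given) linearized design matrix."""
    r = (S - X.T @ X).ravel()                # vectorized residual
    A = G.T @ G + lam * np.eye(G.shape[1])   # regularized normal equations
    return np.linalg.solve(A, G.T @ r)
```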
-
Linearizing Covariance for the Free Model, Part III
We consider the path of connectivity solutions and, working in the high-regularization regime, approximate the free connectivity by the first derivative of the regularization path evaluated at infinite regularization.
-
Cosyne 2026
My running notes on Cosyne 2026.
-
NeurIPS 2025 Day 3
My notes on Day 3 of NeurIPS 2025, posted three months later as is.
-
NeurIPS 2025 Day 1
My notes on Day 1 of NeurIPS 2025. I wanted to wait till they were more complete, but it’s three months later now and they’re useful as is!
-
NeurIPS 2025 Day 2
My notes on Day 2 of NeurIPS 2025.
-
Linearizing Covariance for the Free Model, Part II
I extend the linearization to include non-linear diagonal terms, but the simplest approximation, at least, doesn’t capture the large values we’re after.
-
Linearizing the Covariance Loss for the Free Model
I linearize the covariance for the Free model, and find that I need to include an additional diagonal component.
-
Deep Linear Networks Learn Hierarchical Structure
Running notes on Saxe et al.’s “A mathematical theory of semantic development in deep neural networks.”
-
Linearizing the Covariance Loss
We’re after insight, not an exact solution. ChatGPT had a good suggestion to linearize the loss around $\zz = 1$. In this post we do that.
-
The Diagonal Model with Centering
We’re going to try to make sense of the solutions to minimizing the following $$L(\zz) = {1 \over 2} \|\XX^T \ZZ^T \JJ \ZZ \XX - \SS\|_F^2 + {\lambda \over 2}\|\zz - \bone\|_2^2$$
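A sketch of this loss in NumPy, under two assumptions not stated in the excerpt: $\ZZ = \mathrm{diag}(\zz)$ and $\JJ$ is the centering matrix $\II - \tfrac{1}{n}\bone\bone^T$:

```python
import numpy as np

def centered_diagonal_loss(z, X, S, lam):
    """L(z) = 0.5 ||X^T Z^T J Z X - S||_F^2 + 0.5 lam ||z - 1||_2^2,
    assuming Z = diag(z) and J = I - (1/n) 1 1^T (centering matrix)."""
    n = X.shape[0]
    Z = np.diag(z)
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix (assumed)
    resid = X.T @ Z.T @ J @ Z @ X - S
    return 0.5 * np.sum(resid**2) + 0.5 * lam * np.sum((z - 1.0)**2)
```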
-
Plume Marginalization
I work out the expression for the likelihood after marginalizing out flow changes.
-
When to Smell in Stereo?
We show that stereo-olfaction beats mono-olfaction when searching surfaces for olfactory edges.
-
Unary vs. Binary Expressions of Independence
I discuss the unary and binary expressions of independence and how their meanings are slightly different.
-
Markov Blankets in Bayesian Networks
We determine how to find the Markov boundary of a node by first looking at some examples, then using a formal derivation.
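For reference (the excerpt doesn’t state the result), the standard answer for a Bayesian network is that the Markov boundary of a node $X$ is its parents, its children, and its children’s other parents: $$\mathrm{MB}(X) = \mathrm{Pa}(X) \cup \mathrm{Ch}(X) \cup \big(\mathrm{Pa}(\mathrm{Ch}(X)) \setminus \{X\}\big).$$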
-
Synchronization with Bimodal Spines
I update the activity-dependent synchrony model to make spines bimodal, increasing their inhibition when their parent GC is active.
-
Activity-Dependent Synchronization of Linear Integrate and Fire Units
We’re interested in activity-dependent synchronization. This is where the inhibition that a mitral cell receives from a granule cell spine requires that spine to have been previously depolarized enough to activate the NMDA channels that are required (via both the additional depolarization of the spine and increased Ca2+ influx they provide) to cause vesicle release.…