We saw before that synchronization is not necessary for odour inference when the inputs are homogeneous Poisson processes. All that matters is the firing rate of each channel, not the pattern of firing. This is likely due to the homogeneity of the Poisson processes: cofluctuations carry no information in that context.
However, real-world odours arrive in plumes. In that setting, cofluctuations among channels are meaningful: they indicate excitation driven by a common odour source. It seems natural that the olfactory system would be sensitive to such cofluctuations, and that sensitivity may show up as synchrony.
To this end, let’s suppose that each odour, indexed by $j$, has a time-dependent concentration profile determined by a corresponding plume, $f_j$, producing a time-dependent signal $$z_j(t) = x_j f_j(t).$$ Note the magnitude ambiguity here: scaling $x_j$ up and $f_j$ down by the same factor leaves $z_j(t)$ unchanged.
This time-dependent odour signal produces a time-dependent excitation of each glomerulus. We’ll assume the mean rate of this excitation is linear in the odour signals, weighted by the affinities, so $$ \EE y_i(t) = \sum_j A_{ij} x_j f_j(t).$$
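As a concrete sketch of this generative model (all sizes, affinities, and plume profiles here are made-up illustrative values; the plume is just smoothed positive noise standing in for a real one):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 20, 5, 100           # glomeruli, odours, time steps (illustrative sizes)
A = rng.uniform(size=(M, N))   # affinities A_ij (hypothetical values)
x = rng.uniform(0.5, 2.0, N)   # odour concentrations x_j (hypothetical values)

# Hypothetical plume profiles f_j(t): positive, temporally correlated noise.
f = np.abs(np.cumsum(rng.normal(size=(N, T)), axis=1)) / np.sqrt(np.arange(1, T + 1))

z = x[:, None] * f             # odour signals z_j(t) = x_j f_j(t)
mean_rates = A @ z             # E y_i(t) = sum_j A_ij x_j f_j(t), shape (M, T)
```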
The animal doesn’t care about the plume $f_j$, only about the odour identities $x_j$. So the key quantity is $p(\xx|\yy(0\dots T) \triangleq \YY)$. To get it, the animal has to marginalize out the plume $$ p(\xx|\YY) \propto p(\xx) p(\YY|\xx) = p(\xx) \int p(\YY|\ff, \xx) p(\ff) d\ff.$$
Performing this marginalization all at once in “batch” mode seems unrealistic, so let’s try to do it iteratively. Let’s assume that we already have an estimate $p(\xx|\yy_0^{t-1})$, and we want to update it for the new observation at time $t$. \begin{align*} p(\xx|\yy_t, \yy_0^{t-1}) \propto p(\yy_t|\xx, \yy_0^{t-1}) p(\xx|\yy_0^{t-1}).\end{align*}
Now $\yy_t$ given the past is only dependent on $\yy_{t-1}$, via the plume, specifically on how it changed in the intervening interval: \begin{align*} \EE y_i(t) &= \sum_j A_{ij} x_j f_j(t)\\ &= \sum_j A_{ij} x_j (f_j(t-1) + \Delta f_j(t))\\ &= \EE y_i(t-1) + \sum_j A_{ij} x_j\Delta f_j(t).\end{align*}
So we can switch coordinates to plume changes $\Delta \ff$ and input changes $\Delta \yy$: $$ \EE \Delta y_i(t) = \sum_j A_{ij} x_j \Delta f_j(t). $$
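Since the map from plume to mean rates is linear, differencing commutes with it, which is all this coordinate change uses. A quick numeric check (arbitrary hypothetical sizes and values):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 10, 4, 50                     # glomeruli, odours, time steps (illustrative)
A = rng.uniform(size=(M, N))            # hypothetical affinities
x = rng.uniform(0.5, 2.0, N)            # hypothetical odour concentrations
f = np.abs(rng.normal(size=(N, T)))     # hypothetical plume profiles f_j(t)

mean_y = A @ (x[:, None] * f)           # E y_i(t) = sum_j A_ij x_j f_j(t)

# Differencing commutes with the linear map, so the increments satisfy
# E Delta y(t) = A (x * Delta f(t)).
assert np.allclose(np.diff(mean_y, axis=1),
                   A @ (x[:, None] * np.diff(f, axis=1)))
```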
To avoid keeping all the deltas around, let’s rename $$ \yy \leftarrow \Delta \yy, \quad \ff \leftarrow \Delta \ff.$$
Then, dropping the $t$ subscript for now, we can write $$p(\yy|\xx) = \int p(\yy|\ff, \xx) p(\ff) d\ff.$$
Let’s assume that $\yy$ is normally distributed given $\ff$ and $\xx$ with variance $\sigma^2$, and that $\ff$ is also normally distributed, with mean 0 and variance $\sigma^2_f$. Then, up to additive constants, $$ \log p(\yy|\ff, \xx) p(\ff) = -{1 \over 2 \sigma^2} \|\yy - \AA [\xx \odot \ff]\|_2^2 - {1 \over 2 \sigma^2_f}\|\ff\|_2^2.$$
To perform the integration, let’s define $\XX = \diag(\xx)$, so $\AA \XX = \AA \diag(\xx)$. Then the (negated) exponent of our integrand is $$ {1 \over 2\sigma^2}\|\yy - \AA \XX \ff\|_2^2 + {1 \over 2 \sigma^2_f}\|\ff\|_2^2.$$ Applying the full SVD $\AA \XX = \UU \SS \VV^T$, define $\hh = \VV^T \ff$. Since $\VV$ is orthogonal, $\|\ff\|_2 = \|\hh\|_2$ and the change of variables has unit Jacobian, so we now just have to deal with $$ {1 \over 2 \sigma^2} \|\yy - \UU \SS \hh\|_2^2 + { 1 \over 2 \sigma_f^2}\|\hh\|_2^2 ={1 \over 2 \sigma^2} \|\UU^T \yy - \SS \hh\|_2^2 + { 1 \over 2 \sigma_f^2}\|\hh\|_2^2. $$ Letting $\bb = \UU^T \yy$, we can collect terms to get $$ {1 \over 2 \sigma^2}\left(\bb^T \bb - 2 \bb^T \SS \hh + \hh^T\left( \SS^2 + {\sigma^2 \over \sigma_f^2}\right) \hh\right) ={1 \over 2 \sigma^2}\left(\bb^T \bb - 2 \bb^T \SS \hh + \hh^T \DD^2 \hh\right), $$ where $\DD^2 \triangleq \SS^2 + {\sigma^2 \over \sigma_f^2}$.
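The rotation step is easy to sanity-check numerically: with the full SVD, rotating by the orthogonal $\UU$ preserves the residual norm, and rotating $\ff$ by $\VV$ preserves the prior norm (all sizes and values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 6, 3                             # glomeruli, odours (illustrative)
A = rng.normal(size=(M, N))
x = rng.uniform(size=N)
f = rng.normal(size=N)
y = rng.normal(size=M)

AX = A * x                              # A @ diag(x), via column scaling
U, s, Vt = np.linalg.svd(AX, full_matrices=True)
S = np.zeros((M, N))                    # rectangular S with singular values on the diagonal
S[:N, :N] = np.diag(s)
h = Vt @ f

# ||y - A X f||^2 = ||U^T y - S h||^2 and ||f||^2 = ||h||^2.
assert np.isclose(np.sum((y - AX @ f) ** 2), np.sum((U.T @ y - S @ h) ** 2))
assert np.isclose(np.sum(f ** 2), np.sum(h ** 2))
```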
Changing variables again to $\bg = \DD \hh$, remembering that the change of variables contributes a $\det(\DD^{-1})$ Jacobian factor outside the exponential, and adding and subtracting $\bb^T \SS^2 \DD^{-2} \bb$ to complete the square, we get $$ {1 \over 2 \sigma^2}\left(\bb^T\bb - 2\bb^T \SS \DD^{-1}\bg + \bg^T \bg\right) = {1 \over 2 \sigma^2}\left(\bb^T\bb - \bb^T \SS^2 \DD^{-2} \bb + \bb^T \SS^2 \DD^{-2} \bb - 2\bb^T \SS \DD^{-1}\bg + \bg^T \bg\right).$$
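The completing-the-square step can be verified numerically in the square case (diagonal $\SS$ and $\DD$, arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
b = rng.normal(size=n)
g = rng.normal(size=n)
s = rng.uniform(0.5, 2.0, n)            # singular values (illustrative)
c = 0.7                                 # sigma^2 / sigma_f^2 (illustrative)
S = np.diag(s)
D = np.diag(np.sqrt(s**2 + c))          # D^2 = S^2 + (sigma^2/sigma_f^2) I
Dinv = np.diag(1.0 / np.sqrt(s**2 + c))

lhs = b @ b - 2 * b @ S @ Dinv @ g + g @ g
# Adding and subtracting b^T S^2 D^-2 b completes the square in g:
rhs = (b @ b - b @ S @ S @ Dinv @ Dinv @ b
       + np.sum((g - Dinv @ S @ b) ** 2))
assert np.isclose(lhs, rhs)
```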
So, \begin{align*} p(\yy | \xx) \propto & \int \exp\left(-{1\over 2\sigma^2} \|\yy - \AA \XX \ff\|_2^2 - {1 \over 2\sigma_f^2} \|\ff\|_2^2\right)\; d\ff\\ = &{1 \over \det(\DD)} \int \exp\left(-{1\over 2\sigma^2} \left(\bb^T\bb - \bb^T \SS^2 \DD^{-2} \bb + \bb^T \SS^2 \DD^{-2} \bb - 2\bb^T \SS \DD^{-1}\bg + \bg^T \bg\right)\right) d\bg \\ = &{1 \over \det(\DD)}\exp\left(-{1\over 2\sigma^2} \left(\bb^T\bb - \bb^T \SS^2 \DD^{-2} \bb\right)\right) \int \exp\left(-{1\over 2\sigma^2} \|\bg - \DD^{-1} \SS \bb\|_2^2\right) d\bg \\ = &{\left(2 \pi \sigma^2\right)^{M/2} \over \det(\DD)}\exp\left(-{1\over 2\sigma^2} \left(\bb^T\bb - \bb^T \SS^2 \DD^{-2} \bb\right)\right). \end{align*}
Now, since $\UU$ is orthogonal, $$ \bb^T \bb = \yy^T \UU \UU^T \yy = \yy^T \yy.$$
Also, \begin{align*} \bb^T \SS^2 \DD^{-2} \bb &= \yy^T \UU \SS^2 \DD^{-2} \UU^T \yy = \yy^T \UU \left( {\SS^2 \over \SS^2 + {\sigma^2 \over \sigma_f^2}} \right) \UU^T \yy,\\ \exp\left(-{1\over 2\sigma^2} \left(\bb^T\bb - \bb^T \SS^2 \DD^{-2} \bb\right)\right) &= \exp\left(-{1 \over 2 \sigma^2}\yy^T \UU \left({\sigma^2 / \sigma_f^2 \over \SS^2 + {\sigma^2 \over \sigma_f^2}}\right) \UU^T \yy\right).\end{align*}
So, up to additive constants independent of $\xx$, \begin{align*} \log p(\yy|\xx) &= -\log \det(\DD) - {1 \over 2\sigma^2_f} \yy^T \UU \left(\SS^2 + {\sigma^2 \over \sigma_f^2}\right)^{-1} \UU^T \yy \\&= -\log \det(\DD) - {1 \over 2\sigma^2_f} \yy^T \UU \DD^{-2} \UU^T \yy\\&= {1 \over 2}\log \det(\DD^{-2}) - {1 \over 2\sigma^2_f} \yy^T \UU \DD^{-2} \UU^T \yy.\end{align*}
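As a check, this closed form should match, up to an $\xx$-independent constant, the log density of the Gaussian marginal $\yy \sim \mathcal{N}(0,\ \sigma_f^2 \AA \XX^2 \AA^T + \sigma^2 \II)$, since marginalizing a linear-Gaussian model over $\ff$ yields exactly that covariance. A numeric sketch with arbitrary hypothetical values (the $\DD^2$ spectrum is padded to full dimension, which only shifts the result by an $\xx$-independent constant):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 3                              # glomeruli, odours (illustrative)
sigma, sigma_f = 0.5, 1.3                # hypothetical noise scales
A = rng.normal(size=(M, N))
x = rng.uniform(0.5, 2.0, N)
y = rng.normal(size=M)

AX = A * x                               # A @ diag(x)
U, s, Vt = np.linalg.svd(AX, full_matrices=True)

# Spectrum of D^2 = S^2 + (sigma^2 / sigma_f^2) I, padded to M entries.
d2 = np.concatenate([s**2, np.zeros(M - N)]) + sigma**2 / sigma_f**2

# SVD form: (1/2) log det(D^-2) - (1/(2 sigma_f^2)) y^T U D^-2 U^T y.
b = U.T @ y
logp_svd = -0.5 * np.sum(np.log(d2)) - np.sum(b**2 / d2) / (2 * sigma_f**2)

# Direct marginal: y ~ N(0, sigma_f^2 A X^2 A^T + sigma^2 I).
Sigma = sigma_f**2 * AX @ AX.T + sigma**2 * np.eye(M)
_, logdet = np.linalg.slogdet(Sigma)
logp_direct = -0.5 * logdet - 0.5 * y @ np.linalg.solve(Sigma, y)

# The two agree up to the constant (M/2) log sigma_f^2.
assert np.isclose(logp_svd - logp_direct, 0.5 * M * np.log(sigma_f**2))
```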
The first term is volume-maximizing, while the second is an energy cost.
This seems complicated to solve for $\xx$, since the dependence of $\SS$ on $\xx$ is complicated. In particular, it seems unlikely that the bulb optimizes this expression at each time step to update $\xx$…
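Even so, the ideal-observer computation is easy to state: at each step, score each candidate $\xx$ by the per-increment likelihood (written below in its equivalent Gaussian-marginal form, $\Delta\yy \sim \mathcal{N}(0,\ \sigma_f^2 \AA\XX^2\AA^T + \sigma^2\II)$) and accumulate log posteriors. A toy sketch with two hand-picked candidate odour vectors; all sizes, affinities, and noise scales are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, sigma_f = 0.3, 1.0               # hypothetical noise scales
T = 200                                 # number of observed increments
A = np.array([[1.0, 0.0],               # hand-picked affinities: channels 0-1
              [1.0, 0.0],               # respond to odour 0, channels 2-3 to
              [0.0, 1.0],               # odour 1, channel 4 to both
              [0.0, 1.0],
              [0.5, 0.5]])
M = A.shape[0]

candidates = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
true_x = candidates[0]

# Simulate observed increments Delta y under the true hypothesis.
df = rng.normal(scale=sigma_f, size=(2, T))
dy = A @ (true_x[:, None] * df) + rng.normal(scale=sigma, size=(M, T))

log_post = np.log([0.5, 0.5])           # uniform prior over the two hypotheses
for t in range(T):
    for k, x in enumerate(candidates):
        AX = A * x                      # A @ diag(x)
        Sigma = sigma_f**2 * AX @ AX.T + sigma**2 * np.eye(M)
        _, logdet = np.linalg.slogdet(Sigma)
        log_post[k] += -0.5 * logdet - 0.5 * dy[:, t] @ np.linalg.solve(Sigma, dy[:, t])
log_post -= np.logaddexp.reduce(log_post)   # normalize
```

After enough increments the posterior concentrates on the true hypothesis; the point of the remark above is that recomputing this score over all candidate $\xx$ at every time step is an expensive thing to ask of the bulb.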
$$ \blacksquare$$