Tag: work

  • A Free Connectivity Non-Solution

    In this post I explore one possible unconstrained connectivity solution that turns out not to work. As before, the loss function we’re optimizing is $$ L(\ZZ) = {1 \over 2} \|\XX^T \ZZ^T \ZZ \XX - \CC\|_F^2 + {\lambda \over 2 }\|\ZZ - \II\|_F^2.$$ The gradient of this loss is $$ \nabla_\ZZ L = \ZZ (2 \XX \bE \XX^T)…
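
    As a quick sanity check of this loss and its gradient, here is a small sketch of my own (dimensions chosen arbitrarily; $\bE$ denotes the residual $\XX^T \ZZ^T \ZZ \XX - \CC$, and the regularizer’s contribution $\lambda(\ZZ - \II)$ follows directly from the second term):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 5, 20                         # neurons x stimuli; sizes chosen arbitrarily
    X = rng.standard_normal((N, M))
    C = rng.standard_normal((M, M)); C = (C + C.T) / 2   # a symmetric target
    lam = 0.1

    def loss(Z):
        E = X.T @ Z.T @ Z @ X - C        # the residual, E in the post's notation
        return 0.5 * np.sum(E ** 2) + 0.5 * lam * np.sum((Z - np.eye(N)) ** 2)

    def grad(Z):
        E = X.T @ Z.T @ Z @ X - C
        return Z @ (2 * X @ E @ X.T) + lam * (Z - np.eye(N))

    # finite-difference check of one entry of the gradient
    Z = rng.standard_normal((N, N))
    eps, (i, j) = 1e-6, (2, 3)
    dZ = np.zeros_like(Z); dZ[i, j] = eps
    numeric = (loss(Z + dZ) - loss(Z - dZ)) / (2 * eps)
    print(grad(Z)[i, j], numeric)        # the two should closely agree
    ```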

  • The Logic of Free Connectivity

    When we fit connectivity to convert the input representations into the output representations, constraining it only by regularizing its implied feedforward circuit to be approximately the identity, we find, first, that we can fit the representation data reasonably well. Below I show the observed representation of some held-out data on the left,…

  • Quantization

    We’re trying to understand the solutions when minimizing $$L(\zz) = {1 \over 2 M^2} \|\XX^T \ZZ^2 \XX - \SS\|_F^2 + {\lambda \over 2 N}\|\zz - \bone\|_2^2.$$ We can write this as $$L(\zz) = {1 \over 2 M^2} \|\RR \zz^2 - \ss \|_2^2 + {\lambda \over 2 N}\|\zz - \bone\|_2^2,$$ where the $i$’th column of $\RR$…
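
    The excerpt cuts off before defining $\RR$. A natural reading, which the sketch below checks numerically, is that the $i$’th column of $\RR$ is the vectorized outer product of the $i$’th row of $\XX$, so that $\RR \zz^2 = \mathrm{vec}(\XX^T \ZZ^2 \XX)$; treat this as my reconstruction rather than the post’s definition.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 4, 6                          # sizes chosen arbitrarily
    X = rng.standard_normal((N, M))
    z = rng.standard_normal(N)

    # Left-hand side: X^T diag(z)^2 X, flattened into a vector of length M^2
    lhs = (X.T @ np.diag(z ** 2) @ X).ravel()

    # Assumed R: column i is vec(x_i x_i^T), with x_i^T the i-th row of X
    R = np.stack([np.outer(X[i], X[i]).ravel() for i in range(N)], axis=1)
    print(np.allclose(lhs, R @ z ** 2))  # True under this reading of R
    ```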

  • Decorrelation through gain control

    Decorrelation is typically thought to require lateral interactions. But how much can we gain just by gain control? The setting as usual is $N$-dimensional glomerular inputs $\xx$, driving projection neuron activity according to $\dot \yy \propto - \sigma^2 \yy + \xx - \WW \yy$, which at steady state gives an input-output transformation $$(\II \sigma^2…
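
    A minimal simulation of these dynamics (my own sketch, with arbitrary sizes, a weak lateral weight matrix, and the time constant set to one), checking that Euler integration settles onto the steady state implied by setting $\dot\yy = 0$, namely $(\sigma^2 \II + \WW)\yy = \xx$:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 8
    sigma2 = 1.5
    W = 0.1 * rng.standard_normal((N, N))   # weak lateral weights, chosen for illustration
    x = rng.standard_normal(N)

    # Euler-integrate  dy/dt = -sigma^2 y + x - W y
    y = np.zeros(N)
    dt = 0.01
    for _ in range(5000):
        y += dt * (-sigma2 * y + x - W @ y)

    # Closed-form steady state: (sigma^2 I + W) y = x
    y_ss = np.linalg.solve(sigma2 * np.eye(N) + W, x)
    print(np.allclose(y, y_ss))             # True: the dynamics settle onto it
    ```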

  • Inference by decorrelation

    We frequently observe decorrelation in projection neuron responses. This has often been linked to either redundancy reduction or pattern separation. Can we make an explicit link to inference? A simple case to consider is $\ell_2$ regularized MAP inference, where $$ -\log p(x|y) = L(x,y) = {1 \over 2\sigma^2} \|y - A x\|_2^2 + {\gamma \over…
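
    The excerpt cuts off before the regularizer; assuming it is the usual ${\gamma \over 2}\|x\|_2^2$ term (my assumption, not necessarily the post’s exact setup), the MAP estimate is linear in $y$, as the small sketch below checks:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 10, 6                         # observation and latent dimensions, chosen arbitrarily
    A = rng.standard_normal((m, n))
    y = rng.standard_normal(m)
    sigma2, gamma = 0.5, 0.3

    # Assumed objective: L(x) = ||y - A x||^2 / (2 sigma^2) + (gamma / 2) * ||x||^2
    x_hat = np.linalg.solve(A.T @ A / sigma2 + gamma * np.eye(n), A.T @ y / sigma2)

    # The gradient of L vanishes at the MAP estimate
    g = A.T @ (A @ x_hat - y) / sigma2 + gamma * x_hat
    print(np.allclose(g, 0, atol=1e-10))  # True
    ```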

  • Changing regularization, II

    Today I went back to trying to understand the solution when using the original regularization. While doing so it occurred to me that if I use a slightly different regularization, I can get a closed-form solution for the feedforward connectivity $Z$, one without most (though not all) of the problems I was having in my…

  • Changing regularization

    This morning it occurred to me that the problems we’re having with our equation \begin{align}S^2 Z^2 S^2 - S C S = \lambda (Z^{-1} - I)\label{main}\tag{1}\end{align} are due to the regularizer we use, $\|Z - I\|_F^2$. Under this regularizer, the default behavior of the feedforward connections is to pass the input directly to the output. But it’s…
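
    To see the shape of the difficulty, here is the fully scalar version of equation (1), one mode with $s$, $c$, and $\lambda$ picked arbitrarily (my own illustration, not the post’s): multiplying through by $z$ turns it into a cubic, whose real roots can then be checked against the original equation.

    ```python
    import numpy as np

    s, c, lam = 1.3, 0.7, 0.2            # arbitrary illustrative values

    # Scalar version of  S^2 Z^2 S^2 - S C S = lambda (Z^{-1} - I):
    #   s^4 z^2 - s^2 c = lam * (1/z - 1)
    # Multiplying by z gives the cubic  s^4 z^3 + (lam - s^2 c) z - lam = 0.
    roots = np.roots([s ** 4, 0.0, lam - s ** 2 * c, -lam])
    for z in roots[np.abs(roots.imag) < 1e-10].real:
        residual = s ** 4 * z ** 2 - s ** 2 * c - lam * (1.0 / z - 1.0)
        print(f"z = {z:+.4f}, residual = {residual:.1e}")
    ```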

  • Wrangling quartics, V

    Yesterday I went to discuss the problem with one of my colleagues. He had the interesting idea of modelling $S$, and especially $S^2$, as low rank, in particular as $S = s_1 e_1 e_1^T$. That is, shifting the focus from $Z$ to $S$. I tried this out today, and although it didn’t quite pan out,…
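
    A small numerical illustration of why focusing on $S^2$ is attractive (with a decaying spectrum chosen by me, not taken from the data): squaring sharpens the decay of the singular values, so $S^2$ is much closer to rank one than $S$ itself.

    ```python
    import numpy as np

    # Illustrative decaying spectrum (my choice, not the actual S)
    s = np.array([5.0, 2.0, 1.0, 0.5, 0.25])

    def rank1_relative_error(vals):
        # Relative Frobenius error of the best rank-one approximation of a
        # matrix with singular values `vals` (sorted in decreasing order).
        return np.sqrt(np.sum(vals[1:] ** 2) / np.sum(vals ** 2))

    print(rank1_relative_error(s))       # error for S    (~0.42 here)
    print(rank1_relative_error(s ** 2))  # error for S^2  (~0.16 here)
    ```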

  • Wrangling quartics, IV

    I’m trying to make some sense of $$ {1 \over \la'} \left(S^2 \wt Z_{UU}^2 S^2 - S \wt C_{VV} S\right) + I = \wt Z_{UU}^{-1}. \label{start}\tag{1}$$ Below I’m going to drop all the tildes and subscripts, for clarity. If we left multiply by $Z$ we get $$ {1 \over \la'} Z(S^2 Z^2 S^2 - S…

  • Wrangling quartics, III

    We are trying to understand the connectivity solutions $Z$ found when minimizing the objective $$ {1 \over 2 n^2 } \|X^T Z^T Z X - C\|_F^2 + {\la \over 2 m^2}\|Z - I\|_F^2.$$ Recap: We found in the previous post that solutions satisfy $$ {1 \over \la'} \left(S^2 \wt Z_{UU}^2 S^2 - S \wt C_{VV} S …

  • Wrangling quartics, II

    In the last post on this topic, we saw that when optimizing the objective $$ {1 \over 2 n^2 } \|X^T Z^T Z X - C\|_F^2 + {\la \over 2 m^2}\|Z - I\|_F^2,$$ any solution $Z$ is symmetric and satisfies $${2 \over n^2} \left( XX^T Z^2 XX^T - X C X^T\right) + {\la \over m^2} I…
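
    A minimal numerical check of the symmetry claim (the sizes, regularization strength, and plain gradient descent are all my own choices): starting from a generic asymmetric point and descending the objective, the stationary point we land on is symmetric to numerical precision.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 4, 10                      # m neurons, n stimuli (my labels for the two sizes)
    X = rng.standard_normal((m, n))
    C = rng.standard_normal((n, n)); C = (C + C.T) / 2
    lam = 0.5

    def grad(Z):
        E = X.T @ Z.T @ Z @ X - C
        return (Z @ X @ (E + E.T) @ X.T) / n**2 + lam * (Z - np.eye(m)) / m**2

    # plain gradient descent from a generic, asymmetric starting point
    Z = np.eye(m) + 0.1 * rng.standard_normal((m, m))
    for _ in range(50_000):
        Z -= 0.01 * grad(Z)

    print(np.linalg.norm(grad(Z)))                        # ~0: a stationary point
    print(np.linalg.norm(Z - Z.T) / np.linalg.norm(Z))    # ~0: the solution is symmetric
    ```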