Category: JournalClub
-
Linking Representational Geometry and Neural Function
These are my notes on Harvey et al. “What representational similarity measures imply about decodable information.”
-
Memory erasure by dopamine-gated retrospective learning
Hastily written notes immediately after the Gatsby TNJC where this preprint was presented.
-
Notes on Toy Models of Superposition
On the Discord we’ve been discussing “Toy Models of Superposition” from Anthropic. It’s a long blog post, so these are my running notes to get people (and myself) up to speed if they’ve missed a week or two of the discussion. As I’ve started these notes midway through the discussion, I’ll start on the latest…
-
Dimension reduction of vector fields
I discuss two notions of dimension reduction of vector fields from the “low-rank hypothesis” paper, and ask which might be the ‘correct’ one.
-
The low-rank hypothesis of complex systems
In this post I will summarize the paper “The low-rank hypothesis of complex systems” by Thibeault et al.
-
Computing with Line Attractors
These notes are based on Seung’s “How the Brain Keeps the Eyes Still”, where he discusses how a line attractor network may implement a memory of the desired fixation angle that ultimately drives the muscles in the eye.
-
RL produces more brain-like representations for motor learning than supervised learning
These are my rapidly scribbled notes on Codol et al.’s “Brain-like neural dynamics for behavioral control develop through reinforcement learning” (and likely contain errors). What learning algorithm does the baby’s brain use to learn motor tasks? We have at least two candidates: supervised learning (SL), which measures and minimizes discrepancies between desired and actual states…
-
Between geometry and topology
At one of the journal clubs I recently attended, we discussed “The Topology and Geometry of Neural Representations”. The motivation for the paper is that procedures like RSA, which capture the overlap of population representations of different stimuli, can be overly sensitive to some geometrical features of the representation the brain might not care about.…
-
How many neurons or trials to recover signal geometry?
This is my transcription of notes on a VVTNS talk by Itamar Landau about recovering the geometry of high-dimensional neural signals corrupted by noise. Caveat emptor: These notes are based on what I remember or hastily wrote down during the presentation, so they likely contain errors and omissions. Motivation The broad question is then: Under what…