{"id":7703,"date":"2026-04-21T21:28:51","date_gmt":"2026-04-21T20:28:51","guid":{"rendered":"https:\/\/sinatootoonian.com\/?p=7703"},"modified":"2026-04-21T21:41:36","modified_gmt":"2026-04-21T20:41:36","slug":"a-shared-code-for-perception-and-imagery-in-ventral-temporal-cortex","status":"publish","type":"post","link":"https:\/\/sinatootoonian.com\/index.php\/2026\/04\/21\/a-shared-code-for-perception-and-imagery-in-ventral-temporal-cortex\/","title":{"rendered":"A shared code for perception and imagery in ventral temporal cortex"},"content":{"rendered":"\n<p><em>This is a brief summary of the <a href=\"https:\/\/www.science.org\/doi\/10.1126\/science.adt8343\">paper<\/a> by Wadia at el. from the Rutishauser and Tsao labs that I presented at Gatsby TNJC. My slides are <a href=\"https:\/\/docs.google.com\/presentation\/d\/1MfZ58YQcb2kyNy1qWQzEXYkYKrOEp01d6ssvy2eTUPI\/edit?usp=sharing\">here<\/a><\/em>.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"151\" src=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-13-1024x151.png\" alt=\"\" class=\"wp-image-7717\" srcset=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-13-1024x151.png 1024w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-13-300x44.png 300w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-13-768x114.png 768w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-13.png 1177w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Does the same circuitry get activated when we view an object as when we imagine it? Does the human visual system contain a generative model for images? In this paper, the authors use single unit recordings from human ventral temporal cortex (VTC) to find out!<\/p>\n\n\n\n<p>The VTC is an area deep in the visual stream that is involved in processing faces and objects. 
Epilepsy patients can have electrodes implanted in this area to monitor their condition in hospital. Such monitoring is quite boring, and patients are often willing to participate in neuroscience experiments. This gives neuroscientists a unique opportunity to record from an awake human brain while interacting with it, allowing experiments that would otherwise be impossible.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"828\" height=\"197\" src=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-3.png\" alt=\"\" class=\"wp-image-7704\" srcset=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-3.png 828w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-3-300x71.png 300w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-3-768x183.png 768w\" sizes=\"auto, (max-width: 828px) 100vw, 828px\" \/><\/figure>\n\n\n\n<p>In the present set of experiments, the authors had ~60 participants look at 500 different images while neural activity in VTC was recorded. They collected the responses of a total of ~750 neurons over 16 sessions. 
The authors found that the majority of neurons in the VTC are selective for visual categories.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"617\" height=\"570\" src=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-5.png\" alt=\"\" class=\"wp-image-7707\" style=\"aspect-ratio:1.0824287280701754;width:409px;height:auto\" srcset=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-5.png 617w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-5-300x277.png 300w\" sizes=\"auto, (max-width: 617px) 100vw, 617px\" \/><\/figure>\n\n\n\n<p>Interestingly, when they examined the way their neurons responded to images, they found that 80% used an &#8220;axis&#8221; code, a linear weighting of high-level image features. That is, $$ r|\\ff \\approx \\cc_\\text{pref}^T \\ff + c_0,$$ where $\\ff$ are the features of an image, and the coefficients are learned per neuron by least squares. <\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"823\" height=\"724\" src=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-14.png\" alt=\"\" class=\"wp-image-7721\" style=\"aspect-ratio:1.1367673179396092;width:384px;height:auto\" srcset=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-14.png 823w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-14-300x264.png 300w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-14-768x676.png 768w\" sizes=\"auto, (max-width: 823px) 100vw, 823px\" \/><\/figure>\n\n\n\n<p>They generated the basis features from the first fully connected layer of AlexNet, though the results generalized to other deep nets for visual processing. Their axis code beat category coding and exemplar coding, two other plausible coding models, in explaining the neural responses. 
<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"471\" src=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-4-1024x471.png\" alt=\"\" class=\"wp-image-7705\" srcset=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-4-1024x471.png 1024w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-4-300x138.png 300w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-4-768x353.png 768w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-4.png 1111w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Consistent with a linear code, they were able to find a linear decoder of features from responses, and thereby reconstruct the images that participants were viewing:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"781\" height=\"485\" src=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-9.png\" alt=\"\" class=\"wp-image-7712\" srcset=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-9.png 781w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-9-300x186.png 300w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-9-768x477.png 768w\" sizes=\"auto, (max-width: 781px) 100vw, 781px\" \/><\/figure>\n\n\n\n<p>In an interesting additional test of the model, they used a GAN trained to produce images from features to generate new images that lay farther along the preferred and orthogonal axes than any of the original stimuli, and found that responses along those directions behaved as their model predicted.<\/p>\n\n\n\n<p>They found that their linear codes best explained the neural responses when they were based on visual features rather than those built on higher-level semantic features, for example from later fully-connected layers in AlexNet, or features built from word 
embeddings:<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"369\" height=\"370\" src=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-12.png\" alt=\"\" class=\"wp-image-7715\" srcset=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-12.png 369w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-12-300x300.png 300w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-12-150x150.png 150w\" sizes=\"auto, (max-width: 369px) 100vw, 369px\" \/><\/figure>\n\n\n\n<p>Having established the axis code during the visual perception task, they had a subset of their patients participate in an imagery task, in which they first viewed two of the images in the dataset and then, after a pause, were asked to imagine those images.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"376\" src=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-7-1024x376.png\" alt=\"\" class=\"wp-image-7710\" srcset=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-7-1024x376.png 1024w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-7-300x110.png 300w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-7-768x282.png 768w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-7.png 1106w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Interestingly, they found that ~40% of these neurons responded when participants later imagined the images. 
Notice how the response of the neuron below to the piano image (orange trace) is high in both the perception (left) and imagery (right) conditions:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"717\" src=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-10-1024x717.png\" alt=\"\" class=\"wp-image-7713\" srcset=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-10-1024x717.png 1024w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-10-300x210.png 300w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-10-768x538.png 768w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-10.png 1077w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>The reactivated neurons used a shared code during perception and imagery in two ways. First, decoders trained on the perception data to report which image was present could successfully decode the imagery data, indicating that the neurons responded in similar ways in the two conditions:<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"441\" height=\"534\" src=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-8.png\" alt=\"\" class=\"wp-image-7711\" style=\"aspect-ratio:0.8258376005852232;width:277px;height:auto\" srcset=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-8.png 441w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-8-248x300.png 248w\" sizes=\"auto, (max-width: 441px) 100vw, 441px\" \/><\/figure>\n\n\n\n<p>Furthermore, they found that responses in the imagery sessions were well correlated with predictions along the preferred axis estimated from the perception sessions, and much less correlated with predictions along the orthogonal axis:<\/p>\n\n\n\n<figure 
class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"646\" height=\"581\" src=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-11.png\" alt=\"\" class=\"wp-image-7714\" style=\"aspect-ratio:1.1118790845278756;width:429px;height:auto\" srcset=\"https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-11.png 646w, https:\/\/sinatootoonian.com\/wp-content\/uploads\/2026\/04\/image-11-300x270.png 300w\" sizes=\"auto, (max-width: 646px) 100vw, 646px\" \/><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Summary<\/h2>\n\n\n\n<p>In summary, by recording in humans the authors were able to compare neural representations during perception and imagination. They found that neurons in VTC respond to images with a linear &#8220;axis&#8221; code built on high-level, but not semantic, image features, and that a subset of these neurons reactivates during imagery and responds in the same way. This shared code of neurons active both during perception and recall would support a generative model for images in the visual system, though much more work is needed to establish this, such as directly manipulating the neurons that encode causes while simultaneously recording the effects much farther downstream, in the early visual system, where the ultimate output of generation, visual imagery, would be.<\/p>\n\n\n\n<p>$$\\blacksquare$$ <\/p>\n","protected":false},"excerpt":{"rendered":"<p>Some of the highlights of the Science paper by Wadia et al. 
from the Rutishauser and Tsao labs comparing how ventral temporal cortex represents perceived vs. imagined images.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1,149],"tags":[],"class_list":["post-7703","post","type-post","status-publish","format-standard","hentry","category-blog","category-journalclub"],"acf":[],"_links":{"self":[{"href":"https:\/\/sinatootoonian.com\/index.php\/wp-json\/wp\/v2\/posts\/7703","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sinatootoonian.com\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sinatootoonian.com\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sinatootoonian.com\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/sinatootoonian.com\/index.php\/wp-json\/wp\/v2\/comments?post=7703"}],"version-history":[{"count":7,"href":"https:\/\/sinatootoonian.com\/index.php\/wp-json\/wp\/v2\/posts\/7703\/revisions"}],"predecessor-version":[{"id":7724,"href":"https:\/\/sinatootoonian.com\/index.php\/wp-json\/wp\/v2\/posts\/7703\/revisions\/7724"}],"wp:attachment":[{"href":"https:\/\/sinatootoonian.com\/index.php\/wp-json\/wp\/v2\/media?parent=7703"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sinatootoonian.com\/index.php\/wp-json\/wp\/v2\/categories?post=7703"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sinatootoonian.com\/index.php\/wp-json\/wp\/v2\/tags?post=7703"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}