{"id":6348,"date":"2026-03-02T14:57:16","date_gmt":"2026-03-02T14:57:16","guid":{"rendered":"https:\/\/sinatootoonian.com\/?p=6348"},"modified":"2026-03-22T12:41:15","modified_gmt":"2026-03-22T12:41:15","slug":"the-diagonal-model-with-centering","status":"publish","type":"post","link":"https:\/\/sinatootoonian.com\/index.php\/2026\/03\/02\/the-diagonal-model-with-centering\/","title":{"rendered":"The Diagonal Model with Centering"},"content":{"rendered":"\n<p>We&#8217;re going to try to make sense of the solutions to minimizing the following $$L(\\zz) = {1 \\over 2} \\|\\XX^T \\ZZ^T \\JJ \\ZZ \\XX &#8211; \\SS\\|_F^2 + {\\lambda \\over 2}\\|\\zz &#8211; \\bone\\|_2^2, \\quad \\ZZ = \\diag(\\zz), \\quad \\JJ = \\II &#8211; N^{-1} \\bone \\bone^T = \\II -\\II \\II$$<\/p>\n\n\n\n<p>The differential is $$ dL = \\tr(\\bE^T (\\XX^T d\\ZZ^T \\JJ \\ZZ \\XX + \\XX^T \\ZZ^T \\JJ d\\ZZ \\XX)) + \\lambda (\\zz &#8211; \\bone)^T d\\zz$$ so, using that $\\bE$ is symmetric, the gradient is $$ \\nabla L = 2 \\diag(\\JJ \\ZZ \\XX \\bE \\XX^T) + {\\lambda} (\\zz &#8211; \\bone).$$<\/p>\n\n\n\n<p>Expanding out the the first term, we have \\begin{align*} \\diag(\\JJ \\ZZ \\XX\\bE \\XX^T) &amp;= \\diag(\\JJ \\ZZ \\XX (\\XX^T \\ZZ^T \\JJ \\ZZ \\XX &#8211; \\SS)\\XX^T) \\\\ &amp;= \\diag(\\JJ \\ZZ \\XX \\XX^T \\ZZ^T \\JJ \\ZZ \\XX\\XX^T) &#8211; \\diag(\\JJ \\ZZ \\XX \\SS \\XX^T). \\end{align*}<\/p>\n\n\n\n<p>Expand the $\\JJ$ in the first term we get \\begin{align} \\diag((\\II &#8211; \\II\\II)\\ZZ\\XX \\XX^T \\ZZ^T (\\II &#8211; \\II \\II) \\ZZ \\XX\\XX^T) &amp;= \\diag(\\ZZ \\XX \\XX^T \\ZZ^T \\ZZ \\XX \\XX^T)\\\\ &amp;- \\diag(\\II\\II \\ZZ \\XX\\XX^T \\ZZ^T \\ZZ \\XX \\XX^T)\\\\ &amp;- \\diag(\\ZZ \\XX \\XX^T \\ZZ^T \\II \\II \\ZZ \\XX \\XX^T) \\\\ &amp;+ \\diag(\\II\\II \\ZZ \\XX \\XX^T \\ZZ^T \\II \\II \\ZZ \\XX \\XX^T).\\end{align}<\/p>\n\n\n\n<p>Because $\\ZZ$ is diagonal, the first term simplifies: \\begin{align*}\\diag(\\ZZ \\XX \\XX^T \\ZZ^T \\ZZ \\XX \\XX^T) &amp;= \\ZZ \\diag( \\XX \\XX^T \\diag(\\zz^2) \\XX \\XX^T) \\\\ &amp;= \\ZZ \\, (\\XX \\XX^T \\odot \\XX \\XX^T) \\zz^2.\\end{align*} <\/p>\n\n\n\n<p>For the second term, note that $\\diag(\\bone \\aa^T) = \\aa.$ So, \\begin{align*} \\diag(\\II\\II \\ZZ \\XX\\XX^T \\ZZ^T \\ZZ \\XX \\XX^T) &amp;= N^{-1} \\XX \\XX^T \\diag(\\zz^2) \\XX\\XX^T \\zz. \\end{align*}<\/p>\n\n\n\n<p>For the third term, note that $\\diag(\\aa \\aa^T) = \\aa^2.$ So, \\begin{align*}\\diag(\\ZZ \\XX \\XX^T \\ZZ^T \\II \\II \\ZZ \\XX \\XX^T) = N^{-1} \\ZZ (\\XX \\XX^T \\zz)^2.\\end{align*}<\/p>\n\n\n\n<p>Finally, for the fourth term, we have \\begin{align*} \\diag(\\II\\II \\ZZ \\XX \\XX^T \\ZZ^T \\II \\II \\ZZ \\XX \\XX^T) &amp;= N^{-2} \\XX \\XX^T \\zz \\zz^T \\XX \\XX^T \\zz. 
Similarly, $$ \diag(\JJ \ZZ \XX \SS \XX^T) = \ZZ \diag(\XX \SS \XX^T) - N^{-1} \XX \SS \XX^T \zz.$$

Putting these all together gives \begin{align*} {1 \over 2} \nabla L &= \diag(\JJ \ZZ \XX \bE \XX^T) + {1 \over 2} \lambda (\zz - \bone)\\ &= \diag(\JJ \ZZ \XX \XX^T \ZZ^T \JJ \ZZ \XX \XX^T) - \diag(\JJ \ZZ \XX \SS \XX^T) + {1 \over 2}\lambda (\zz - \bone)\\ &= \ZZ \, (\XX \XX^T \odot \XX \XX^T) \zz^2 - \ZZ \diag(\XX \SS \XX^T)\\&- N^{-1} \XX \XX^T \diag(\zz^2) \XX \XX^T \zz\\ &- N^{-1} \ZZ (\XX \XX^T \zz)^2\\ &+ N^{-1} \XX \SS \XX^T \zz\\ &+ {\zz^T \XX \XX^T \zz \over N^2} \XX \XX^T \zz\\ &+ {1 \over 2}\lambda (\zz - \bone). \quad \checkmark\end{align*}

For the value of $\lambda$ that we're using (around 50k), all of these components, except for the $N^{-2}$ one, are empirically quite large. So we need to incorporate all of them.

Trying to solve this for $\zz$ is hopeless, and probably wouldn't be very meaningful anyway. The approach we took before was to look at the gradient of a single element, and split it into components from that element and all others.

Let $\CC \triangleq \XX \XX^T$. Then \begin{align*} [\ZZ (\CC \odot \CC) \zz^2]_i &= z_i \sum_j C^2_{ij} z_j^2 = C^2_{ii} z_i^3 + z_i \sum_{j\neq i} C^2_{ij} z_j^2\\ [\ZZ \diag(\XX \SS \XX^T)]_i &= z_i \sum_{\mu} X_{i\mu}^2 s_\mu,\end{align*} where we've written the diagonal $\SS$ as $\SS = \diag(s_\mu)$.

Next, \begin{align*} [\CC \diag(\zz^2) \CC \zz]_i &= \sum_{j,k} C_{ij} z_j^2 C_{jk} z_k \\ &= \underbrace{C_{ii}^2 z_i^3}_{j = i = k}+\underbrace{C_{ii} z_i^2 \sum_{k \neq i} C_{ik} z_k}_{j = i \neq k} + \underbrace{z_i\sum_{j \neq i} C_{ij}^2 z_j^2}_{j \neq i = k} + \underbrace{\sum_{j,k \neq i} C_{ij} C_{jk} z_j^2 z_k}_{j, k \neq i}. \checkmark \end{align*}

Next, \begin{align*} [\ZZ (\CC \zz)^2]_i &= z_i\left[ \sum_j C_{ij} z_j \right]^2 = z_i \Big(C_{ii}z_i + \sum_{j \neq i} C_{ij} z_j\Big)^2 \\ &= z_i \Big(C_{ii}^2 z_i^2 + 2 C_{ii} z_i \sum_{j \neq i} C_{ij} z_j + \sum_{j, k \neq i} C_{ij} C_{ik} z_j z_k\Big) \\ &= C_{ii}^2 z_i^3 + 2 C_{ii} z_i^2 \sum_{j \neq i} C_{ij} z_j + z_i \sum_{j,k \neq i} C_{ij} C_{ik} z_j z_k. \checkmark \end{align*}
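Before going further, the assembled gradient above can be sanity-checked against finite differences. A minimal NumPy sketch with random stand-ins for $\XX$, $\zz$, and a diagonal $\SS$, and a small $\lambda$ purely to keep the numerical comparison well conditioned (none of this is the post's actual code):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, lam = 8, 5, 10.0
X = rng.standard_normal((N, M))
z = rng.standard_normal(N)
s = rng.standard_normal(M)**2          # diagonal of S
S = np.diag(s)
J = np.eye(N) - np.ones((N, N)) / N
C = X @ X.T

def loss(z):
    E = X.T @ np.diag(z) @ J @ np.diag(z) @ X - S
    return 0.5 * np.sum(E**2) + 0.5 * lam * np.sum((z - 1)**2)

# Closed-form half-gradient, term by term as in the expression above.
half_grad = (z * ((C * C) @ z**2)
             - z * np.diag(X @ S @ X.T)
             - C @ np.diag(z**2) @ C @ z / N
             - z * (C @ z)**2 / N
             + X @ S @ X.T @ z / N
             + (z @ C @ z) * (C @ z) / N**2
             + 0.5 * lam * (z - 1))

# Central-difference gradient for comparison.
eps, num_grad = 1e-6, np.zeros(N)
for i in range(N):
    dz = np.zeros(N); dz[i] = eps
    num_grad[i] = (loss(z + dz) - loss(z - dz)) / (2 * eps)

assert np.allclose(2 * half_grad, num_grad, rtol=1e-5, atol=1e-6)
```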
---

To understand how to deal with the new terms related to the mean-subtraction, let's first look at what our original approach looked like in these terms.

We didn't try to solve these equations; rather, we looked at how the value of each gain was determined by what was going on in the rest of the network.

The relevant gradient equation there was $$ 0 = {1 \over 2} [\nabla L]_i = {\lambda \over 2}(z_i - 1) + C_{ii}^2 z_i^3 + z_i \sum_{j \neq i} C_{ij}^2 z_j^2 - z_i \sum_\mu X_{i\mu}^2 s_\mu.$$

We rearranged this into the equation $$ z_i^3 = z_i {\sum_\mu X_{i\mu}^2 s_\mu - \sum_{j \neq i} C_{ij}^2 z_j^2 - {\lambda \over 2} \over C_{ii}^2} + {\lambda \over 2 C_{ii}^2}.$$

We then expressed this in terms of the atomic **representations** contributed by each unit, $$ \rr_i \triangleq [r_{i, \alpha \beta}], \quad r_{i,\alpha\beta} = X_{i\alpha} X_{i\beta}.$$

In these terms, $$C_{ii}^2 = \Big(\sum_\alpha X_{i\alpha}^2\Big)^2 = \sum_{\alpha, \beta} X_{i\alpha}^2 X_{i\beta}^2 = \sum_{\alpha,\beta} (X_{i\alpha} X_{i\beta})^2 = \sum_{\alpha,\beta} r_{i,\alpha \beta}^2 = \|\rr_i\|_2^2.$$

Similarly, $$ C_{ij}^2 = \Big(\sum_\alpha X_{i\alpha} X_{j\alpha}\Big)^2 = \sum_{\alpha\beta} X_{i\alpha} X_{j\alpha} X_{i\beta} X_{j\beta} = \sum_{\alpha\beta} (X_{i\alpha} X_{i\beta})(X_{j\alpha} X_{j\beta}) = \sum_{\alpha\beta} r_{i, \alpha\beta} r_{j,\alpha\beta} = \rr_i^T \rr_j.$$

Finally, if we define $\ss = \vec{\diag(s_\mu)}$, then $$\sum_\mu X_{i\mu}^2 s_\mu = \sum_{\mu \nu} X_{i \mu} X_{i\nu} s_{\mu\nu} = \sum_{\mu \nu} r_{i, \mu\nu} s_{\mu\nu} = \rr_i^T \ss.$$

We could then write $$ z_i^3 = z_i {\rr_i^T \ss - \sum_{j \neq i} \rr_i^T \rr_j z_j^2 - {\lambda \over 2} \over \|\rr_i\|_2^2} + {\lambda \over 2 \|\rr_i\|_2^2}.$$

Now $$ \ss_i \triangleq \sum_{j \neq i} \rr_j z_j^2$$ was the prediction contributed by all other units. So, defining $$ \lambda_i \triangleq {\lambda \over 2 \|\rr_i\|_2},$$ we arrived at $$ z_i^3 = z_i {(\ss -\ss_i)^T \hat \rr_i - \lambda_i \over \|\rr_i\|_2} + {\lambda_i \over \|\rr_i\|_2}.$$
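The representation identities above ($C_{ii}^2 = \|\rr_i\|_2^2$, $C_{ij}^2 = \rr_i^T \rr_j$, $\sum_\mu X_{i\mu}^2 s_\mu = \rr_i^T \ss$) are just reshaped inner products and are easy to confirm numerically. A small sketch; the names `R` and `s_vec` are ad hoc, not from the post:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 6, 4
X = rng.standard_normal((N, M))
s = rng.standard_normal(M)
C = X @ X.T

# r_i = vec(x_i x_i^T): the atomic representation contributed by unit i.
R = np.stack([np.outer(X[i], X[i]).ravel() for i in range(N)])
s_vec = np.diag(s).ravel()             # ss = vec(diag(s_mu))

i, j = 1, 3
assert np.isclose(C[i, i]**2, R[i] @ R[i])     # C_ii^2 = ||r_i||^2
assert np.isclose(C[i, j]**2, R[i] @ R[j])     # C_ij^2 = r_i^T r_j
assert np.isclose(X[i]**2 @ s, R[i] @ s_vec)   # sum_mu X_imu^2 s_mu = r_i^T s
```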
---

Let's now see if we can express the new terms in the same way.

The first is $$C_{ii} C_{ik} = \Big(\sum_\alpha X_{i\alpha}^2\Big) \Big(\sum_\beta X_{i\beta} X_{k\beta}\Big) = \sum_{\alpha \beta} X_{i\alpha}^2 X_{i\beta} X_{k\beta} = \sum_{\alpha \beta} (X_{i\alpha} X_{i\beta}) (X_{i\alpha} X_{k \beta}) \triangleq \rr_i^T \rr_{ik},$$ where we've implicitly defined the atoms of the cross-representation as $\rr_{ik} \triangleq [r_{ik,\alpha\beta}]$, $r_{ik,\alpha\beta} = X_{i\alpha} X_{k\beta}$.

We also have $$ C_{ik} C_{jk} = \Big(\sum_\alpha X_{i\alpha} X_{k \alpha}\Big) \Big(\sum_\beta X_{j \beta} X_{k\beta}\Big) = \sum_{\alpha \beta} X_{i\alpha} X_{k\alpha} X_{j\beta} X_{k\beta} = \sum_{\alpha \beta} (X_{i\alpha} X_{j\beta}) (X_{k\alpha} X_{k\beta}) = \rr_{ij}^T \rr_{k}.$$

Finally, $$ C_{ij} C_{jk} = \Big(\sum_\alpha X_{i\alpha} X_{j \alpha}\Big) \Big(\sum_\beta X_{j \beta} X_{k\beta}\Big) = \sum_{\alpha \beta} X_{i\alpha} X_{j \beta} X_{j\alpha} X_{k\beta} = \sum_{\alpha \beta} (X_{i\alpha} X_{j \beta}) (X_{j\alpha} X_{k\beta}) = \rr_{ij}^T \rr_{jk} = \rr_{ik}^T \rr_j.$$

In general, $$ C_{ij} C_{mn} = \rr_{im}^T \rr_{jn} = \rr_{in}^T \rr_{jm}.$$

---

With these definitions in hand, we can return to our original expressions.

The first is \begin{align*} [\CC \diag(\zz^2) \CC \zz]_i &= C_{ii}^2 z_i^3 + C_{ii} z_i^2 \sum_{k \neq i} C_{ik} z_k + z_i\sum_{j \neq i} C_{ij}^2 z_j^2 + \sum_{j,k \neq i} C_{ij} C_{jk} z_j^2 z_k \\ &= \rr_i^T \rr_i z_i^3 + z_i^2 \sum_{k \neq i} z_k \rr_i^T \rr_{ik} + z_i \sum_{j \neq i} \rr_i^T \rr_j z_j^2 + \sum_{j,k \neq i} \rr_{ij}^T \rr_{jk} z_j^2 z_k\\ &= \|\rr_i\|_2^2 z_i^3 + z_i^2 \rr_i^T \sum_{j \neq i} \rr_{ij} z_j + z_i \rr_i^T \sum_{j \neq i} \rr_j z_j^2 + \sum_{j,k \neq i} \rr_{ij}^T \rr_{jk} z_j^2 z_k. \checkmark\end{align*}

The second is \begin{align*} [\ZZ (\CC \zz)^2]_i &= C_{ii}^2 z_i^3 + 2 C_{ii} z_i^2 \sum_{j \neq i} C_{ij} z_j + z_i \sum_{j,k \neq i} C_{ij} C_{ik} z_j z_k\\ &=\|\rr_i\|_2^2 z_i^3 + 2 z_i^2 \rr_i^T \sum_{j \neq i} \rr_{ij} z_j + z_i \rr_i^T\sum_{j,k\neq i} \rr_{jk} z_j z_k. \checkmark \end{align*}

We also have \begin{align*}[\XX \SS \XX^T \zz]_i &= \sum_{\alpha,j} X_{i\alpha} s_\alpha X_{j \alpha} z_j = \sum_{\alpha \beta, j} X_{i \alpha} X_{j \beta} s_{\alpha \beta} z_j = \sum_{\alpha\beta,j} r_{ij, \alpha \beta} s_{\alpha \beta} z_j = \ss^T \sum_j \rr_{ij} z_j.\checkmark\end{align*}
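The cross-representation identities are equally easy to spot-check numerically. A quick sketch with an ad hoc helper `r(i, k)` for the cross-representation atoms (again, not the post's code):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 6, 4
X = rng.standard_normal((N, M))
C = X @ X.T

def r(i, k):
    # Cross-representation atom r_ik = vec(x_i x_k^T); r(i, i) is just r_i.
    return np.outer(X[i], X[k]).ravel()

i, j, k, m, n = 0, 1, 2, 3, 4
assert np.isclose(C[i, i] * C[i, k], r(i, i) @ r(i, k))   # C_ii C_ik = r_i^T r_ik
assert np.isclose(C[i, k] * C[j, k], r(i, j) @ r(k, k))   # C_ik C_jk = r_ij^T r_k
assert np.isclose(C[i, j] * C[j, k], r(i, j) @ r(j, k))   # C_ij C_jk = r_ij^T r_jk
assert np.isclose(C[i, j] * C[j, k], r(i, k) @ r(j, j))   #           = r_ik^T r_j
# The general rule: C_ij C_mn = r_im^T r_jn = r_in^T r_jm.
assert np.isclose(C[i, j] * C[m, n], r(i, m) @ r(j, n))
assert np.isclose(C[i, j] * C[m, n], r(i, n) @ r(j, m))
```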
It may also be useful to see what the prediction is in these terms: \begin{align*} [\XX^T \ZZ^T \JJ \ZZ \XX]_{\alpha\beta} &= [\XX^T \ZZ^2 \XX - N^{-1}\XX^T \zz \zz^T \XX]_{\alpha\beta} \\ &= \sum_i X_{i \alpha} z^2_i X_{i \beta} - N^{-1}\Big(\sum_i X_{i \alpha} z_i \Big)\Big(\sum_i X_{i\beta} z_i\Big) \\ &= \sum_i X_{i \alpha} X_{i \beta} z^2_i - N^{-1} \sum_{ij} X_{i \alpha} X_{j \beta} z_i z_j \\ &= \left[\sum_i \rr_i z_i^2 - {1 \over N} \sum_{ij} \rr_{ij} z_i z_j\right]_{\alpha \beta}.\checkmark\end{align*}

---

Let's now tabulate the terms: $$\begin{array}{|c|c|} \hline \textbf{Term} & \textbf{Component } i \\ \hline \ZZ (\CC \odot \CC) \zz^2 & \|\rr_i\|_2^2 z_i^3 + z_i \sum_{j \neq i} \rr_i^T \rr_j z_j^2 \\ -\ZZ \diag(\XX \SS \XX^T) & -z_i \rr_i^T \ss \\ -{1 \over N} \CC \diag(\zz^2) \CC \zz & -{1 \over N} \Big(\|\rr_i\|_2^2 z_i^3 + z_i^2 \rr_i^T \sum_{j \neq i} \rr_{ij} z_j + z_i \rr_i^T \sum_{j \neq i} \rr_j z_j^2 + \sum_{j,k \neq i} \rr_{ij}^T \rr_{jk} z_j^2 z_k\Big) \\ -{1 \over N} \ZZ (\CC \zz)^2 & -{1 \over N} \Big( \|\rr_i\|_2^2 z_i^3 + 2 z_i^2 \rr_i^T \sum_{j \neq i} \rr_{ij} z_j + z_i \rr_i^T \sum_{j,k\neq i} \rr_{jk} z_j z_k\Big)\\ {1 \over N} \XX \SS \XX^T \zz & { 1 \over N} \ss^T \sum_j \rr_{ij} z_j \\ {\zz^T \CC \zz \over N^2} \CC \zz & \approx 0 \\ {1 \over 2} \lambda(\zz - \bone) & {\lambda \over 2} (z_i - 1) \\ \hline \end{array}$$

We can plot the absolute value of each component of the gradient:

![Absolute value of each component of the gradient](https://sinatootoonian.com/wp-content/uploads/2026/03/image-1.png)
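For reference, here is roughly how such a per-term breakdown can be computed. This is only a sketch with random stand-ins for $\XX$, $\zz$, and $\SS$ (the actual plot uses the fitted model), so the printed magnitudes mean nothing in themselves:

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, lam = 100, 20, 5e4
X = rng.standard_normal((N, M))
z = 1 + 0.1 * rng.standard_normal(N)   # gains near 1, purely as a stand-in
s = rng.standard_normal(M)**2
S = np.diag(s)
C = X @ X.T

terms = {
    "Z (C o C) z^2":           z * ((C * C) @ z**2),
    "-Z diag(X S X^T)":        -z * np.diag(X @ S @ X.T),
    "-(1/N) C diag(z^2) C z":  -C @ np.diag(z**2) @ C @ z / N,
    "-(1/N) Z (C z)^2":        -z * (C @ z)**2 / N,
    "(1/N) X S X^T z":         X @ S @ X.T @ z / N,
    "(z^T C z / N^2) C z":     (z @ C @ z) * (C @ z) / N**2,
    "(lam/2)(z - 1)":          0.5 * lam * (z - 1),
}
for name, t in terms.items():
    print(f"{name:24s} mean |.| = {np.abs(t).mean():10.3g}")
```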
The big contributions due to averaging come from the orange term and the red term. It's curious that the green term is so much smaller than the red one, even though their forms are so similar.

The orange term is $$ [\CC \diag(\zz^2) \CC \zz]_i = \sum_{jk} C_{ij} C_{jk} z_j^2 z_k = \sum_{jk} \rr_{ij}^T \rr_{jk} z_j^2 z_k.$$

We can split this up further by index case: \begin{align*} (j = k = i):&\quad \rr_{ii}^T \rr_{ii}\, z_i^3 = \|\rr_i\|_2^2 z_i^3 \\ (j = i \neq k):&\quad \sum_{k \neq i} \rr_{ii}^T \rr_{ik}\, z_i^2 z_k \\ (j \neq i,\ k = i):&\quad z_i \sum_{j \neq i} \rr_{ij}^T \rr_{ji}\, z_j^2 \\ (j = k \neq i):&\quad \sum_{j \neq i} \rr_{ij}^T \rr_{jj}\, z_j^3 \\ (j \neq k,\ j, k \neq i):&\quad \sum_{\substack{j \neq k\\ j, k \neq i}} \rr_{ij}^T \rr_{jk}\, z_j^2 z_k \end{align*}

Most of the contribution is coming from the last term:

![Relative contributions of the index cases to the orange term](https://sinatootoonian.com/wp-content/uploads/2026/03/image-3.png)

The last two terms are the ones that don't involve $z_i$, and combining them accounts for most of the orange term.

But the plot above shows the relative contributions to each term. In terms of the absolute level of variation, there is a lot:

![Absolute variation of the index-case contributions](https://sinatootoonian.com/wp-content/uploads/2026/03/image-4.png)

Hmm…

We can also collect the gradient terms into various groups: \begin{align} {1 \over 2} [\nabla L]_i &= \Big(1 - {2 \over N}\Big) \|\rr_i\|_2^2 z_i^3\\ &-\ss^T \Big(\rr_i z_i - {1 \over N} \sum_j \rr_{ij} z_j\Big) \\ &+z_i \rr_i^T \Big(\RR \zz^2 - {1 \over N}\RR_{/i} \zz_{/i}^2\Big) \\ &-{1 \over N} \sum_{j,k\neq i}(z_i \rr_{ii} + z_j \rr_{ij})^T \rr_{jk} z_j z_k\\ &+{1\over 2} \lambda (z_i - 1). \end{align}
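As a numerical sanity check that the five index cases above really do partition the orange term, here's a small sketch with random stand-ins (so the relative shares it prints say nothing about the real data):

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 30, 10
X = rng.standard_normal((N, M))
z = 1 + 0.1 * rng.standard_normal(N)
C = X @ X.T
i = 0

# Full orange term for unit i: sum_{jk} C_ij C_jk z_j^2 z_k.
full = sum(C[i, j] * C[j, k] * z[j]**2 * z[k] for j in range(N) for k in range(N))
cases = {
    "j=k=i":         C[i, i]**2 * z[i]**3,
    "j=i, k!=i":     sum(C[i, i] * C[i, k] * z[i]**2 * z[k] for k in range(N) if k != i),
    "j!=i, k=i":     sum(C[i, j]**2 * z[j]**2 * z[i] for j in range(N) if j != i),
    "j=k!=i":        sum(C[i, j] * C[j, j] * z[j]**3 for j in range(N) if j != i),
    "j!=k, j,k!=i":  sum(C[i, j] * C[j, k] * z[j]**2 * z[k]
                         for j in range(N) for k in range(N)
                         if j != k and j != i and k != i),
}
assert np.isclose(full, sum(cases.values()))
for name, v in cases.items():
    print(f"{name:15s} {v / full:+.3f} of the full term")
```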
I wonder if organizing into representations like I have above is the right way to go. Perhaps there's a more appropriate coordinate system that will help collect the variability?