<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="/rss.xsl" type="text/xsl"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Context | Steven’s Diary | Slaving away for 0700.hk</title><description>As you can see, this is just a diary. #赛博鸡蛋: deals and freebies. #光与影: photography works or notes on the creative process. #吃点好: what I ate. #面基: meetups. #活动: events. #灵感菇: demos or idea notes for MVPs. Click the discussion section to join the group. Friend link: @stvgateway. Contact Steven: @stvlynn_bot</description><link>https://diary.stv.pm</link><item><title>#优质博文 #AI #LLM #Context How AI Remembers and Why It Forgets: Part 1. The Context Problem — on how large language models (LLMs) simulate memory through context, and why large amounts of information lead to “Context Rot”</title><link>https://diary.stv.pm/posts/4898</link><guid isPermaLink="true">https://diary.stv.pm/posts/4898</guid><pubDate>Tue, 07 Apr 2026 15:20:02 GMT</pubDate><content:encoded>&lt;a href=&quot;/search/%23%E4%BC%98%E8%B4%A8%E5%8D%9A%E6%96%87&quot;&gt;#优质博文&lt;/a&gt; &lt;a href=&quot;/search/%23AI&quot;&gt;#AI&lt;/a&gt; &lt;a href=&quot;/search/%23LLM&quot;&gt;#LLM&lt;/a&gt; &lt;a href=&quot;/search/%23Context&quot;&gt;#Context&lt;/a&gt;&lt;br /&gt;&lt;a href=&quot;https://www.developerway.com/posts/how-ai-remembers-and-forgets-part1&quot; target=&quot;_blank&quot;&gt;How AI Remembers and Why It Forgets: Part 1. The Context Problem&lt;/a&gt;: on how large language models (LLMs) simulate memory through context, and why large amounts of information lead to the “Context Rot” phenomenon.&lt;br /&gt;&lt;br /&gt;&lt;blockquote&gt;AI summary: The article shows that AI has no persistent memory; its so-called “memory” relies entirely on the context that is resent with every message in a conversation. Through experiments, the author demonstrates that even within a model’s advertised context window, large amounts of information cause “Context Rot”, leading to degraded performance, missed information, and hallucinations.&lt;/blockquote&gt;&lt;br /&gt;&lt;br /&gt;&lt;i&gt;by Nadia Makarevich&lt;/i&gt;&lt;a href=&quot;https://www.developerway.com/posts/how-ai-remembers-and-forgets-part1&quot; target=&quot;_blank&quot;&gt;
  
  &lt;div&gt;Developerway&lt;/div&gt;
  &lt;img class=&quot;link_preview_image&quot; alt=&quot;How AI Remembers and Why It Forgets: Part 1. The Context Problem&quot; src=&quot;/static/https://cdn4.telesco.pe/file/ZbcZcxMVE0_xRTeE6m8FnJ39vm3L162-SWekSg9wIWnjS3f2NHpU8rM0nZnCvIf2pmzzqo83dQPqE5xsuCqP6J8fLa-h5OGJRhmbTg_4UIg_FXVIuXv3X6dPAxQEFWkQ8KuEKbqKGfc8lXGk1l-xji8eFLhEzGmES3WrC3cqVhFaghQGMnMxuEZlplRBc0Ga0BABQgBVcj9vYzy_WUBQICspVwfNa-Yc0CILq5ivBgyVwiPbWuYW28zZwZlkslndkz0yJ3BgXDxwZq5-sMUi4ey-fvBT4kFOF-2_hd_gl9yfnaDZb4i3tfwOT2xsV7JfHaXcndVoD-pBfRYLGAK4sA.jpg&quot; loading=&quot;lazy&quot; /&gt;
  &lt;div&gt;How AI Remembers and Why It Forgets: Part 1. The Context Problem&lt;/div&gt;
  &lt;div&gt;How does AI actually remember things between messages, and why does it forget halfway through? I ran a few experiments on Claude Sonnet and GPT-5 and wrote down what I saw.&lt;/div&gt;
&lt;/a&gt;</content:encoded></item></channel></rss>