Currently studying Karpathy's LLM Wiki (https://gist.github.com/karpathy/442a6bf555914893e9891c11519de94f)—this is truly a brilliant design philosophy.
I have long maintained that traditional RAG solutions are overly cumbersome and often yield suboptimal results, as local vector databases are inherently limited in capturing deep semantic logic. In contrast, multi-layered, structured document hierarchies are far more compatible with an LLM's reasoning patterns. I believe treating an AI merely as a summarization engine is a significant underutilization of its intelligence. Instead of feeding the model fragmented snippets, we should expose the underlying structure and navigational paths of the data. By empowering the AI to autonomously decide its retrieval strategy—acting as a navigator within a structural map—we can fully unlock its potential for high-order reasoning and precision.
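To make the "navigator within a structural map" idea concrete, here is a minimal sketch under stated assumptions: `Node`, `outline`, and `navigate` are hypothetical names, and the `scripted` policy is a stand-in for the LLM, which in a real system would receive the outline plus the current node and return the next child to open.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One section of a multi-layered, structured document hierarchy."""
    title: str
    body: str = ""
    children: list["Node"] = field(default_factory=list)

def outline(node: Node, depth: int = 0) -> list[str]:
    """The navigational map exposed to the model: titles only, no bodies."""
    lines = ["  " * depth + f"- {node.title}"]
    for child in node.children:
        lines.extend(outline(child, depth + 1))
    return lines

def navigate(root: Node, choose) -> Node:
    """Let the policy decide the retrieval path instead of a vector lookup.

    `choose(title, body, child_titles)` returns the index of the child to
    descend into, or None to stop and read the current node."""
    node = root
    while True:
        idx = choose(node.title, node.body, [c.title for c in node.children])
        if idx is None or not node.children:
            return node
        node = node.children[idx]

def scripted(choices):
    """Stub policy: replays a fixed list of decisions, simulating the LLM."""
    it = iter(choices)
    return lambda title, body, child_titles: next(it, None)

# Toy corpus: the model sees the outline, then walks Handbook -> API -> Auth.
doc = Node("Handbook", children=[
    Node("Setup", "Install steps..."),
    Node("API", children=[
        Node("Auth", "Use bearer tokens."),
        Node("Rate limits", "60 requests/minute."),
    ]),
])

hit = navigate(doc, scripted([1, 0]))
```

The key design choice is that the model only ever sees titles until it commits to opening a section, so context stays small and every hop is an explicit, inspectable decision rather than an opaque nearest-neighbor match.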
Keep it simple; resist needless complexity.