Tag: RAG
Table lookups again, with Engram
2026-03-30 DeepSeek MoE RAG text
Popular techniques in language modelling, including RAG, MoE, and attention itself, amount to replacing as much of a neural network as possible with various kinds of table lookups. In this recent paper from DeepSeek's research group, they attempt another such replacement: shifting factual knowledge out of the model weights themselves and into a separate hash table.
Access: $ Basic
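The core idea of a hash-table memory can be sketched in a few lines: hash the recent token context into a bucket of a fixed-size embedding table and read the stored vector back out, rather than encoding that association in dense weights. The bucket count, embedding width, and n-gram length below are illustrative assumptions, not the paper's actual configuration.

```python
import hashlib
import numpy as np

# Illustrative sketch of a hashed n-gram memory (not the paper's
# actual architecture): factual associations live in a lookup table
# keyed by a hash of the local token context, instead of in weights.

TABLE_SIZE = 2**16   # number of hash buckets (assumption)
DIM = 64             # embedding width (assumption)

rng = np.random.default_rng(0)
memory_table = rng.standard_normal((TABLE_SIZE, DIM))

def ngram_bucket(tokens, n=2):
    """Hash the last n tokens into a table index."""
    key = "\x1f".join(tokens[-n:]).encode("utf-8")
    digest = hashlib.blake2b(key, digest_size=8).digest()
    return int.from_bytes(digest, "big") % TABLE_SIZE

def memory_lookup(tokens):
    """Retrieve the stored vector for the current n-gram context."""
    return memory_table[ngram_bucket(tokens)]

vec = memory_lookup(["the", "capital", "of", "france"])
print(vec.shape)  # (64,)
```

The lookup is O(1) and its capacity scales with table size rather than with the transformer's parameter count, which is what makes it an appealing place to park factual knowledge.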
Generate and read: Oh no they didn't
2025-05-21 prompting text GPT RAG hallucination
What if, instead of looking up facts in Wikipedia, you just used a language model to generate fake Wikipedia articles?
Access: Public