Discussion about this post

akash:

First, a minor correction:

> "3. RLHF: first proposed (to my knowledge) in the InstructGPT paper from OpenAI in 2022"

Deep reinforcement learning from human preferences by Christiano et al. (2017) is the foundational paper on RLHF. Link: https://arxiv.org/abs/1706.03741

Interesting perspective, and I do like the bigger question you are asking: what ended up mattering the most for the success of LLMs? Some quick thoughts and questions:

- I do think building a GPT-3-like system was certainly feasible in the 90s *if* we had the computing capacity back then (Gwern has a nice historical exposition on Moravec's predictions which I recommend: https://gwern.net/scaling-hypothesis)

- I am not convinced that just unlocking YT data would be the next big thing (for AGI, and I know you don't like AGI talk ... sorry). There is some evidence suggesting that the models are still not generalizing, but instead defaulting to a bag of heuristics and other poor learning strategies (https://www.lesswrong.com/posts/gcpNuEZnxAPayaKBY/othellogpt-learned-a-bag-of-heuristics-1). Assuming this is true, I would expect that a YT-data-trained LLM will appear much smarter, crush the benchmarks, and have a better understanding of the world, but may not be transformative. Massively uncertain about this point, though.

- "perhaps we would’ve settled with LSTMs or SSMs" — are there any examples of LSTM-driven language models that are comparable to Transformer-based LLMs?

- Relatedly, I think the importance of adaptive optimizers is being under-emphasized here. Without Adam, wouldn't LLM training be >2x more expensive and time-consuming?
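For readers unfamiliar with why Adam matters here, a minimal sketch of its update rule (Kingma & Ba, 2015): unlike plain SGD, which applies one global step size to every parameter, Adam rescales each parameter's step by running estimates of the gradient's first and second moments. The function name and hyperparameter defaults below are illustrative, not from any particular library:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update on parameters w, given gradient grad at step t (t >= 1)."""
    m = beta1 * m + (1 - beta1) * grad           # running mean of gradients
    v = beta2 * v + (1 - beta2) * grad**2        # running mean of squared gradients
    m_hat = m / (1 - beta1**t)                   # bias correction for zero init
    v_hat = v / (1 - beta2**t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter adaptive step
    return w, m, v
```

The division by `sqrt(v_hat)` is what makes the effective step size per-parameter: coordinates with consistently large gradients take smaller steps, which is especially helpful at the scale and heterogeneity of transformer training.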

Julie By Default:

I loved this. Yes. Exactly. People talk about AI like it’s inventing things — but this cuts right through that. Most of what we call “generation” is really just recombination, powered by increasingly structured inputs — from us. The breakthroughs weren’t big ideas; they were new ways to learn from new kinds of data.

That’s what makes this piece so sharp: it’s not dismissive of research, just honest about where progress actually comes from. Not magic. Not models. Infrastructure. Access. The moment a new dataset becomes legible at scale, everything shifts — and we call it innovation.

And it’s not just AI. In product too, the surface gets all the credit, but the real leverage sits underneath — in what’s visible, counted, or quietly baked into the defaults.
