Curriculum
Six months of carefully curated readings that build on each other — from understanding the AI landscape to leading your team through it. New months unlock as the program progresses.
The Landscape
Understanding where we are
We begin with two foundational perspectives: a legendary AI researcher's hard-won lesson about what actually works in AI, and a practical look at why management is about to become a superpower in an AI-powered world.
The Bitter Lesson
Rich Sutton
A brief, paradigm-shifting essay from one of AI's founding researchers. Sutton argues that the biggest lesson from 70 years of AI research is that general methods leveraging computation win over human-designed approaches — every single time.
Why Management Is Becoming an AI Superpower
Ethan Mollick
Mollick makes the case that managers — not engineers — may be the biggest beneficiaries of AI. The skills that matter most (delegation, quality control, orchestration) are exactly what working with AI demands.
The Vision
What the optimists see
Two of the most ambitious thinkers in AI lay out their visions for what this technology could become — from curing diseases to solving climate change. These are the bull cases, presented with intellectual rigor.
Machines of Loving Grace
Dario Amodei
The CEO of Anthropic paints a detailed picture of how AI could transform biology, neuroscience, economic development, and governance — if we get the deployment right. A rare optimistic essay from someone deeply concerned about risks.
Nobel Prize Lecture: AI for Scientific Discovery
Demis Hassabis
DeepMind's co-founder and CEO delivers his Nobel Prize lecture on AlphaFold and the future of AI-driven science. A masterclass in how AI is already revolutionizing our understanding of biology.
The Adolescence of Technology
Dario Amodei
Anthropic's CEO examines the civilizational risks of powerful AI — from autonomy failures to misuse for mass destruction and authoritarian control. Rather than doomerism, Amodei advocates measured, evidence-based defenses including constitutional AI, interpretability research, and strategic government intervention.
The Risks
What could go wrong
Before we can evaluate AI's risks, we need to understand the architecture that made it all possible — and then confront the critiques of what it's become. This month pairs the foundational paper behind every modern AI system with a landmark critique of where it all went wrong.
Attention Is All You Need
Vaswani, Shazeer, Parmar, et al.
The 2017 paper that started it all. A team at Google introduced the Transformer — the architecture behind GPT, Claude, and every major AI system today. You don't need to understand the math to grasp the key insight: by letting a model attend to all parts of an input at once, they unlocked a new era of AI capability.
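For the technically curious, that key insight can be sketched in a few lines. This is a toy illustration of scaled dot-product attention — the building block the paper introduces — not the full multi-head Transformer; the dimensions and variable names here are illustrative, not from the paper's experiments.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every query attends to every key at once."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: a probability over keys
    return weights @ V                               # weighted mix of value vectors

# Toy example: 3 tokens, each represented by a 4-dimensional vector
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = attention(Q, K, V)
print(out.shape)  # each token's output blends information from all tokens at once
```

The point to notice is that no token waits on any other: all pairwise interactions happen in one matrix multiply, which is what made the architecture so much easier to parallelize than the recurrent models that came before it.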
On the Dangers of Stochastic Parrots
Emily Bender, Timnit Gebru, et al.
The paper that shook Google and became a touchstone for AI ethics. Bender and Gebru argue that large language models carry environmental costs, encode biases, and create an illusion of understanding that can cause real harm.
The Debate
Competing worldviews
The Practice
Making it real
The End & The Beginning
What we've learned, where we're going