Lessons from AlphaGo
What should we learn from a game bot?
I’ll admit it: I was among the confident majority who expected Lee Sedol to crush Google DeepMind’s challenger in 2016. Back then, the consensus was clear—computers simply couldn’t compete with professional Go players. Unlike chess, Go has vastly more possible positions—far too many for brute-force search. Even professionals often relied on intuition over calculation, drawing on years of pattern recognition built through countless matches.
We couldn’t have been more wrong.
AlphaGo’s 4-1 victory didn’t just defeat Lee Sedol; it revealed something profound. The AI played more “human” than any human ever had—combining flawless calculation with the kind of creativity masters had pursued for centuries. While I won’t dive into the technical mechanics here (DeepMind’s paper and documentary cover that brilliantly), what matters is what AlphaGo’s triumph meant for our future.
The Birth of General-Purpose AI
Though AlphaGo was designed solely for Go, its underlying technology—deep neural networks and reinforcement learning—foreshadowed a revolution that would touch every corner of human activity. If AI could master the world’s most complex board game, what else was possible?
We didn’t wait long for an answer. In 2022, ChatGPT, powered by GPT-3.5, arrived like a thunderclap. Unlike its specialized predecessors, this was AI that could discuss virtually anything, often indistinguishably from a human. Just as AlphaGo stunned the Go community, ChatGPT announced to the world that a new computing paradigm had arrived.
The response was immediate and predictable. High school students discovered their new favorite essay writer (yes, most of my classmates jumped on this). Teachers, caught off guard by their students’ sudden literary brilliance, scrambled for AI detection tools that often flagged innocent work. It was technological disruption at its most chaotic.
Beyond the Honeymoon Phase
Three years later, here’s what I observe: most people still use AI as a sophisticated shortcut—for homework, quick answers, or automating tedious tasks. But there’s a crucial divide emerging.
The creators of large language models envisioned something more transformative—a world where AI enhances understanding rather than replacing thought. And indeed, there’s a growing cohort of professors, researchers, and engineers who use LLMs not to do their work, but to eliminate inefficiencies within it. They produce higher quality output in less time, and crucially, they understand what they’re creating.
The difference? One group uses AI to avoid thinking. The other uses it to think better.
The Job Question Nobody Wants to Answer
After AlphaGo’s victory, doomsayers predicted the end of professional Go players. Reality proved more nuanced. Audiences still preferred watching humans compete, so professionals kept their careers. But the game transformed fundamentally.
Where players once studied historical matches and practiced against peers, they now train extensively with AI. The quality of play has undeniably improved, though critics argue that many players have sacrificed their unique styles in pursuit of AI-suggested optimal moves. They’ve become better players, but perhaps less distinctive ones.
This pattern extends far beyond Go. Consider the programmer layoffs that dominated Silicon Valley headlines recently. While many blame AI, they overlook the concurrent economic downturn and a critical distinction: not all programming is created equal.
Some coding work—translating senior developers’ pseudocode into specific languages, for instance—requires minimal creativity and can be learned in months. This work is vulnerable not just to AI, but to any competent programmer willing to work for less.
But programmers who digest hundreds of pages of documentation, synthesize complex requirements, and architect novel solutions? For them, AI is a powerful ally—accelerating research, catching errors, suggesting optimizations. They’re not being replaced; they’re being amplified.
The uncomfortable truth: AI doesn’t replace jobs indiscriminately. It replaces tasks that don’t require deep expertise.
What AlphaGo Taught Us About Tomorrow
We’re witnessing the most significant shift since the industrial revolution. History shows us that such transformations follow a pattern: certain jobs vanish, but new ones emerge. The printing press eliminated scribes but created publishers. The internet killed travel agents but birthed web developers.
The AI revolution will be no different, with one crucial caveat: the bar for irreplaceability is rising dramatically.
The jobs most vulnerable to AI displacement share a common thread—they don’t demand specialized knowledge or creative problem-solving. Meanwhile, the value of true expertise is skyrocketing. I predict we’ll see renewed appreciation for deep specialization, advanced research, and yes, those PhD programs everyone said were becoming obsolete.
AlphaGo didn’t just beat Lee Sedol at Go. It showed us that when machines can mimic human intuition, human value shifts toward what machines cannot replicate: genuine creativity, contextual understanding, and the ability to navigate ambiguity with wisdom rather than algorithms.
The question isn’t whether AI will change everything—AlphaGo settled that in 2016. The question is whether we’ll use these tools to augment our thinking or outsource it entirely.
That choice, at least, remains uniquely human.