June 1, 2025

The Game Has Changed

Do you remember Google’s advanced search features like Boolean operators, quotation marks, and site-specific searches? When I needed homework answers, I’d search for exact questions or add “reddit” to find someone who’d already solved it. The tools were there, but for most of us the benefit of a sharper search never outweighed the cost of learning them. The results were “good enough.”

Now we have Large Language Models containing instantly accessible knowledge from generations of humans, and something strange has happened: the bar for “good enough” has dropped even lower. Forget the clever queries or specific formatting. Just ask anything, however incoherent and context-free, and the LLM will produce something. We have some of the most sophisticated tools ever created, yet most people use them like Google 2.0. Others make the opposite mistake, throwing impossibly complex tasks with vague instructions at AI and getting frustrated when it can’t read their minds.

Using this tool effectively means being willing to adjust our preconceived notions of learning and to understand what we’re working with.

The Death of T-Shaped Learning

The T-shaped learning model has dominated professional development for decades: deep expertise in one domain (the vertical bar) combined with broad collaborative skills (the horizontal bar). The model made sense when acquiring deep knowledge required years of dedicated study. Master one thing deeply, then branch out as time allows.

AI shatters that model.

When you can drill down into any subject in days instead of years, the economics of learning shift. More importantly, when you have a tool that can drill anywhere, you only need to know where to drill. You don’t need to do the drilling yourself.

This creates what I call “comb-shaped” learning: an extremely wide horizontal bar with multiple shallow teeth that can be deepened on demand. You become an orchestrator of knowledge rather than a specialist in one domain. The human identifies where to dig; the AI does the digging.

New Learning Principles

First, you must know what you don’t know. Think of AI as a genie that grants wishes, but only the ones you can articulate. If you can’t identify your knowledge gaps, you can’t direct the AI to fill them. Rather than learning everything upfront, employ just-in-time learning: encounter a barrier, learn enough to articulate the need to an LLM, then let AI handle the implementation.

More crucially, breadth beats depth in this new world. AI can’t make creative leaps between unrelated fields. That’s still uniquely human. While AI excels at execution within specific domains, only humans can recognize that a solution from mathematics might also solve a problem in biology. Your job is to collect mental models from everywhere: computer science, liberal arts, finance, and history. AI handles the implementation, but the creative connections remain yours.

Why This Feels Like Cheating

If this shift feels wrong, you’re not alone. We’ve built entire identities around hard-won expertise. Those late nights debugging code, the satisfaction of finally grasping complex concepts. It’s who we are. Now, a machine produces in seconds what took us years to master.

The fear is real. “What if the AI stops progressing, or stops working entirely?” We’re being asked to depend on something we don’t fully control, but we’ve made this trade before. We gave up manual transmissions for automatics, paper maps for GPS, and command lines for GUIs. Each time, we sacrificed granular control for higher-level capability.

This is the illusion of control. Like drivers who feel safer at the wheel despite autonomous vehicles being statistically safer, we cling to manual processes even when AI produces better results. We’re not really in control. We’re just comfortable with our particular level of abstraction.

Winners and Losers

AI creates a clear divide between those who adapt and those who don’t.

The Losers:

  • Programmers who insist “real developers write everything from scratch”
  • Controllers who need to understand every line of AI-generated code
  • Anyone who mistakes process for progress
  • Those who lean on AI exclusively and neglect their own learning

The Winners:

  • Orchestrators who direct systems without controlling every detail
  • Strategic learners who focus on what to learn, not how to memorize
  • Creative thinkers who combine broad knowledge with AI execution
  • Those who see AI as a thinking partner, not a threat or something beneath them

The fundamental difference is that winners understand they’re conductors, not first violinists. They know what good output looks like and can guide AI toward it, but they don’t need to play every instrument perfectly. And they continue to learn every day.

The Only Skill That Matters

When AI handles execution, the meta-skill becomes learning how to learn with AI as your partner. That means developing conversational fluency with these systems: knowing how to probe their knowledge, challenge their assumptions, and guide them toward better solutions. It means asking better questions, because the quality of your prompts determines the quality of your results.

Most importantly, it means embracing the discomfort. If using AI feels like cheating, you’re probably doing it right. Every generation has tools that would seem like cheating to their predecessors. You’re just the latest in a long line leveraging better tools for more ambitious goals.

The Choice Is Clear

I know that every new technology attracts fanatics who bang the drum and claim the world will change forever. I don’t want to add to that noise, but I believe AI is more than a learning or productivity tool. It compels us to reassess what it means to be knowledgeable in the modern world.

Those who develop the meta-skills to orchestrate AI will possess unprecedented capabilities. Those who cling to traditional approaches will find themselves increasingly isolated, defending methods that no longer provide a competitive advantage.

Ultimately, the future belongs to those who can effectively orchestrate knowledge.