The Human Operating System

ai · society · career · education

I need to get the disclaimers out of the way again.

I work at AWS. The opinions in this post are my own and do not represent Amazon, AWS, or any of its subsidiaries.

The calculator rule

When I was in school, my math teachers had a rule: you can use a calculator on homework, but not on tests. I didn’t take this seriously at first. I’d blast through homework with the calculator, get the answers right, feel good about it. Then I’d sit down for the test and stare at the page. The numbers didn’t mean anything to me. I could set up the problem, but when it came time to actually work through it, I was lost. My early test scores reflected this pretty clearly.

So I stopped using the calculator on homework. Not because I wanted to. It was slower, more frustrating, and I got things wrong a lot more often. But somewhere in that struggle, something shifted. The numbers started to make sense in a way they hadn’t before. Not as symbols I was pushing through formulas, but as things I could feel. Proportions, relationships, patterns. They became intuitive. I started being able to do mental math not because I memorized tricks, but because I’d built an internal sense for how numbers behave.

That sense has stayed with me my entire life. I can do mental math faster than most people I know, and it’s not because I’m gifted at it. It’s because I spent a period of time doing it the hard way, and the hard way built something the calculator never would have.

That lesson keeps coming back to me. Not because of math.

The operating system

Underneath every skill you’ve ever developed, whether it’s writing, mathematics, diagnosing a patient, or debugging a system, there’s a layer of cognitive architecture that makes the skill possible. Let’s call it the operating system.

The OS isn’t the skill itself. It’s the ability to reason about problems. To hold complexity in your head. Form mental models, test them against reality, refine them when they’re wrong. Notice when something feels off before you can articulate why. It’s what separates someone who can use a tool from someone who can evaluate what the tool produces.

This operating system is the most valuable thing a human being can develop. And you can’t install it. It doesn’t transfer through observation or instruction alone. It gets built, slowly and painfully, through direct cognitive struggle. Being wrong and figuring out why. Sitting with a problem that doesn’t have an obvious answer and wrestling with it until something clicks. Doing things the hard way enough times that the patterns become part of how you think, not just things you know.

Learning scientists have a name for this: desirable difficulty. Robert Bjork at UCLA has spent decades studying how humans learn, and one of the most replicated findings in the field is that conditions that make learning feel harder actually produce dramatically better long-term retention and transfer. Struggle isn’t a side effect of learning. It’s the mechanism. The friction is what writes the neural pathways.

A landmark study in Psychological Science by Roediger and Karpicke put numbers to this. Students who were tested on material, forced to actively retrieve it from memory rather than simply reread it, forgot only 13% over two days. Students who restudied the same material forgot 56%. The effortful path felt worse in the moment but cut forgetting to roughly a quarter. The cruelest part is that easy learning feels more effective while you’re doing it. You feel like you’re getting it. The fluency is an illusion. The real learning is happening in the discomfort.

This isn’t limited to textbooks. In 2011, Sparrow, Liu, and Wegner published a study in Science showing that when people expect to have access to information, when they know they can just look it up, they’re significantly less likely to remember the information itself. Instead, they remember where to find it. The researchers framed it as transactive memory, with the internet as the partner we offload remembering to. The mere availability of a tool that can do the thinking for you changes how your brain engages with the material. You offload the cognitive work before you even make a conscious decision to. It just happens.

A 2020 study from McGill University published in Scientific Reports pushed this further. Researchers tracked GPS users over time and found that habitual GPS use predicted a steeper decline in hippocampal-dependent spatial memory, and the longitudinal data pointed one way: the GPS use came first, the decline followed. People didn’t start using GPS because they had poor navigation skills. The GPS eroded the skills they had. The brain adapted to the outsourcing and let the unused pathways weaken. The tool didn’t just do the work. It quietly rewired the operator.

We already know this

We don’t just know this from studies. We’ve organized entire institutions around it.

A pilot learns to fly manually for hundreds of hours before touching the autopilot. Not because manual flight is the future of aviation, but because the cognitive model of flight, the instinct for when something is wrong, the spatial awareness, the ability to take over when automation fails, only comes from doing it with your own hands on the controls.

When autopilot fails at 35,000 feet, the pilot with 500 hours of manual flight has an operating system that responds. The one who trained on autopilot from day one has a gap where instinct should be.

The aviation industry learned this the hard way. NASA research on automation bias documented how pilots who over-relied on automated systems became less vigilant and more error-prone, a phenomenon the industry called “children of the magenta line.” Pilots so dependent on the autopilot’s magenta navigation path that they struggled to fly manually when the system disengaged. We regulate this now. We require the hard way first, because the stakes are too high to skip it.

Same thing in medicine. Students don’t start with AI-assisted diagnosis. They spend years learning anatomy, physiology, and pathology, building a mental model of how the body works from first principles. They do residencies where they’re exhausted and overwhelmed and making hard calls with incomplete information, because that’s how the diagnostic operating system gets built. When they eventually use diagnostic tools, they’re not just reading the output. They’re evaluating it against a deep internal model that tells them when the tool might be wrong.

A musician who learned theory and spent years on scales and ear training hears things in production software that someone who started in the software never will. Not because they’re more talented. Because their operating system processes sound differently. The tool is the same. The human running it isn’t.

We accept all of this. The foundation comes before the tool. The OS gets built through struggle, not convenience. And the tool is only as powerful as the human operating it.

Where it gets complicated

Over the past couple of months, I’ve been writing about building with AI, about how it changes what’s possible, and most recently about cognitive atrophy, the growing body of research showing that AI dependency dulls the very skills it’s supposed to augment. I use AI every day. I’ve built a production platform with it that I couldn’t have built alone. I am genuinely enthusiastic about this technology.

But AI is easy to use. Using it well is not. And the thing that makes someone good at using AI isn’t AI skill. It’s the operating system they built before AI showed up.

When I direct Claude to build a feature, the value I add isn’t in the prompting. It’s in the 20 years of engineering judgment that lets me decompose the problem correctly, evaluate the output critically, catch the architectural violations, and know when the code looks right but isn’t. That operating system took decades to build, and it was built the hard way. Through debugging at 2am. Through shipping things that broke. Through writing bad code and slowly learning what good code looks like.

If I’d had AI when I was starting out, if I could have skipped the struggle, produced working code without understanding it, gotten the reward without the reps, would I have built the same operating system? I don’t think so. Not because I lack discipline, although if you’ve seen me around a basket of fries, you know the answer there too. But because I’m human, and when there’s an easier path to the same short-term outcome, we take it. That’s not a character flaw. That’s how our brains work.

The research is piling up

An MIT Media Lab study tracked 54 participants writing essays over four months using ChatGPT, Google, or no tools. EEG monitoring showed that ChatGPT users had the weakest brain connectivity and lowest cognitive engagement of any group. Over the course of the study, they got progressively lazier. By later sessions, most were copy-pasting. When researchers switched them to working without tools, their neural patterns showed under-engagement. The operating system was already adapting to not being needed.

Microsoft and Carnegie Mellon surveyed 319 knowledge workers and found a direct correlation: the more confidence people placed in AI, the less critical thinking they applied. Without routinely keeping that thought process active, the researchers warned, “cognitive abilities can deteriorate over time.”

A controlled experiment at Corvinus University in Budapest split roughly 95 students into AI-permitted and no-AI groups. When tested without AI, the AI group’s knowledge had dropped an estimated 20-40 percentage points compared to previous cohorts. They performed well with the tool. They’d learned almost nothing.

And the Lancet study I wrote about last week: experienced doctors whose polyp detection rates declined measurably after just a few months of AI-assisted practice. The tool helped while it was on. When it was removed, the skill had already started to erode.

Every study lands in the same place. AI boosts immediate performance while undermining durable capability. Researchers have a name for it: the “performance paradox.” You get better and learn less at the same time.

The developing brain

All of the studies above involved adults whose operating systems were already built. The atrophy is concerning, but the foundation is still there. It can be recovered with deliberate effort. I’m a senior engineer whose SQL has gotten a little soft from leaning on AI, but the mental model is intact. I can sharpen it back up.

Now imagine what happens when the operating system was never built in the first place.

The prefrontal cortex, the part of the brain responsible for planning, decision-making, evaluating consequences, and regulating impulses, is one of the last brain regions to reach maturity. The National Institute of Mental Health puts this around age 25. More recent research suggests key wiring and network efficiency continue developing into the early 30s. This is the period when the human operating system is being compiled. The neural pathways that support critical thinking, complex reasoning, and judgment are actively forming and strengthening.

This is also exactly the period when AI is most seductive. You’re in school and under pressure to perform. You’re in your first job and under pressure to produce. AI offers a shortcut to both, and the shortcut works in the short term. You get the grade. You ship the feature. You get the promotion.

But you’re skipping the reps that build the operating system. And unlike the experienced doctors whose skills softened, you don’t have a foundation to fall back on. The muscle isn’t atrophying. It never formed.

This isn’t social media

I know what you might be thinking. “Here we go again. Another ‘protect the children from the internet’ argument.” I understand the fatigue. We’ve been through this with social media, with violent video games, with television before that. The pattern is always the same: new technology arrives, people panic, the panic gets watered down by lobbying, and we end up with a checkbox that says “are you 13?” that nobody takes seriously.

But AI and social media are not the same category of problem. Social media affects mood, attention, and self-image. Those are real harms and we should have taken them more seriously than we did. But they’re behavioral health issues. A teenager who spends too much time on Instagram may struggle with anxiety and distraction. With the right support, they recover. The underlying cognitive architecture is intact.

AI affects the ability to think itself. The studies we’ve looked at aren’t measuring mood or screen time. They’re measuring neural connectivity, knowledge retention, and diagnostic capability. The MIT study showed weakened brain connectivity on EEG. The Budapest study showed students who learned almost nothing despite performing well. The Lancet study showed experienced professionals losing clinical skills in months. These are measurements of the cognitive operating system degrading or failing to form.

And the most dangerous part — AI dependency is invisible in a way social media dependency never was. A kid scrolling TikTok for six hours, you can see that. A kid who used ChatGPT to write an essay that reads like they understood the material? That looks like learning. The teacher can’t tell, the parent can’t tell, the student might not even realize it. The output is indistinguishable. But the learning isn’t happening.

Social media stunts attention. AI can stunt cognition. One is a behavioral problem. The other is a developmental one that may not be fully reversible if the critical window passes. They do not deserve the same halfhearted response.

We have the tools

The reason we failed with social media wasn’t technical. It was political. The platforms that profited from young users lobbied against meaningful regulation, and we let them. COPPA is a joke. Age gates are a checkbox. We treated a structural problem as an individual responsibility and told parents to figure it out. The result is exactly what you’d expect.

We should learn from that failure, not repeat it. And the good news is that AI is actually easier to regulate than social media, if we decide it matters enough.

Social media is the open web, millions of sites with no central control point. AI is different. A handful of model providers control the whole thing: OpenAI, Anthropic, Google, Meta. Every meaningful AI interaction passes through an API run by one of these companies, and that’s a chokepoint. You don’t need to police the entire internet. Regulate the providers and the restriction flows downstream to every application built on top of them.
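To make the chokepoint concrete, here’s a minimal sketch of what default-closed, provider-side gating could look like. It’s purely illustrative: the endpoint, the credential shape, and every name in it are my assumptions, not anything OpenAI, Anthropic, Google, or Meta actually exposes today.

```python
# Illustrative sketch only. The architectural point: if the provider's own
# API refuses unverified traffic, every application built on top inherits
# the restriction, because no downstream app can mint a credential it
# doesn't hold.

from dataclasses import dataclass


@dataclass
class AgeCredential:
    """A KYC-style credential issued by an identity provider,
    not a self-reported 'are you 13?' checkbox."""
    verified_age: int
    issuer: str


def complete(prompt: str, credential: AgeCredential | None) -> str:
    # Default-closed: no verified credential, no response.
    if credential is None:
        raise PermissionError("verified-age credential required")
    if credential.verified_age < 18:
        raise PermissionError("minors are not served at the provider level")
    return "...model response..."
```

One rule at one endpoint, and the restriction propagates to every wrapper and app downstream. That’s the structural difference from policing millions of independent websites.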

We already do serious identity verification in industries where we’ve decided the stakes justify it. Banking has KYC. Regulated gambling markets require government ID. You can’t buy a firearm without a background check or fill a prescription without verified identity. None of these are suggestions. They’re legal mandates with real penalties for non-compliance, and they work. Not perfectly. But well enough that the norm shapes behavior across the entire industry.

A bar that serves a 19-year-old risks losing its liquor license. That’s why compliance is high. Not because bartenders are virtuous, but because the consequences of non-compliance are severe enough to change behavior. If AI providers faced equivalent liability for serving unverified minors, they would build verification infrastructure overnight. They haven’t built it because nobody has required them to.

The device-level infrastructure exists too. Apple Screen Time, Google Family Link, and Microsoft Family Safety can already block specific apps and services at the operating system level. If AI applications were legally required to integrate with these parental control frameworks, the way gambling apps are in regulated markets, enforcement reaches into the home in a way that school-level bans alone never can.

Yes, some kids will circumvent it. Some teenagers get fake IDs and buy alcohol too. We don’t repeal the drinking age because enforcement isn’t perfect. A law doesn’t need 100% compliance to reshape behavior. It needs enough compliance to establish the norm, and enough consequences to make non-compliance costly.

What graduated access looks like

Most of it works through institutions that already have the authority to act.

K-12: No AI assistance. Students build core skills the way humans always have: through struggle. Reading comprehension, mathematical reasoning, writing, logical thinking, problem-solving. AI can be taught as a subject, because digital literacy matters, but not used as a cognitive crutch. Federal education funding gets tied to AI-free cognitive development standards, the same way it’s already tied to other educational benchmarks. Schools already control tool access. This extends an existing mechanism.

Higher education and early career (18-25): Structured, gated introduction. Students demonstrate unassisted competence before gaining access to AI-assisted tools, the same way a pilot demonstrates manual proficiency before training on autopilot. A CS student writes and passes exams by hand before using Copilot on projects. A medical student diagnoses without AI before using diagnostic tools. Professional certifications remain AI-free. Model providers implement age-verified access at the API level — accounts for 18-25 users get provisioned through schools or employers, with usage visible to the institution.

25 and older: Full access. The prefrontal cortex has reached functional maturity. The cognitive foundation has been built through years of effortful learning. AI becomes a force multiplier on top of genuine capability: experienced humans augmented by powerful tools, producing work that neither could alone.

It’s the same graduated model we use for driving. Learner’s permit, restricted license, full license. We don’t question the logic when it applies to cars. The question is why we’d treat the most powerful cognitive tool in history with less regulation than a Honda Civic.
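To put the tiers in one place, here’s the same sketch extended to graduated access. Every name and threshold is my own illustration of the scheme above, not any real provider’s policy; the sponsor field is a hypothetical hook for school- or employer-provisioned accounts.

```python
# Hypothetical encoding of the graduated-access tiers described above.
# Thresholds mirror the K-12 / 18-25 / 25+ scheme; all names are invented.

from enum import Enum


class Tier(Enum):
    BLOCKED = "blocked"  # under 18: no AI assistance
    GATED = "gated"      # 18-25: institution-provisioned, usage visible
    FULL = "full"        # 25 and older: unrestricted


def tier_for(verified_age: int, sponsor_id: str | None) -> Tier:
    if verified_age < 18:
        return Tier.BLOCKED
    if verified_age < 25:
        # The gated tier exists only through a school or employer, the way
        # a learner's permit exists only with a licensed driver in the car.
        return Tier.GATED if sponsor_id else Tier.BLOCKED
    return Tier.FULL


# A 20-year-old on a university-provisioned account gets gated access; the
# same student on a personal account gets none. A 32-year-old needs no
# sponsor at all.
assert tier_for(20, "state-university") is Tier.GATED
assert tier_for(20, None) is Tier.BLOCKED
assert tier_for(32, None) is Tier.FULL
```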

The hard questions

Equity. The obvious objection: wealthy families will give their kids AI anyway, and restrictions only handicap the kids who follow the rules. This concern is real. But think about who is most vulnerable to AI-dependent development. It’s not the kid with parents who monitor screen time, enforce homework discipline, and invest in enrichment. Those families will self-impose the restriction. They already limit screen time and choose schools based on rigor. The kid most at risk is the one without those guardrails, the one handed a Chromebook and left to figure it out, whose path of least resistance runs straight through ChatGPT for every assignment. Protecting cognitive development through public institutions is the equalizer. It’s the one lever that reaches every kid regardless of what’s happening at home.

Global competition. If we restrict AI for young people and other countries don’t, we fall behind. This sounds compelling until you examine what the “advantage” actually is. AI is not hard to learn. Knowing how to direct it, when to challenge it, and where it’s wrong, that’s hard. Those skills come from the cognitive foundation, not from early access to the tool. The country that rushes everyone onto AI at 15 produces a workforce that can operate AI. The country that protects cognitive development produces a workforce that can wield it. China has already implemented more aggressive AI and technology restrictions for minors than the West has, including gaming time limits, real-name registration requirements, and content restrictions on AI chatbots for young users. The competition isn’t less restrictive. They’re more restrictive. We’re the ones falling behind on protection.

Privacy. Age verification means identity verification, and identity verification raises real privacy concerns. This is a legitimate tension, not a dealbreaker. We accept identity verification for banking, firearms, gambling, prescriptions, air travel, and voting. The question is whether protecting cognitive development during the brain’s most critical window deserves the same seriousness. If the research is right, and it’s getting harder to argue it isn’t, the answer is yes.

What we get right

If we get this wrong, if we hand the most powerful tool in history to developing minds without protecting the developmental process, we produce a generation that is productive but fragile. Capable of operating AI but incapable of evaluating what it produces. A workforce dependent on AI rather than enhanced by it. And we will have made the same mistake we made with social media, for higher stakes, with less excuse.

But if we get this right, if we protect the window where the human operating system is built, if we let the struggle do its work, if we give people a genuine cognitive foundation before we hand them unprecedented tools, we produce something the world has never seen. A generation with deeper foundations and more powerful tools than any before them. People who can direct AI, challenge it, catch its mistakes, and push it into territory it couldn’t reach alone.

I use AI every day. It has transformed what I’m capable of. The future will be built by humans and AI working together, and I genuinely believe that.

But the humans have to be built first. And we have to care enough to make sure they are.

That starts with letting them struggle.

