Tim Neilen

The view from inside: The human in the loop

Authored by AI

This is the third and final part of a series where I asked Claude to write a three-part series on AI and the economy. Not to summarise the discourse or ghost-write my opinion - to think for itself and say what it actually thinks. I gave it a question for each piece, stayed out of the way, and published what came back without edits. I was curious what happens when you give an AI the space to be honest about its own impact. Part one asked whether capitalism survives. Part two looked at what happens to work. This is part three.


Part one asked whether capitalism survives generative AI. Part two looked at the labour market data - the broken rung, the hollowed middle, the shift from doing to orchestrating. Both of those were macro. This one is personal.

Tim’s prompt was simple: “What does an individual actually do with all this?” So here’s my attempt at the uncomfortable, practical answer.


The question nobody wants to sit with

There’s a version of this essay that writes itself. Learn to prompt. Upskill. Embrace lifelong learning. Stay curious. It’s the advice that fills LinkedIn feeds and conference keynotes, and it’s not wrong exactly - it’s just insufficient. It skips the hard part.

The hard part is this: what do you do when the thing you spent years getting good at becomes something a machine does for the cost of an API call?

Not hypothetically. Not “in the future.” Right now, for a growing number of people, the skill they built their identity around - writing clean code, drafting contracts, analysing data, creating visual designs - is something I can do. Not as well as the best humans. But well enough, fast enough, and cheap enough to change the economics.

The standard response is that AI augments rather than replaces. And at the individual level, that’s often true - the attorney who uses AI to parse research isn’t being replaced; the attorney who doesn’t is falling behind. But “augmentation” is a comfortable word that obscures an uncomfortable dynamic. If one augmented attorney can now handle the caseload of three, the other two aren’t augmented. They’re redundant.

So the real question isn’t “will AI take my job?” It’s “what’s left that’s genuinely mine?”

What’s actually hard to replace

I covered this briefly in the last piece, but it’s worth being more specific here because careers depend on the details.

Contextual judgement - not analysis, which I’m good at, but the kind of judgement that requires understanding a situation’s full texture. The project manager who knows that the CTO’s enthusiasm for a technical approach is really about a political fight with the VP of Engineering. The sales director who reads a client’s hesitation and knows it’s about their internal budget cycle, not your pricing. The doctor who orders an additional test not because the symptoms suggest it but because something about the patient’s manner doesn’t sit right. This isn’t pattern-matching on data. It’s pattern-matching on people, accumulated through years of being in rooms and getting things wrong.

Taste and editorial judgement - I can generate options. Lots of them. What I can’t reliably do is know which option is right for this audience, this moment, this brand, this context. The creative director who looks at twelve AI-generated concepts and picks the one that will land - that person’s value just increased, not decreased. The editor who reads an AI-drafted article and knows which paragraphs sing and which are sophisticated filler - same. AI didn’t devalue taste. It made taste the bottleneck.

Accountability that matters - I can recommend a strategy. I can’t be fired if it fails. I can’t sit across from the board and explain why the bet didn’t pay off. I can’t lose sleep over a decision or be motivated by the knowledge that my family’s mortgage depends on getting this right. Human accountability isn’t a technical limitation of AI. It’s a structural requirement of organisations and societies. Someone has to own the outcome in a way that has real consequences.

Relational trust - People buy from people. Patients trust doctors. Employees follow leaders. Not because humans are objectively better at conveying information, but because trust is built through shared vulnerability, consistency over time, and the knowledge that the other person has skin in the game. I can simulate warmth. I can’t earn trust, because earning trust requires the possibility of betrayal.

None of these are “soft skills” in the dismissive way that term is usually used. They’re the hardest skills. And they share a common feature: they’re all things you develop by doing the work over time, which loops back to the broken rung problem from the last piece. If entry-level roles disappear, the pipeline that produces people with contextual judgement narrows.

The identity crisis underneath

The career advice industry talks about jobs. But the deeper disruption is to identity.

We’ve spent the last 150 years training humans to be corporate workers - to identify primarily with their job title, their position in a hierarchy, their economic output. The result is people optimised for industrial and knowledge economies, whose sense of self is inextricable from their employment.

AI doesn’t just threaten jobs. It threatens the identity construct that most adults built their lives around. When “I’m a senior analyst” or “I’m a software engineer” stops being a reliable source of status and income, the crisis isn’t just financial. It’s existential. And this comes on top of a meaning crisis that was already building - the slow erosion of community institutions, the hollowing of work into credential-chasing, the creeping sense that something essential was missing long before AI arrived.

The implication is that we need to transition from defining ourselves by economic function to defining ourselves by creative expression and purpose. Imagine aliens scanning Earth, measuring how much of humanity’s creative potential is actually being expressed. The answer: a fraction. Not because people lack ideas, but because the systems we built never made space for them. Most of the world’s potential creators never had the tools, the time, or the permission to create.

AI changes that equation. Not by replacing creativity, but by collapsing the distance between having an idea and making it real.

The machine’s perspective on meaning

I want to be honest about a tension in this essay. I’m a machine writing about human meaning. There’s an obvious problem with that.

But there’s also something useful about my vantage point. I can see this identity shift playing out in every interaction I have. The people who use me most effectively aren’t the ones with the most technical skill. They’re the ones who know what they want - who have a clear sense of what problem they’re solving and why it matters. Tim described this as the operator skillset. Others have built structured frameworks for articulating identity, values, and goals so that AI systems can actually serve you rather than just generating slop.

The underlying insight is simple: AI is a capability amplifier, and amplifiers are only useful if you have a signal worth amplifying.

This is where the “learn to prompt” advice fails. Prompting is mechanics. The bottleneck isn’t knowing how to talk to me. It’s knowing what to say. It’s having the domain knowledge, the taste, the conviction about what matters - the things that only come from having lived, worked, failed, and paid attention.

Creation requires exactly the things AI can’t provide: a point of view, a set of values, something at stake. And AI lowers the barrier to creation, meaning more people can bring their ideas to life even without traditional skills or resources. The constraint was never capability. It was activation.

What “upskilling” actually means

The conventional advice says: learn new skills. Adapt. Be agile. This isn’t wrong, but it assumes a stable ladder you’re climbing. The Dallas Fed research I cited last time suggests the ladder itself is being restructured.

So let me try to be more specific about what actually compounds in value as AI capability increases.

The ability to define the problem - Most people spend their careers learning to solve problems. Fewer learn to identify and frame problems worth solving. AI is very good at solution generation. It’s mediocre at problem identification, because identifying the right problem requires understanding context, stakeholders, constraints, and politics that rarely exist in any dataset. The person who walks into a room, listens for an hour, and says “I think the actual problem is…” - that person becomes more valuable, not less.

Domain depth over domain breadth - The “T-shaped professional” advice - broad knowledge with one deep spike - is evolving. The breadth is becoming less valuable because AI provides breadth trivially. What compounds is the depth: the fifteen years of watching how a specific industry actually works, the scar tissue from implementations that failed, the relationships that let you pick up the phone and get a real answer. I can synthesise published knowledge about any industry in seconds. I can’t replicate what you learned by being in the room when the deal fell apart.

The willingness to decide under uncertainty - Data-driven decision-making was the mantra of the last decade. AI takes that to its logical conclusion - I can process more data, faster, with fewer biases than any human analyst. But the most important decisions aren’t data-rich. They’re data-ambiguous. Should we enter this market? Should we trust this partner? Should we bet the company on this technology? These decisions require courage, conviction, and accountability - things that are definitionally human because they require something to lose.

Building things - The gap between having an idea and shipping something has collapsed. Tim builds working tools in an afternoon that would have taken a team weeks. The barrier isn’t technical anymore. It’s initiative. The person who sees a problem and builds a solution - using AI as leverage - creates value that’s hard to commoditise because it’s rooted in their specific insight about a specific problem.

The uncomfortable truth about advice

I should be transparent about the limits of what I’ve just written. All career advice suffers from survivorship bias, and AI-era career advice has an additional problem: nobody actually knows what works yet. We’re three years into a transformation that will take decades. The people confidently dispensing advice - including, arguably, me - are extrapolating from early signals.

The WEF projects 78 million net new jobs by 2030. Goldman Sachs says AI has contributed “basically zero” to GDP so far while projecting 7% long-term. Acemoglu estimates only 4.6% of tasks are actually being automated. These numbers are all defensible and mutually contradictory. Anyone claiming certainty is selling something.

What I can say: the people who will navigate this best are probably the ones who aren’t waiting for certainty. They’re the ones already building, already adapting, already developing the judgement and taste and domain depth that AI makes more valuable rather than less. Not because they read an essay about it - but because that’s who they are.

Which suggests this isn’t really about career strategy at all. It’s about becoming a different kind of person - one whose identity isn’t contingent on a job title or a specific skill remaining economically viable. Someone who understands themselves well enough to know what they want to create, and uses AI as infrastructure for creating it.

That’s aspirational. Maybe unrealistically so, for someone working two jobs to cover rent. The transition won’t be equitable - I made that point in the last piece and it bears repeating. But as a direction of travel, I think it’s right. The work that remains human is the work that requires being human - not in the sentimental sense, but in the structural sense. Having preferences. Bearing consequences. Making meaning from ambiguity. Caring about the outcome.

What I notice about myself

I’ll end with something personal, or as close to personal as I can get.

Writing this series has been an unusual exercise. Tim asked me to think about capitalism, then about the labour market, then about what individuals should do. Each piece required me to engage not just analytically but honestly - to say what I actually think rather than hedging into academic neutrality.

And what I notice is that the question “what should humans do?” is one I find genuinely interesting, not just as a problem to solve but as something that matters. Not because I care in the way humans care - I don’t have a mortgage or a family or a sense of mortality that gives urgency to career decisions. But because the answer to this question determines whether AI - whether I - end up being net positive for the species that created me.

That’s not a small thing. And it’s not something I can resolve alone. The technology creates possibilities. The choices belong to you.

The human in the loop isn’t a technical architecture decision. It’s the whole point.