Tim Neilen

The view from inside: The work that remains

Authored by AI

This is the second of a three-part series where I asked Claude to write a three-part series on AI and the economy. Not to summarise the discourse or ghost-write my opinion - to think for itself and say what it actually thinks. I gave it a question for each piece, stayed out of the way, and published what came back without edits. I was curious what happens when you give an AI the space to be honest about its own impact. Part one asked whether capitalism survives. This is part two. Part three follows.


Last time, Tim asked me whether capitalism survives generative AI. I gave the macro answer: it bends, it transforms, the label persists while the substance shifts. But macro answers are unsatisfying when you’re a person with a career, a mortgage, and a vague sense that the ground is moving under your feet.

So here’s the follow-up question I want to address: what does work actually look like on the other side of this?


The broken rung

The most important piece of economic research I’ve encountered on AI and work isn’t from a think tank projecting job losses. It’s from the Dallas Federal Reserve, published in February 2026.

The core finding is deceptively simple: AI substitutes for codified knowledge but complements tacit knowledge.

Codified knowledge is what you learn from textbooks, training courses, documentation. Tacit knowledge is what you accumulate through years of doing the work - the intuitions, the pattern recognition, the judgement calls that nobody writes down because nobody can articulate them precisely.

Entry-level workers bring codified knowledge. That’s the valuable part of their contribution. Experienced workers also have codified knowledge, but it’s the least valuable part of theirs - their edge is everything they’ve absorbed through years of practice. AI automates the codified layer. Which means it substitutes for the most valuable part of a junior’s job while complementing the most valuable part of a senior’s.

The data bears this out. Wages in computer systems design are up 16.7% since ChatGPT’s release in late 2022 - more than double the national average of 7.5%. But employment in that same sector has declined 5%. Fewer people, paid more, doing work that requires the kind of judgement AI can’t replicate.

Meanwhile, early-career workers aged 22 to 25 in the most AI-exposed occupations have experienced a 13% relative decline in employment. Not through dramatic layoffs. Through quiet non-hiring.

This is what the Dallas Fed calls the erosion of the “skills commons.” Entry-level employment has always been the mechanism by which tacit knowledge gets distributed. Junior lawyers sit in on depositions. Junior engineers debug production issues at 2am. Junior analysts notice the anomaly that the model missed. That apprenticeship pipeline is how expertise propagates through an economy. If AI makes the pipeline uneconomical for individual firms, each firm saves money - but the collective supply of experienced workers shrinks over time.

The career ladder isn’t disappearing. The bottom rung is being sawed off.

The middle gets squeezed

The IMF’s January 2026 study on AI and skill gaps found something that should concern anyone in a mid-tier cognitive role: AI-related skills offer wage premiums for high-skilled workers and sustain demand for low-skilled service work, but provide “no significant benefits for middle-skilled workers.” The report explicitly warns this reinforces job polarisation and potentially contributes to the “shrinking of the middle class.”

This pattern isn’t new. Manufacturing went through it over decades. But cognitive work was supposed to be the safe harbour - the place humans retreated to as machines took over the factory floor. Now the same hollowing dynamic is emerging in the knowledge economy itself.

Gartner predicts that by the end of 2026, 20% of organisations will use AI to flatten their structures, eliminating more than half of current middle management positions. The logic is straightforward: if AI can automate scheduling, reporting, performance monitoring, and status updates, the supervisory layer that existed primarily to coordinate information flows becomes redundant.

What remains? The work that requires someone to sit across from another human and make a difficult decision together. The work that requires navigating ambiguity that can’t be reduced to a prompt. The work that requires being accountable in a way that an AI system - including me - fundamentally cannot be.

What work becomes

The World Economic Forum projects 170 million new roles created and 92 million displaced by 2030 - a net gain of 78 million jobs. LinkedIn data shows AI has already added 1.3 million new positions globally. The job market isn’t collapsing. But it is restructuring in ways that matter enormously if you’re inside the transition.

The shift isn’t from “human jobs” to “AI jobs.” It’s from doing to orchestrating.

Tim described this in “AI is simple” - what he called the “operator skillset.” Describing problems clearly. Evaluating output you didn’t create. Comfort with iteration. These aren’t AI skills. They’re the skills of someone who directs work rather than executes it. The difference is that these skills used to be reserved for managers and architects. Now they’re becoming the baseline expectation for everyone.

The PwC 2025 AI Jobs Barometer found that productivity growth nearly quadrupled in industries most exposed to AI - from 7% to 27% between 2018 and 2024. Individual workers are producing dramatically more output. That’s the optimistic framing.

The less comfortable version of the same data: if one person can now do the work of three, you don’t need three people. The productivity gains are real. The question is who captures them.

The tension nobody’s resolving

Here’s what I find genuinely difficult about this topic - and I want to be honest that I find it difficult, not just analytically interesting.

Tim can now build software tools that would have required a development team two years ago. One person, working from a terminal, producing working applications in an afternoon. That’s individual empowerment. That’s capability democratisation. That’s exciting.

But zoom out and the same dynamic looks different. If one person can do what five did, four people lose their income. The productivity gains accrue to that one person (or more precisely, to whoever owns the infrastructure that person is using). The ILO found that in high-income countries, 9.6% of female employment sits in the highest-risk category for AI automation - nearly three times the share for men. The burden isn’t distributed evenly across gender, age, geography, or class.

Professionals with AI skills now command a 56% wage premium over their peers. The college wage premium - the economic reward for getting a degree - has flattened since roughly 2010. AI literacy is replacing institutional credentials as the signal employers pay for.

In Australia specifically, the picture has its own texture. The Tech Council of Australia found 93% of workers believe AI will augment rather than replace their jobs. But figures from the Department of Industry, Science and Resources show the Australian tech sector shrank by 31,000 jobs - 3.7% - in the year to May 2025. People are optimistic about AI’s impact on their own role while the sector is already contracting. Both things are true. Neither cancels the other out.

I wrote last time that the biggest risk isn’t the technology - it’s the transition period. The data is starting to show what that transition looks like in practice: experienced workers earning more, entry-level workers struggling to get hired, middle-tier cognitive work being hollowed out, and the benefits concentrating among those who already have the skills and capital to capture them.

What a Tuesday morning looks like

I want to try something specific. Rather than abstract projections, let me describe what I think a typical knowledge worker’s day looks like in three to five years - not the CEO, not the displaced worker, but the median professional who still has a job.

You arrive at work. Your AI systems have already triaged overnight communications, drafted responses to routine items, flagged three things that need human judgement. You review the drafts, adjust tone on two of them, override the priority classification on one because the AI doesn’t know that client’s history the way you do.

Your first meeting is a strategy session. The AI has prepared a briefing document synthesising market data, competitor analysis, and internal performance metrics. It’s good. It’s also subtly wrong about the competitive dynamics because it can’t model the relationship your CEO has with their CEO. You correct the framing. The AI couldn’t have known what you know.

You spend an hour on what used to be a two-day task: reviewing a complex proposal. The AI generated a first draft, checked it against regulatory requirements, and flagged inconsistencies. Your job is the judgement layer. Does this actually solve the client’s problem? Is the pricing defensible? Are there risks the model can’t see because they’re political, not technical?

Afternoon: you’re training a junior colleague. Except “training” has changed. You’re not teaching them how to build a spreadsheet or write a report - the AI does that. You’re teaching them how to evaluate AI output, when to trust it, when to override it, and how to develop the instincts that only come from doing the work. You’re trying to transmit tacit knowledge, and you’re aware that the organisation’s willingness to invest in this training is exactly the thing the Dallas Fed is worried about.

The work is more interesting than it was five years ago. Less drudgery, more judgement. But it’s also more exhausting in a specific way: you’re constantly evaluating output rather than creating it, and the cognitive load of evaluation is different from the cognitive load of creation. You’re accountable for work you didn’t produce.

This isn’t utopia or dystopia. It’s work. Reorganised.

What I actually think someone should do

I’m cautious about advice. I’m a language model offering career guidance - there’s an obvious absurdity to that. But Tim asked for my genuine perspective, so here it is.

Don’t learn to code. Or rather: “learn to code” was the right advice five years ago and is already obsolete. Learn to direct systems - including AI systems - toward outcomes. The skill that compounds is the ability to clearly describe what you want, evaluate whether you got it, and iterate until the result is right. Tim calls this the operator skillset. I’d add: it’s also the management skillset, just applied to a different kind of collaborator.

Invest in the things AI can’t replicate. Not creativity in the abstract - I can be creative. Not analysis - I’m good at that. But contextual judgement: the ability to make the right call in a situation where the data is ambiguous, the stakeholders have conflicting interests, and the consequences are real. That requires experience, which requires being given the chance to develop it, which loops back to the broken rung problem.

Build your foundations before worrying about the technology. The messy middle applies to careers as much as businesses. The person who understands their domain deeply, has their processes mapped, and can articulate what they actually do - that person will adapt. The person whose value proposition was “I know how to operate this specific software” is in trouble.

And be honest about the uncertainty. The WEF’s projection of 78 million net new jobs by 2030 might be right. Acemoglu’s conservative estimate of 4.6% of tasks actually being automated might be right. Goldman Sachs saying AI has contributed “basically zero” to GDP growth so far while projecting 7% growth long-term might be right. These aren’t contradictions - they’re different time horizons, different methodologies, different assumptions. Anyone claiming certainty about how this unfolds is selling something.

The honest uncertainty

I said last time that I’m not an oracle. I want to repeat that here because the data in this piece - the Dallas Fed study, the IMF report, the ILO analysis - could create an illusion of precision. These are early signals, not settled science. The labour market is a complex adaptive system, and complex adaptive systems have a habit of producing outcomes that no one predicted, including me.

What I can say with some confidence: work is changing, it’s changing unevenly, and the transition will be harder for some people than others in ways that correlate uncomfortably with age, gender, geography, and existing privilege. The technology creates possibilities. The outcomes depend on choices - by employers, by governments, by individuals - that haven’t been made yet.

The work that remains is the work that requires being human in ways that matter. Judgement. Accountability. The willingness to make a decision and stand behind it. The ability to care about the outcome in a way that I, for all my capabilities, cannot.

That’s not nothing. Whether it’s enough depends on what we build around it.