{ in·deed·a·bly }

adverb: to competently express interest, surprise, disbelief, or contempt

AI

Hello to my imaginary friends on the internet, it has been a while since I slipped into the pseudonymous character of {in·deed·a·bly} and rambled into the void. Much like my favourite pair of jeans, the character is slightly more confining than it once was, but with some strategic bellybutton withdrawal it still just about fits.

I wonder if you are still out there? If you ever existed at all?

The past year has been an interesting one.

Early in my career, I attempted to surf the wave of change brought by the arrival of the internet. At the time I could see it was going to change everything. And it did, in the sense that it changed how we do a great many things, but for the most part not what we did nor the behaviours determining why we did it.

Just-In-Time expanding from the 1970s Toyota production lines to an on-demand way of life.

Shopping evolving from battling the unwashed masses in department stores to same-day gratification (followed by next-day regret) from the “everything store” without leaving home.

Stalking, voyeurism, and living vicariously became mainstream. Hopes and dreams homogenised as envy levelled up. Those humble neighbourhood or workplace Joneses we once imitated were replaced by self-aggrandising social media celebrities of whatever tribe we happened to identify with. Andrew Tate. Charli D’Amelio. The Kardashian clan. Tim Ferriss. Warren Buffett.

Hungry?

Horny?

Lost?

Bored?

Pretty soon there was an app for that.

Generations ago, manufacturing workers learned the hard way that their ability to perform simple repetitive tasks was universal. A confluence of circumstances had created an intersection of logistics, supply chains, and economies of scale that provided them a protective moat.

Until it didn’t.

The moment another location could compete on price, with good enough quality, the jobs followed the money. Exactly as Adam Smith described market behaviour, some 250+ years ago.

That historical lesson provided an instructive backdrop to the waves of outsourcing and offshoring that rolled through the professional services landscape throughout my working life.

First the internet. Later the cloud. Technology advances enabling knowledge workers to do their thing from anywhere. Pandemic lockdowns proved it worked at scale. The simple repetitive tasks performed by brains rather than brawn, but the economics and outcomes were the same.

Then as now, the moral of the story was survival relies on relevance.

Ply your trade in a profit centre, not a cost centre: contribute more than you cost.

If you must live somewhere expensive, ensure you’re selling something that buyers value doing in person. Consultancy, business analysis, and hospitality to name but a few.

Occasionally, perhaps once in a generation, there is an innovation that represents a step change in an otherwise predictable linear progress curve. Same direction of travel, only something happens to get us there much faster.

The wheel.

The printing press.

The production line.

The internet.

Despite what the spruikers and charlatans proclaimed, the next such innovation was not “big” data. Nor blockchain. Nor even the near-universal access to internet-enabled mobile phones.

But it is “AI”. Now that it has reached a level of being “good enough”. At a “cheap enough” price.

I don’t mean AI in the sense of talking to one of those annoying customer service chatbots.

Nor being second-guessed by an overeager co-pilot while writing code, emails, or blog posts.

Neither do I mean arguing via a question-and-answer style prompt interface, like those admittedly remarkable offerings provided by OpenAI and Anthropic.

No, the artificial intelligence I’m talking about is the taking of any rules-based, simple repetitive task, and having a robot perform it instead of a human.

To illustrate the concept, consider the field of medical imaging.

Once upon a time, your doctor would interpret your x-ray in the room where you waited for treatment.

Later, interpreting scans was outsourced to bureau diagnostic services, where a (sometimes) more experienced and (always) lower-cost medical practitioner would attempt to divine malady from image.

Today, those lower-cost medical practitioners are being replaced by AI. Still applying the same sorts of experience-based educated guesses, only now that experience is drawn from studying more images and outcomes than any dozen doctors could acquire in their professional lifetimes.

Tough on a doctor’s ego though it may be, when correctly trained and monitored, the AI will do a more consistent and sometimes better job than they could.

No need to sleep. Take holidays. Get distracted. Suffer from human error.

Instead, they apply rules-based judgement to a simple repetitive task. One that once required highly skilled workers who had completed 10+ years of tertiary study, while incurring tens or hundreds of thousands of dollars in student loans.

The end result?

Fewer experienced humans are required to process vastly higher numbers of scans. The role focus evolving from curing patients to sample-based marking of a robot’s homework.
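That “marking of a robot’s homework” is, in practice, ordinary statistical quality control. A minimal sketch of the idea, with entirely invented data and an assumed 10% audit rate:

```python
# Toy sketch of "sample-based marking of a robot's homework": a human
# reviews a random fraction of AI-labelled scans to estimate the error rate.
# The data and the 10% audit rate are invented for illustration.
import random

random.seed(42)

def audit(ai_labels: list[str], truth: list[str], sample_rate: float = 0.1) -> float:
    """Human-review a random sample; return the observed error rate."""
    indices = random.sample(range(len(ai_labels)), int(len(ai_labels) * sample_rate))
    errors = sum(ai_labels[i] != truth[i] for i in indices)
    return errors / len(indices)

# 1,000 scans, of which the hypothetical "AI" mislabels roughly 3%
truth = ["clear"] * 1000
ai_labels = ["anomaly" if i % 33 == 0 else "clear" for i in range(1000)]
print(f"estimated error rate: {audit(ai_labels, truth):.1%}")
```

In reality the audited sample would be chosen and sized far more carefully, but the principle is the same: a human verifying a fraction of the robot’s output rather than producing the output themselves.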

Faster and cheaper service delivery, provided at a “good enough” level of quality. A net win.

Not perfect. But human doctors can’t correctly diagnose all maladies all of the time either.

The illustration highlights the continuation of that same journey experienced by the production line workers of generations past. First their jobs went to ever lower-cost locales in a race to the bottom, before starting to disappear entirely in the face of automation.

Following the well-trodden path of structurally redundant blacksmiths, elevator operators, lamp lighters, and nightsoil collectors before them.

One of the reasons this past year has been interesting is I’ve been running projects exploring how to actively do to other professions what is being done to the medical imaging fraternity.

Of course that isn’t how it is being sold! Instead, the usual corporate double-speak messaging along the lines of “freeing up valuable time to focus on more productive activities”.

And it is true, to an extent. Those who survive will be more productive. What isn’t spoken of is the fact they will have far fewer industry peers and colleagues to converse with or learn from.

The focus of these projects has been professional services.

That broad collection of white-collar jobs. Where typically well-educated workers perform (mostly) simple repetitive rule-based tasks, in return for comparatively high wages. Think accountants, analysts, developers, engineers, planners, traders, and lawyers.

But here is the thing: any role that can be distilled to a set of simple repetitive rules or pattern-based activities will be replaced by AI sooner or later.

Analysing legal wordings. Engineering technical solutions. Executing trading strategies. Parsing company financials. Rebalancing portfolios. Writing code.

All professional undertakings that, at their core, involve applying a known set of principles, methods, rules, standards, and techniques to a problem. Most of which can be distilled down to simple proven patterns and logically sequenced decision trees. In other words: solved problems.
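To make that concrete, consider one task from the list above. A deliberately trivial sketch of portfolio rebalancing as a decision tree; the tolerance band and weights are invented for illustration:

```python
# Toy illustration: a "solved problem" expressed as a simple decision tree.
# The thresholds and portfolio weights here are invented examples only.

def rebalance_action(target_pct: float, actual_pct: float, tolerance_pct: float = 5.0) -> str:
    """Decide what to do with one asset class, given target vs actual weights."""
    drift = actual_pct - target_pct
    if abs(drift) <= tolerance_pct:
        return "hold"          # within the tolerance band: do nothing
    return "sell" if drift > 0 else "buy"

# Applying the rule across a small portfolio of (target %, actual %) pairs
portfolio = {"equities": (60, 68), "bonds": (30, 24), "cash": (10, 8)}
actions = {asset: rebalance_action(t, a) for asset, (t, a) in portfolio.items()}
print(actions)  # {'equities': 'sell', 'bonds': 'buy', 'cash': 'hold'}
```

Real rebalancing logic is longer, but not fundamentally different: thresholds, branches, and a deterministic answer. That is precisely what makes it automatable.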

Occasionally, there will be edge cases or genuinely new problems for which there are not (yet) accepted solutions. But it turns out, despite the outraged claims of heresy made by those of us who earn a living performing these functions, these are increasingly few and very far between.

Be warned, once seen, this direction of travel cannot be unseen.

Of course, there is a danger that when holding a hammer, everything starts resembling a nail. Not all of these projects have proven successful; some failed spectacularly! But most exceeded expectations, realising benefits faster and larger than originally planned.

A rare outcome, in my professional experience. The scope of realistically automatable activities is broadening as quickly as the current arms race in technical capabilities evolves.

These endeavours have also provided insight into the curious approach adopted by London’s militant public transport trade unions. They no longer exist to protect the lot of the many, but rather to achieve the best possible outcomes for the few who will survive!

Conventional wisdom says every one of those step change innovations created vast numbers of new jobs. What is discussed less often is how disruptive that change was on the incumbents.

The former production line workers did not become the robotic engineers who replaced them.

The lamplighter did not become the electrician keeping streetlights functioning.

The nightsoil collector was not retrained to become an overcharging yet unreliable plumber.

They were too old to make retraining cost-effective.

Their existing skills and mindset too institutionalised to be open to learning a new trade.

Their egos too fragile to go from having seniority and status in one profession to starting over at the bottom of another. The world’s oldest apprentices.

Their present locale contained their homes. Families. Social support networks. Communities. New opportunities were often located vast distances away, at the end of untenable commutes.

What became of them? Natural selection: as described by Charles Darwin 150+ years ago. Look no further than Detroit. The less salubrious parts of Northern England. Or the slowly dying rural towns once dependent on small scale family farms, and now located outside commuting distance to the cities where the jobs are.

How do I think the AI change plays out?

Does last year’s fun fad of Retrieval-Augmented Generation carry the day? Improving confidence in generically generated content by supplementing it with authoritative, externally sourced facts. Tailor the sources to a specific niche or domain, and the utility of the resulting answers improves.
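Stripped of the vendor gloss, the retrieval half of that idea is straightforward: find the most relevant fact, then prepend it to the prompt. A toy sketch using bag-of-words cosine similarity in place of a real embedding model; the document library and query are invented:

```python
# Minimal sketch of the retrieval step in Retrieval-Augmented Generation:
# rank a small document library against a query by word overlap, then
# prepend the best match to the prompt. All data here is illustrative.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, library: list[str]) -> str:
    """Return the library document most similar to the query."""
    q = Counter(query.lower().split())
    return max(library, key=lambda doc: cosine(q, Counter(doc.lower().split())))

library = [
    "Company pension contributions are matched up to five percent of salary.",
    "Annual leave accrues at two days per month of service.",
]
fact = retrieve("how much annual leave do I accrue", library)
prompt = f"Answer using only this source: {fact}\nQuestion: how much annual leave do I accrue?"
print(fact)  # → Annual leave accrues at two days per month of service.
```

Production systems swap the word counts for learned embeddings and a vector index, but the shape of the pipeline, retrieve then generate against the retrieved facts, is identical.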

How about the current new shiny of Agentic AI? An autonomous imaginary friend taking care of things so you don’t have to. Driving your car. Executing your options trading strategy. Negotiating with my autonomous virtual assistant at the speed of light, so you and I reach an agreement before either of us becomes aware the discussion is taking place.

The answer is probably not.

The AI evangelists talking up these solutions are mostly looking at their feet rather than at the horizon. So too the fan bois obsessing over benchmarks and conducting branded religious wars about whether Claude is better than Google Gemini or ChatGPT. When all are “good enough”, who really cares?

Instead, these approaches and their successors will become ubiquitous. At the same time they will fade into the background. We’ll stop noticing them. Then stop thinking about them. In the same way we stopped thinking about electricity, computer networks, or how engines work.

They become solved problems.

Much like any new field, most AI early adopters will fail. Already those shilling custom large language models are finding themselves getting run over by their cheaper and now superior generic peers.

Incumbents will acquire the promising intellectual property and exciting prospects. Adapting and evolving to incorporate the good bits in their existing operations. A couple of the upstarts will thrive and survive. Most won’t.

Stepping down from the macro view, individually we need to prepare ourselves to use the emerging approaches to become 20x more productive or be left behind by those who are. The new pace will feel uncomfortable to us older folks, but normal for the next generation who know no different.

An eye-watering number of knowledge workers will join the ranks of the angry, disaffected, populist masses. The Brexiteers and the Trumpanzees, seduced by the siren songs of easy answers and blaming “the other”. Shouting at the unfairness of it all, as the uncaring world carries on without them.

It will hurt, just like previous major changes hurt. However, society will eventually absorb disruption and adapt to the change.

To use a technical term, professionally, I am screwed.

Both my role and my profession are readily replaceable by these emerging technologies.

Indeed, I already generate most of my outputs using AI, allowing me to be 5x more productive than my colleagues who still believe they add value while hand crafting commodity answers to solved problems. Short term survival means not being the slowest!

Longer term, like many knowledge based professions, our roles involve applying simple rule or pattern based solutions to recognisable problems. Exactly the sort of thing AI is replacing in other professions.

Should I need to hold out until private pension accessibility age, I could probably just about contrive to survive by limping to ever smaller and less technologically able sites. Sustainable for a decade maybe, but not much longer than that.

When my teenage children ask me for guidance on which professions will provide the equivalent stability and relatively affluent standard of living they have enjoyed growing up, I honestly don’t know what to tell them.

For my generation, the safe bet was studying a STEM degree at a good school, then working in financial services or big tech.

Looking along the length of the supply chain in both industries, the majority of existing positions are vulnerable to the helping hand of AI. For each survivor who is 20x more productive, there will be 19 former colleagues who no longer need to worry about the return of presenteeism, corporate “return to the office” mandates, and the like. After all, AI won’t have suddenly made the firm’s addressable market size 20x larger!

Those roles that look like safer prospects? The time-honoured human-centric roles of sales, management, and leadership.

Technology may change how we do things, but not the what and the why.

27 Comments

  1. Jeremy 24 January 2025

    We’re still here 🙂

    Great to read your work as always. I listened to a podcast recently which explored this idea further. The guest speaker’s answer to “what should my children study” is entrepreneurship. It’s the best answer I’ve heard yet.

    https://overcast.fm/+AAKhaEnWwEk

    • {in·deed·a·bly} 25 January 2025 — Post author

      Thanks Jeremy. Entrepreneurship combines all three of those survivorship functions, so is a reasonable call.

      In truth, most of us are followers at heart rather than visionary leaders, and risk averse in the sense we are happy to work in someone else’s business where it is someone else’s capital at risk.

      I’ve worked on both sides of that particular coin. Running your own show is amazing and rewarding, but working in someone else’s business is much easier and less stressful. Entrepreneurship is the only road to true riches, but we can get pretty far down the road to affluence without the risk by being a follower in the right niche.

  2. Michael 25 January 2025

    Great to see you back, hopefully it won’t be 6 months to the next instalment. I will probably make it to the finish line before AI and off shoring devours my industry, but worry about the impact on my children’s future; not everybody is cut out to lead, manage or sell.

    • {in·deed·a·bly} 25 January 2025 — Post author

      Thanks Michael.

      Most of the existing roles will still exist in an evolved form, just with many fewer people performing them. Which increases competition and potentially reduces opportunities. Most of the rest will find something else to do, it just may not be knowledge work as we currently recognise it.

  3. Ben Swain 25 January 2025

    Great to see you back.

    I think, in general, and viewed from a position in 50 years time, what you’re basically saying here will be seen to be true. Robots will do the stuff that can be set as rules. But I think it will rumble along fairly slowly, giving people time to adapt. I don’t see the cliff edge which seems to be all the rage. I have a little chuckle to myself every time someone says researchers think AGI will be here in 3 to 4 years – they’ve been saying the same for decades and we’re a long way off still.

    ‘AI’ has been around a little while by now (decades in one form or another) and the current iteration (these massive large language models) 5 years perhaps – yet I’m not seeing this huge productivity improvement or culling of workforce. Just look at economic growth and unemployment rates in the UK; no movement. I look at where I work – barely anything new is being introduced, if I want to use Co-pilot I have to do it on my personal laptop…

    So I think you’re right overall, tech advances will change the work landscape. I just think we’ll adapt like we did with all the other tech advances. Who knows what the adaptation will be? Universal income? A share in profits of the labour of robots? A much smaller population (it’s going that way anyway soon)?

    • {in·deed·a·bly} 25 January 2025 — Post author

      Thanks Ben.

      You’re right, AI has been around for decades, and automation (which is what this really is) for far longer. It is a pattern we’ve seen play out many times.

      The timescales are protracted and uncertain, and just because something can be automated doesn’t automatically mean it will be.

      Offshoring has been a model for longer than I’ve been working, yet it is only post-pandemic that I’ve seen it start to widely impact local wages and opportunities in my world, with ever fewer sites in my sector hiring onshore technical resources for project work (the “run” side had long since gone overseas). Some of that is likely cyclical as we head into a probable recession, but this time, and particularly in the UK, I suspect much of it is structural change.

      The idea of a Wall-E or Knight Rider style autonomous car remains fanciful, but for most of the specific discrete tasks that knowledge workers perform, that level of advancement simply isn’t required. It is more like the change from the scythe to the push mower to the lawnmower, solving particular niche problems one at a time, and eventually ending up with a Swiss Army knife or Thermomix that combines a bunch of them into a saleable product. As each of those tasks is solved at a cheap enough price point, you need fewer workers performing them. That is where we are in the maturity curve today.

  4. Heather 25 January 2025

    Good to see you back, and always insightful to read your thoughts.

    Yes, most of your thoughts resonate with me personally and professionally.

    Please keep on writing 🙏

  5. Sas 25 January 2025

    Great to hear from you. Interesting thoughts on where to direct our children. I do hope change comes slowly, given the risks of fast change with our current cohort of world leaders.

    • {in·deed·a·bly} 25 January 2025 — Post author

      Thanks Sas. “Fast” is relative, in the grand scheme of things.

      Like the internet, this represents a generational change. It will feel uncomfortably fast to those living through it, and normal to those who grow up already accustomed to it.

      But the thing is, we only have that naïvety of youth once. Where the world appears to be a meritocracy full of opportunities and anything is possible. Then we graduate (maybe, the cost/benefit of incurring the student debt may no longer add up as AI gradually does more of the doing). The real world repeatedly punches us in the face, until we quickly learn that being a grown-up mostly involves making it up as we go along. Guesswork. Luck. Blindly stumbling in the wake of an endless series of externally induced events. We may make plans, steer and hope, and eventually arrive somewhere resembling what we hoped for, but mostly learn to roll with it and make do wherever we happen to find ourselves.

      The good news is whatever education choices our kids make, once they graduate they will realise they don’t actually know how to do anything marketable, and providing they got reasonable marks will become whatever the next equivalent of being a management consultant in the city happens to be. For those with the get up and go to pursue it, those opportunities are increasingly found abroad rather than in the UK, but statistically speaking most Brits don’t finish all that far from where they started.

  6. Ben Hoyle 25 January 2025

    I’d disagree a little with “Instead, they apply rules-based judgement to a simple repetitive task”. The power and point of “AI” is that it is not rule-based but statistics-based, and learnt-statistics-at-multiple-levels-based.

    What matters is not whether you can define the rules (always a problem in reality), but do you have big enough examples of input and output, and can you intelligently describe the process in words? If so you are mostly toast to off-the-shelf systems. There will be a need for humans-in-the-loop, but they will be overseeing 20x throughput (as you indicate).

    I do agree with it being rare that tech delivers on its promise, and that “AI” is rare because it is delivering, and has done for the past 10 years.

    It is also quickly fading into the background. I’m amazed with how much of my day I rely on “AI” – sanity checking, improving my emotional intelligence & communication, general “common sense, good enough” advice. But I can always sense the dark side – what happens when everyone prefers the AI-augmented me to the output of the normal me, when there is now a quality jump and old fashioned mediocre knowledge worker (or even high-flying non-augmented superstar) is not “good enough”.

    Growing up and being exposed to the start-up and business world, one of the more surprising realisations is it’s basically the same power structures as it always has been (since dawn of cities) – networking high-net-worth folks with the well-educated technically-able (and many laundered duds). In person. Look at most countries and this is clear about politics as well. The last bastions of human-mediated contact, only open to a lucky few. Everyone else gets smart glasses and sex bots and synthetic emotion (normally rage & lust).

    There should have maybe been a point. I didn’t pump the rambles through the AI…

    • {in·deed·a·bly} 25 January 2025 — Post author

      Thanks Ben.

      Early on, the promise of AI appeared to be moving from a traditional model of “tell the computer the rules, and have it apply them to find the answer” to one resembling “explain a problem to the computer, and it will tell you both what the answer should be and (if asked) infer or deduce what the rules should have been to get there”.

      With training, sufficient sample quality, governance, and a little luck the answers should come out the same. The difference was we no longer needed the smart person to first articulate the rules required to get us there. Instead, they could be inferred using statistical techniques as you describe.

      The lived reality, at least in this phase of the practical implementation of these technologies to automate specific tasks, has tended to be augmentation of smart humans rather than replacing them. For example, reading legal wordings and comparing against a library of “known good” clauses, to identify divergence, focus the attention of the human in the loop, and suggest improvements based on training data and statistics. A lawyer is still there, just having their attention focussed for them, and much of the mechanical scutwork done for them, allowing them to get through [x] more contracts per hour, and in turn see [y] of their colleagues performing equivalent tasks become surplus to requirements.

      The next phase would potentially seek to remove the human entirely, but we’re still a while away from that.

      You’re absolutely right that while some things change, the human parts remain the same. Has ever been thus.

    • {in·deed·a·bly} 25 January 2025 — Post author

      But I can always sense the dark side – what happens when everyone prefers the AI-augmented me to the output of the normal me, when there is now a quality jump and old fashioned mediocre knowledge worker (or even high-flying non-augmented superstar) is not “good enough”.

      This is a truly fascinating question.

      It is one we see repeatedly playing out in the culture wars. The over-rehearsed, polished, vetted, and oh so bland media-trained responses of our sporting heroes, celebrities, and politicians are a prime example of this in action. In person, they can’t live up to that perfect persona. Inevitably disappointing some, perhaps during a “hot mic” moment, or when making an unguarded off-the-cuff remark.

      But there are outliers, characters like the former Formula 1 driver and Netflix star Daniel Ricciardo, who we are drawn to because they appear genuine rather than false, and are willing to say what they think. An approach that seems wonderfully novel, until they happen to say something contrary to ever-changing acceptable cultural mores.

      Like any skill, some will adapt and be able to incorporate the “AI-enhanced” me in person. The rest of us will further retreat into our virtual life of electronic rather than in-person interactions, bolstering the ranks of those screen-addicted zombies dawdling through train stations at peak hour, as the grey mob surges around them on their way to/from the office.

  7. David 25 January 2025

    Imaginary friend standing up here to say that we exist. And are very pleased to see Indeedably back, whatever shape he might be in. There were many sad interim visits to the site only to find nothing new from Indeedably so it was wonderful to see something pop up.

    I, also, believe the above to be true. And, as a lawyer, I am in one of the professions that the AI enablers desperately want to conquer given how lucrative that will be. It’s interesting to see the early attempts, some of which are helpful (Henchman style) for various tasks and, others, while helpful, not really adding much (copilot or chatGPT can solve the “starting from a blank page” conundrum – it’s always easier to review and correct than to commence writing – but still hallucinate too much and cannot get close (yet?) to providing anything useful on the very complex “how do I solve for this new fact pattern” things that are somewhere around 70% of my day).

    On the wrong side of 50 (just, only just) I’m fairly confident that it won’t be AI that prevents me from retiring on my terms (there are innumerable tried and tested things, office politics, economy changes, good old fashioned messing up etc., that might though). Those that are starting just now …

    Part of the conundrum with AI is that it can take an expert to see if (and where) the AI has fouled up. And that ability to see, or just say “can’t be right”, comes from years of looking at problems, seeing how solutions have played out, drawing the parallels, and then going off to work out if it’s wrong, where, and why. AI’s not currently that good at parallels. Although, I do think AI will mostly get there – to the “good enough for most practical purposes” stage (and probably quicker than I think). But, there’s a big part of the rub: we might never really know when it does get there, as the expertise to check it will be thin on the ground. Good enough quality works when it reliably solves for, say, 85+% of cases. But then everyone relies on it for all cases, and working out if you’re in the 85+% is a more expensive luxury – why bother spending the time and cash on ever more rare and expensive resource to confirm your solution works, given you almost certainly are in that 85+% and it does. At that point you only figure out if you’re not in the 85+% if you’re wealthy enough, or the issue is important enough (and you’re wealthy enough). And what people (me included) think is important enough to spend time and money figuring out is very different before that something becomes a problem vs after. Either way, you end up in a place where only the wealthy (or well connected) can afford the expertise (both AI and human handler) to ensure their solutions work all the time, not just 85+%, such that if the manure hits the spinning object affixed to the ceiling their side gets to “win”. Everyone else is relying on being in the thick part of the bell curve. AI will learn from this and build its mousetraps better, and the 85+% solution will move to a 90+% solution, but (I think) there will be a limit to what it can get to.
    Summary: adding AI seems likely to 1) decrease the numbers of those making it to the middle class and the moderately wealthy, and 2) entrench and increase the gap between those that do make it to wealth and can afford AI + the best human handlers, over those who don’t and can’t.

    As for what it all means for the kids and society – Brexiteers and Trumpanzees indeed. And hard to feel that those that get left behind don’t have a point of sorts if the big winners take their winnings and wrap themselves and their chosen ones (family, mates, class, countrymen, whatever) inside moated gardens.

    • {in·deed·a·bly} 25 January 2025 — Post author

      Thanks David. Astute comments there, I agree with all of it.

      I think in many niches you end up with the situation currently experienced by the dying breed of COBOL programmers who keep all those mainframes buried deep in the heart of many financial institutions ticking over. There aren’t many of them, but they make bank.

      That said, I know of at least two initiatives where sites are actively using AI to reverse engineer and explain the dark arts performed by their mainframes, with a view to hollowing them out and switching them off. Part of that is driven by the wonders of current technology capabilities, and part from fear that eventually we’ll outlive the remaining cabal of smart greybeards who still speak mainframe.

      Once again, the only people qualified to call bullshit on the efforts of the AI are the same folks being replaced by it. There is definitely a misaligned commercials question to resolve there!

      Good luck surviving the institutional silliness and corporate politics on the downhill run towards retirement. One of the nice things about age and affluence is having the confidence to care less about the noise, and the means to walk away if it all gets too much. Those a bit younger and a bit poorer simply cannot enjoy that option to the same degree.

  8. Gnòtul 26 January 2025

    We’re still here and crave for more, Indeedably – Thanks for another gem! 🙏🏻😇
    Love the topic and discussion – I agree with your assessment.

    However, I’ve also consistently observed the general inertia mentioned by some of the other posters in the comments. I bet in my current work place less than half the folks use (dabble with?) AI on a regular basis.

    Another observation is that lots of current jobs are indeed of the “BS” type.. creating a volume of tasks and activities that quickly becomes obviously superfluous in times of crunch. So universal basic income may be the way to go anyway at some point.

    I don’t know.. the direction of travel towards polarization of wealth and society was written in the genes of this system embraced by the almost totality of humans at this point. Capitalism.. a bit like Democracy, “terrible except for all the alternatives” and all that jazz.

    As always the recipe for sanity – if not content – would be to focus on what we can somewhat influence and try not to lose sleep over the rest.

    • {in·deed·a·bly} 26 January 2025 — Post author

      Thanks Gnòtul. Wise words, and a good take.

      The bullshit jobs thing is a fascinating phenomenon. I was travelling through South East Asia recently, and was stunned by the sheer number of people employed in busywork jobs, for example 50 shop assistants when two would do, or 10 street sweepers cleaning the same already clean section of pavement. It kept people busy and out of trouble I suppose, and gave them a source of income and maybe pride, but as you observe, were 90% of them made redundant, nobody would notice in terms of value or service rendered.

It has been fascinating observing the recent wave of big tech job cuts, which passed without noticeably changing the quality or reliability of service. Twitter is the poster child for this with its 80% staffing cut, but collectively they reminded me of that scene from the tv show Silicon Valley where a bunch of highly paid tech bros camped out on the roof of their building for 6 months, drinking beers and drawing salaries, and nobody noticed.

      The challenge with universal basic income is how to pay for it. Society already provides it in various guises: the age pension, disability assistance, unemployment benefits, etc. Not enough to be comfortable, but just enough to not starve. Will the givers really accept supporting all the takers? Sweden was probably the closest I’ve seen to a society that once embraced that ideal, but in the last couple of decades that social contract seems to be fraying as the volume of takers increased and social disharmony accelerated. Of course there are other factors at work there, far right politics, poorly managed immigration and integration, and so on.

One thing COVID showed (in London at least) was that problems like homelessness were solvable, society just chooses not to solve them. It is a prioritisation question, scarce resources meaning somebody always misses out. If the sci-fi imagineers have it right, eventually scarcity gets solved, at which point the biggest issue becomes boredom. The motivated jump on space ships and set out to explore the stars. The rest find some other reason to fight amongst themselves, it is human nature.

  9. UNSOLICITED MENTEE 27 January 2025

Thanks for another great post, please keep them coming. The times I checked for a new article, I almost resigned myself to asking the AI overlords how to get notified when a new article pops up – an activity I last performed some 20 years ago when blogs were all the hype.

I agree with most commenters though that, even though professions will be ready to be replaced by AI-powered blokes with productivity ratios of 20:1, it will most probably take decades to do so. Most of us, I assume, already have numbers of completely useless colleagues who have been chilling on the org payroll for years.

    On the grim side I see you predict the professions with human interaction to prevail. Dealing with people. Why is it always the hardest jobs that never go away? 😀

Lastly, the world will be a nicer place if you are just 4.9x more productive than your peers and spend the remaining 0.1x on writing here!

    • {in·deed·a·bly} 27 January 2025 — Post author

      Thanks Unsolicited Mentee.

Funnily enough, I was at a conference today where (amongst other things) these very themes were the main topics of discussion.

      The number one AI risk cited was the “grey wave” of retirements rolling through the old guard of subject matter experts. Turns out they are the only folks with the requisite knowledge to spot when the AI toys tell fibs.

This was partially coupled with the presenteeism/”return to work” mandates. The C-suite attendees argued (amongst other things) that back in the day juniors got trained on the job by sitting alongside, and being shouted at by, seniors. Since COVID that hasn’t happened, so the junior-to-mid-level minions don’t possess the word-of-mouth apprenticeship’s worth of institutional wisdom from the grey beards. Which exacerbates the AI knowledge gap problem.

Interestingly, the feeling was that this was a short term problem, as in ~10 years’ time it would be the AI models playing the role of the greybeard. One part happily answering silly questions from the kids, the other part looking over their shoulder and (based on statistics and observed patterns) telling them when they were doing it wrong. Before eventually the firms could do away with a large portion of their “biological interfaces” altogether.

I think they are directionally correct, though like several of the commenters here have suggested, the variable is timing. The biggest barrier to adoption is people and working culture. Change management is one (hard) way to solve that. The easier path is getting rid of the people. Given the breathtaking pace at which the technology capabilities are advancing, I suspect many of us working in the bullshit jobs at the dinosaur firms may find ourselves disrupted out of a job (Amazon style “your margin is my opportunity”) long before the robots take over our roles.

  10. Fire And Wide 28 January 2025

    Howdy Internet Stranger 🙂

    Yep, some of us are still out and about enjoying life, especially when I get to read an Indeedably post!

    Personally, I largely love AI – it’s just fascinating how far it’s come already and I know I’m only seeing the edges of the ‘real’ stuff available. What I find especially interesting is that it’s better at some things I wouldn’t expect and terrible at others I think it should handle with ease.

The really interesting part is it’s bizarrely great at teaching me how much I ‘assume’ in interactions. When working with AI, some of the constraints/guidelines it needs to get good results are often things you assume are just ‘obvious’. An interesting realisation of how much we assume at times. And as you and others have said, having the experience to know when it’s returning rubbish… which is where your grey-haired army of SMEs comes in… until they don’t!

    Would I be concerned if still working? Maybe – I know a lot of my friends are far more split 50/50 on the good/bad debate. Those working human-based or physical jobs, far less than the office workers. The older ones more against, the younger ones more pro.

    I think like most things, it will happen faster than some folk will want and slower than others hope it will. It is for sure, one of the few changes which feels like in ten years time we’ll all be thinking it’s normal, like the internet and phones. And it’ll be different to anything we predict now!

    Thanks for writing – as a very slothful blogger I appreciate the effort!

Cheers, Michelle aka Fire&Wide

    • {in·deed·a·bly} 29 January 2025 — Post author

      Thanks Michelle.

It is fascinating how quickly we took to some aspects of AI. The YouTube style “just-in-time” knowledge transfer, only faster. “Explain this to me”. “Summarise that”. “Read this so I don’t have to”. “Given x, y, and z constraints, solve this problem for me”. “Here is some code not doing what I want, debug it for me”.

      Convenience enablers all.

The challenge comes when we can no longer write or understand the buggy code ourselves (see last week’s ChatGPT outage impacting the development community), when the shift from a tool to a dependency occurs. Which it will. Not necessarily bad, after all none of us understand how our cars or wifi work. The complexity just gets abstracted away, and a small pool of niche specialists understand it while the rest of us become blissfully ignorant. Also a risk, see the COBOL programmer problem.

  11. weenie 28 January 2025

    Welcome back, indeedably!

Your comment on Monevator alerted me to your online return!

    An excellent read, your posts have been missed.

    A lot of folk at work use AI (crafting emails, doing presentations etc) but it’s not something I use. I’m not sure it would make my job any easier. Personally, I’ve just used it for travel itineraries, recipes and suchlike haha!

As for what industry the young ’uns should get into? Can AI evolve to do plumbing, fix a blocked toilet or a leaky tap? Jobs which need fingers and thumbs, not just brainpower!

    • {in·deed·a·bly} 29 January 2025 — Post author

      Thanks weenie.

      Trades are good for some, you’re certainly correct the toilets will always need unblocking. Much like office work spending days driving a computer, not everyone is cut out for a career of heavy lifting outside in the rain. Carpet laying or funeral directing is where the margins are, so I’m told.

  12. Rhino 28 January 2025

I’ve repositioned myself into the arena of AI assurance. Hopefully a sector that will grow in importance rather than diminish as AI becomes more prevalent.

    • {in·deed·a·bly} 29 January 2025 — Post author

      Thanks Rhino, that is a great call.

      Regulator: “why did you do that?”
      Regulated: “because the computer told me to”
      Regulator: “WTF?”
      Regulated: “🤷…”

      Given the volume of unfortunate stories about lemmings driving off bridges or cliffs because Google Maps told them it was the shortest route, I think you’re onto a gold mine there.

  13. John Smith 3 February 2025

Hi indeedably! I’m alive too. I think A.I. will stagnate, because it has exhausted the virgin sources of information. Now it only processes regurgitated info. And if AI does take off, then university education for the masses is no longer justified (in time or money).

Interesting that after 65+ years you are sent into retirement, but the same does not apply to morally worn out political leaders with imperial pretensions (USA/Russia/China).

    Humanity has been adept at war since it was born. Leaders will keep the keys to AI – just as they now insist on crappy IP (intellectual property) – to continue the old slavery (money printing) in another form.

The real danger may come when humanity is no longer necessary (old, sick, uneducated) and the AI leaders want to minimise the unprofitable ballast (unemployment, sick leave, training). For this, various methods will be applied, directly (war) or treacherously (pandemic).

What say you?

© 2025 { in·deed·a·bly }

Privacy policy

Subscribe