When I was a kid, I used to love playing computer games.
Commodore 64 plugged into an ancient colour television located in the living room. Competing for screen time with family members who wanted to watch something on one of the two available television channels. Missing out whenever there was news, cricket, or any form of football being broadcast.
Games loaded from cartridges. Cassette tapes. Or manually typed in from source code listings found published within Zzap 64 magazines, imported by local newsagents months after their release date.
Before long, my interests shifted from playing games to puzzling out how they worked. How did they keep score? Interpret and apply game rules? Detect collisions between characters on screen?
I taught myself how to interrupt games and peek behind the curtain at their code. Granting extra lives. Correcting coding bugs. Creating and loading new maps or levels into games I particularly enjoyed.
In those days long before the internet, this involved a little bit of research and a lot of trial and error. In the absence of smarter mentors or more experienced peers, I spent an inordinate amount of time reinventing the wheel. My solutions were rarely the fastest or most efficient, but they worked. Mostly!
The experience was both a blessing and a curse.
It taught me applied problem solving. Research skills. Patience. Determination. All in a way that my teachers or parents could never have done. These were problems I wanted to solve. Challenges I set myself. Nobody cared if I succeeded or failed, nor how long it might take. Nobody was watching.
The downside was that by the time my high school, and later university, began offering computing courses, I knew enough to recognise the subject was being taught badly. Simultaneously killing the joy of learning, while failing to equip students with the basic competencies required to be useful to a prospective employer.
After a false start in an accounting career, I eventually fell back into computing.
In theory, getting paid to do something you love is a recipe for winning at life.
In reality, flipping the switch from choosing to do a thing to having to do that thing eliminates the joy. Turning it into a job. With all the externally imposed deadlines and self-imposed stress that any job brings.
Creativity replaced by industry best practices. Accepted design patterns. Right-sized solutions, based on value. A graduate with an Excel spreadsheet is the right answer to far more technical problems than we care to admit!
Choosing my own adventure gave way to solving someone else’s problems.
Implementing someone else’s designs.
Using someone else’s choice of tooling.
Following someone else’s preferred approach.
No longer an artist. Free to make it up as I went along. Driven by little more than impulse and interest.
Now a worker. A small commodity cog in a large machine. Recognising that “selling out” is really just growing up. Trading ideals for income, needed to pay bills and meet responsibilities. Starving artists are noble in concept, but cold and hungry in practice.
Professionally, I stepped away from the tools before long.
The constant need to re-skill to remain relevant was draining. Technology falls out of fashion at least as quickly as the sartorial splendour showcased on designer catwalks and in department store windows.
The existential threat of outsourcing, offshoring, and one day perhaps self-programming robots.
The realisation that code was a lingua franca. Fluently spoken by smart programmers the world over, many of whom could afford to work for significantly less than I could.
I traded coding problems for business problems. Swapping algorithms for operating models. Modules and functions for controls and monitoring.
More lucrative. Less satisfying.
“Buy” replacing “build” as my default choice for resolving problems. There was no more value in reinventing answers to solved problems today than there had been when I was a kid learning to program.
Over the years, programmers have periodically rebranded themselves with ever more self-aggrandising job titles.
Developer.
Engineer.
Scientist.
Each successive title is an attempt to trade on the credibility of more established professions. Credibility earned by those who had plied their trade since long before computers needed programming.
Concurrently, the act of programming has become significantly easier with each successive generation. Languages and frameworks provide building blocks, guard rails, syntactic sugar, and template-driven solutions. Assisting the programmer to achieve their aims faster than ever before.
Programming has evolved in a similar manner to driving a car, where today the average person on the road has little idea how the vehicle they operate actually functions. Upsetting the old guard. Lowering the barrier to entry for the new.
Throughout that time I have occasionally dabbled in development. Not professionally to any great extent, but rather returning to my roots by indulging the occasional passion project.
For example, a friend had long used a commercial stock market analysis program. He had paid for the license, which included automated downloads of prices, volumes, and financial ratios for all manner of stocks, options, and futures.
Years later, the vendor decided to jump on the subscription-based pricing model bandwagon. Low cost. Long tail. “Rent” rather than “buy”. A fashionable business model that smooths out the lumpy cashflow profile of old-school “one and done” software sales.
There was just one problem. To encourage the take-up of the new model, the vendor sunsetted the download functionality in their legacy product. The old software remained perfectly capable of analysing investments, but it no longer had a means of refreshing those analyses with current data.
As a favour, I pulled his application to pieces. Attempting to work out how the program’s data download functionality had once worked. Then I wrote a simple program to produce an identically shaped output, harvested from free sources on the internet. With little fanfare, normal service was restored to the software he had long ago bought and paid for.
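For the curious, the replacement program amounted to something along these lines. A minimal sketch only, assuming free end-of-day data via the yfinance Python library and a made-up flat-file layout; the real legacy format, symbols, and sources were different.

```python
# A rough sketch only. The legacy layout and symbols here are hypothetical;
# yfinance is just one of several free data sources that could be used.
import yfinance as yf

TICKERS = ["BHP.AX", "CBA.AX", "WES.AX"]  # illustrative symbols only


def build_legacy_file(tickers, outfile="prices.csv"):
    """Harvest free daily prices and reshape them into a flat CSV."""
    rows = []
    for ticker in tickers:
        history = yf.Ticker(ticker).history(period="1mo")  # recent daily bars
        for date, bar in history.iterrows():
            # Hypothetical target layout: symbol, date, open, high, low, close, volume
            rows.append(",".join([
                ticker,
                date.strftime("%Y%m%d"),
                f"{bar['Open']:.2f}",
                f"{bar['High']:.2f}",
                f"{bar['Low']:.2f}",
                f"{bar['Close']:.2f}",
                str(int(bar['Volume'])),
            ]))
    with open(outfile, "w") as fh:
        fh.write("\n".join(rows) + "\n")


if __name__ == "__main__":
    build_legacy_file(TICKERS)
```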
Lately, I’ve observed an inordinate number of shysters, spruikers, and self-promoters promising to unlock the secrets of a host of AI-related natural language models. Self-described “prompt whisperers”, who claim to possess the ability to ask the AI gods for miracles, and actually be heard.
They are running a similar business model to the SEO-charlatans and early retirement gurus who once plagued the personal finance blogosphere, back in the days when there was still a viable audience for long-form blog posts instead of click-bait Twitter threads and breathless shouty 20-second TikToks.
Which is the same business model priests and shamans have run since the beginning of time.
This week I decided to devote some time to learning how to drive some of those large language models. To apply them to the production of workable technical solutions to tangible problems. To see whether there was any substance to what the influence purveyors promised.
Could it be made to work? Was it viable? Efficient? Worthwhile?
Previously I had played around with ChatGPT to produce generic written content, articulated in a handful of different forms and voices. Bland. Repetitive. Sometimes entertaining. The sort of thing content farms and freelance writers have mindlessly churned out since the advent of Fiverr and UpWork.
Today I set out to see if I could use an AI to produce a non-trivial working computer program.
I had read somewhere that a good place to start is asking the AI to generate a prompt to ask an AI to do something for you. This sounds very meta, but it would make sense if there really were “secrets” to instructing an AI.
The results were ok, but not exactly what I wanted.
A bit of trial and error revealed a conversational approach got the job done far more effectively than attempting to generate the prompt all in one go. A technique badged “prompt chaining”. Clarifying. Correcting. Refining. Tweaking. Not dissimilar to trying to teach long division to a seven-year-old!
Once I was happy that the articulation of the AI-generated prompt matched the outcome I sought, I copied it and fed it back into the AI.
My technical project? The recreation of that financial markets file for my friend’s analysis software. A problem I already had a working answer to. One where I well understood the steps involved in solving it.
The first couple of outputs were underwhelming. The AI struggled with producing complicated things.
So I stepped back. Decomposed the big problem into smaller bite-sized chunks. Repeated the iterative prompt generation exercise. Then fed the prompt for the first component into the AI.
The result was more than adequate. Fit for purpose. Functional. Good enough.
Written in the right programming language.
Using the right libraries.
Following best practice conventions and guidelines.
Variable and function names matching their purpose.
Algorithm design that was simple, yet elegant.
Code comments clearly explaining what each moving piece was supposed to do.
A cursory read revealed something that looked like it would get the job done.
A detailed review left me moderately impressed. The code was better than I would expect from a junior developer or programmer body-shopped from any of the Big4 consultancies.
After reviewing the code a third time, I saved it to a file and copied that onto a temporary ring-fenced development environment. Quarantining code of uncertain provenance. Minimising the blast radius should something go wrong.
Then I gave it a run.
It worked!
Accepting the inputs I supplied. Performing the functions I requested. Producing the output I required.
I sat back and looked through the code once more. What would I change? Where could I improve it? What value could my 20+ years’ worth of knowledge and experience contribute to improve the result?
After a few minutes, I was forced to accept an inconvenient truth. The tweaks I would make were of the style over substance variety. Much activity, but adding little value.
I compared the AI-generated program to the version I had hand-coded years before.
The AI program was 17 lines of code shorter than mine.
The time taken and computing resources consumed during execution were virtually identical.
It was humbling to admit that the AI’s capabilities had equalled, if not exceeded, my own.
Next, I tried something new. A piece of functionality I had long thought about, but not gotten around to implementing in the personal finance application I have been tinkering with for several years.
Following the same general pattern:
- Conversationally generating a prompt, refining the articulation of the requirement
- Executing the prompt to receive some source code
- Tweaking and refining the prompt some more, to achieve an outcome closer to what I wanted
- Reviewing the generated code for traps and pitfalls
- Copying the output and taking it for a test drive
Same pattern. Same result. A functional algorithm that answered the exam question.
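For anyone wanting to try the loop themselves, a rough sketch is below. It assumes the openai Python library as it existed at the time; the model name and prompt text are placeholders, not what I actually used.

```python
# A minimal sketch of the prompt-chaining loop, assuming the openai Python
# library as it existed at the time. Model name and prompts are placeholders.
import openai

openai.api_key = "sk-..."  # your own key goes here


def ask(messages):
    """Send the running conversation to the model and return its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    return response["choices"][0]["message"]["content"]


# Step one: conversationally generate and refine a prompt for the real task.
conversation = [{"role": "user", "content":
                 "Draft a prompt asking an AI to write a program that ..."}]
draft_prompt = ask(conversation)

# Clarify. Correct. Refine. Tweak. Repeat for as many rounds as it takes.
conversation += [
    {"role": "assistant", "content": draft_prompt},
    {"role": "user", "content": "Closer. Constrain it to Python, and ..."},
]
refined_prompt = ask(conversation)

# Step two: feed the refined prompt back in to receive candidate source code.
# Review what comes back before letting it anywhere near a real environment.
candidate_code = ask([{"role": "user", "content": refined_prompt}])
print(candidate_code)
```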
Total time taken? Twenty minutes. A result it would have taken me two or three hours to accomplish.
Finally, I decided to try a tricky problem. A feature I had been keen to implement, but despite multiple attempts over the years, had been unable to figure out an approach I was happy with.
Again, I followed the same model.
Generate the prompts. Run them and refine some more. Then play with the generated code.
In less than an hour, I had tried out four separate approaches to the problem.
Discarded two as dead ends.
Played with and refined a third, until I had something that sort of worked. Inelegant, like a tractor competing in a Formula 1 race.
Then I tried something different.
Asking the AI for a solution, given the exam question and the approaches I had already ruled out.
It generated something I hadn’t expected. Using a design pattern I was familiar with, but hadn’t thought to apply to the scenario.
To me, it seemed a little out of left field. Lateral thinking or fresh eyes. But it worked.
Total time taken to develop a functional solution? Working on my own, I had unsuccessfully invested weeks. It took the AI less than two hours, and most of that was consumed by me refining the prompts.
Total number of code lines I had to write to reach the solution? Zero.
I was happy with the outcome. More than happy. Already I could see a path to completing the project within a few short weeks, were I to prioritise sufficient time and attention upon it. Without needing to hire developers.
Yet I couldn’t help feeling disquieted by the implications. The days of the junior code monkey are over. The creative aspects of development will shift towards correctly understanding and articulating the requirements. Refining designs. Featuring very little hacking of code.
Which, when I thought about it, was where the value-add had always truly been. Finding and solving the right problems. Doing the thinking, rather than doing the doing.
No longer mindlessly reinventing the wheel.
Representing a big change in approach. But on reflection, a positive one, in a similar manner to the way YouTube videos have improved DIY, or Atom Learning is helping solve primary school homework.
The exercise had been an education. The lessons learned were enlightening, opening my eyes and immediately changing the way I approached something I had been successfully doing for most of my life.
It is time to re-skill. Being able to apply this technology will soon be a core competency. An important life skill, like being able to swim, research using the internet, or drive a car.
Fire And Wide 8 May 2023
Ha, it sounds like we were very similar kids – I too enjoyed many hours hacking into games and figuring out the back-door way to get past a particularly tricky enemy!
I’ve been playing with AI for a few weeks now and the more I do, the more fascinating it becomes. It’s so easy to see how useful it is for actual real world problems. I can see why people are calling for a time-out to consider how to use it but I think that genie is already well out of the proverbial bottle personally.
Last week I built an app to help people brew better beer – and I’ve never made an app in my life. Was it exactly what I wanted? No – but was it ‘good enough’ – absolutely. And all for free. I can only imagine what you can do with the paid versions.
I didn’t end up going down the coding path but have spent my fair share of time on the other side of the fence, often playing that irritating Excel geek from “the business” (ruining IT’s business case for a shiny new project!). You are spot on that the value has always been in being able to correctly identify and describe the problem to be fixed. No fluffy “what does success look like?” statements, but a real, in-depth understanding.
I’m only playing for fun, but I almost miss having work as an excuse to dive deeper with a purpose. It is absolutely going to be an essential skill to know how to use these new tools. And there seems a small fortune to be made for some in developing easier ways for others to interact. And it’s easy to imagine just how much trusted data sets are going to be worth?!?
I can understand why people are worried. Change is always a bumpy ride.
{in·deed·a·bly} 8 May 2023 — Post author
Thanks Michelle.
Beer brewing AI robots, now you’re talking!
I met a guy a year or so ago who made champagne for a living. He was talking about the science behind it, lots of chemistry (and sugar). One of the things he was keen to explore was using technology to improve the consistency of output, wishing to reduce or eliminate the “the 2023 vintage wasn’t a particularly good year” problem.
AI models were one of the areas he was looking into, seeking to identify the optimal combination of soil composition, elevation, temperature, humidity, light levels, and so on. Climate change was cruelling long established growing regions, while enabling new locations to start producing sparkling white wine. He was keen to understand why those locations that worked, worked so well, with a view to replicating the optimal in climate controlled greenhouses.
It will be fascinating to see where the journey takes us. One step closer to the futurist imaginings of Gene Roddenberry and Peter F Hamilton: “Computer, make me a fortune… and a sandwich!”
Ben Swain 8 May 2023
Been thinking a little about this topic of AI replacing programmers recently. I’m an analyst at a bank and much of my job is gleaning insight from data, so I can see that AI can help me do that (and probably at some point put me out of a job!). At the moment it is hard to see how (at my place of work) an AI would be able to perform complex data analysis, joining lots of tables etc unless the data is incredibly well organised/documented etc etc. Even I have to do lots of investigation to work out what data to use, what it means etc. From what I’ve seen of the current AI models, it won’t perform better than me at doing that currently. It will be great at helping conceptually though with all the stuff you discuss in this article.
But it will get there, especially if we focus on making the data clean, easy to interpret, properly labelled and defined. I can see one day being able to prompt it with ‘tell me who are the most likely 5% of customers to take up this new product’, and it doing a better job than I could in a fraction of the time.
I worry a bit there will be a disconnect between the data and the end user/use case. And that might mean the ‘wrong’ output. Fine if you market to the wrong customers, but maybe less fine if you prescribe the wrong drugs to a patient.
Things are developing quickly. 12 months ago I’d have been advising my young kids to become data scientists, programmers, software engineers. I don’t think I would today. And in 12 months time I might well be actively discouraging them.
{in·deed·a·bly} 8 May 2023 — Post author
Thanks Ben Swain. I’ve done some thinking on this also.
I spend more than a little time doing similar things. Scrounging the least wrong datasets. Munging them together. Then listening to the tale they have to tell.
How does AI change that? Based on what I’ve seen so far, I would feed it metadata to understand what it will be working from. Schema definitions, business glossaries, data dictionaries and catalogues, etc. Basically teaching it what data can be found where.
The next step would be teaching it how those disparate sets can be safely joined together. What are the natural keys? What filters, mapping, transformations need to be applied?
Then it would be a case of asking it to apply the above to compile the dataset you wish to analyse. Which is the same as what analysts have been instructing report writers and developers to do for 30+ years.
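To make the idea concrete, a hypothetical example of the sort of metadata-grounded request I have in mind. The table names, keys, and business question are invented for illustration.

```python
# A hypothetical sketch: ground the model in metadata and join rules, then
# ask it to compile the dataset. Tables, keys, and question are invented.
schema_notes = """
customers(customer_id PK, segment, joined_date)
accounts(account_id PK, customer_id FK -> customers.customer_id, product_code, balance)
Safe join: customers.customer_id = accounts.customer_id
"""

question = "Which 5% of customers are most likely to take up the new product?"

prompt = (
    "Using only the tables, keys, and join rules described below, write the "
    "SQL needed to assemble a dataset for answering the question.\n\n"
    f"Metadata:\n{schema_notes}\n"
    f"Question: {question}"
)

print(prompt)  # feed this to whichever model you are working with
```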
I’d be willing to bet that 70-80% of your time is spent on these low value repetitive tasks. The basics required to reach the point where you can actually perform your analysis. Those low value repetitive tasks go away.
Now you are free to spend 100% of your time performing your actual analysis function, the thing they pay you the big bucks for. But you also no longer need your four fellow analysts, as their time has been “freed up to pursue more value-adding opportunities elsewhere”.
Of course commercial confidentiality, intellectual property, and privacy concepts may slow this down some, as the firm controlling the AI harvests all of the inputs to refine their models and learn.
Where it gets scary is to honestly consider the analysis you are performing. Is it unique and innovative every single time? Or is it asking the same types of questions of a new dataset over and over again? Discounted cash flows. Performance against benchmark. Financial reporting. Preparing board packs. Measuring KPIs.
If the bulk of it is repetitive and rule based, which is true of most of our jobs if we’re honest, then the AI just applies those same paint-by-numbers rules we all follow when turning the handle on our own roles. That applies equally to financial analysis as it does law and medicine, not the showboating John Grisham style courtroom part or the House MD doctoring, but the boring procedural stuff that consumes the vast majority of each profession’s time.
Leaving those who remain with more efficient tools and greatly improved productivity.
Gnòtul 9 May 2023
Amen!
Ben Swain 21 May 2023
It’s a little from column A and a little from column B, I suppose. I think I’m in the same thought place as you – ultimately we’ll need to teach the AI what the data is (just as I’d teach a junior analyst today). I’d love my time to be freed up to think more about what insight to provide, how to provide and present it, and what it means for the future.
The big step for me will be when banks and other heavily regulated industries start to give AI access to data. It feels like a lot of red tape will be in the way of that happening (in industries with a lot of red tape already), so I can but hope I get plenty closer to retirement before being redundant!
Thanks for your thoughts, crystal clear as always.
{in·deed·a·bly} 22 May 2023 — Post author
They already do. Data quality engines. Business rules engines. Integration platforms. Fraud detection platforms. Enterprise search tools. Even some of the newer generation data analytic/business intelligence tools. All claim to feature “AI” in some form. You can’t find a product at a financial services trade show this year that doesn’t.
They might lack the natural language parsing interface, but that will be a fast follower.
The red tape might slow AI models being run over shared datasets perhaps, but corporate confidentiality/intellectual property fears will play a part here too. As soon as real money is being saved or made, both sets of concerns will fall away in the same way privacy did.
The jobs change, they don’t go away entirely. Just learn to drive the new ways of doing things. Your analytical mindset and natural curiosity will still allow you to earn a living. The more immediate threat is the globe full of equally curious, equally analytical folks who can afford to perform the same role from a lower cost of living locale.