When I was a kid, I used to love playing computer games.
A Commodore 64 plugged into an ancient colour television in the living room. Competing for screen time with family members who wanted to watch something on one of the two available television channels. Missing out whenever there was news, cricket, or any form of football being broadcast.
Games loaded from cartridges. Cassette tapes. Or manually typed in from source code listings published in Zzap!64 magazines, imported by local newsagents months after their release date.
Before long, my interests shifted from playing games to puzzling out how they worked. How did they keep score? Interpret and apply game rules? Detect collisions between characters on screen?
I taught myself how to interrupt games and peek behind the curtain at their code. Granting extra lives. Correcting coding bugs. Creating and loading new maps or levels into games I particularly enjoyed.
In those days long before the internet, this involved a little bit of research and a lot of trial and error. In the absence of smarter mentors or more experienced peers, I spent an inordinate amount of time reinventing the wheel. My solutions were rarely the fastest or most efficient, but they worked. Mostly!
The experience was both a blessing and a curse.
It taught me applied problem solving. Research skills. Patience. Determination. All in a way that my teachers or parents could never have done. These were problems I wanted to solve. Challenges I set myself. Nobody cared if I succeeded or failed, nor how long it may take. Nobody was watching.
The downside was that by the time my high school, and later university, began offering computing courses, I knew enough to recognise it was being taught wrong. Simultaneously killing the joy of learning, while failing to equip students with the basic competencies required to be useful to a prospective employer.
After a false start in an accounting career, I eventually fell back into computing.
In theory, getting paid to do something you love is a recipe for winning at life.
In reality, flipping the switch from choosing to do a thing to having to do that thing eliminates the joy. Turning it into a job. With all the externally imposed deadlines and self-imposed stress that any job brings.
Creativity replaced by industry best practices. Accepted design patterns. Right-sized solutions, based on value. A graduate with an Excel spreadsheet is the right answer to far more technical problems than we care to admit!
Choosing my own adventure gave way to solving someone else’s problems.
Implementing someone else’s designs.
Using someone else’s choice of tooling.
Following someone else’s preferred approach.
No longer an artist. Free to make it up as I went along. Driven by little more than impulse and interest.
Now a worker. A small commodity cog in a large machine. Recognising that “selling out” is really just growing up. Trading ideals for income, needed to pay bills and meet responsibilities. Starving artists are noble in concept, but cold and hungry in practice.
Professionally, I stepped away from the tools before long.
The constant need to re-skill to remain relevant was draining. Technology falls out of fashion at least as quickly as the sartorial splendour showcased on designer catwalks and department store windows.
The existential threat of outsourcing, offshoring, and one day perhaps self-programming robots.
The realisation that code was a lingua franca. Fluently spoken by smart programmers the world over, many of whom could afford to work for significantly less than I could.
I traded coding problems for business problems. Swapping algorithms for operating models. Modules and functions for controls and monitoring.
More lucrative. Less satisfying.
“Buy” replacing “build” as my default choice for resolving problems. There was no more value in reinventing answers to solved problems today than there had been when I was a kid learning to program.
Over the years, programmers have periodically rebranded themselves with ever more self-aggrandising job titles.
Each successive title is an attempt to trade on the credibility of more established professions. Credibility earned by those who had plied their trade since long before computers needed programming.
Concurrently, the act of programming has become significantly easier with each successive generation. Languages and frameworks provide building blocks, guard rails, syntactic sugar, and template-driven solutions. Assisting the programmer to achieve their aims faster than ever before.
Programming has evolved in a similar manner to driving a car, where today the average person on the road has little idea how the vehicle they operate actually functions. Upsetting the old guard. Lowering the barrier to entry for the new.
Throughout that time I have occasionally dabbled in development. Not professionally to any great extent, but rather returning to my roots by indulging the occasional passion project.
For example, a friend had long used a commercial stock market analysis program. He had paid for the license, which included automated downloads of prices, volumes, and financial ratios for all manner of stocks, options, and futures.
Years later, the vendor decided to jump on the subscription-based pricing model bandwagon. Low cost. Long tail. “Rent” rather than “buy”. A fashionable business model that smooths out the lumpy cashflow profile of old-school “one and done” software sales.
There was just one problem. To encourage the take-up of the new model, the vendor sunsetted the download functionality in their legacy product. The old software remained perfectly capable of analysing investments, but it no longer had a means of refreshing those analyses with current data.
As a favour, I pulled his application to pieces. Attempting to work out how the program’s data download functionality had once worked. Then I wrote a simple program to produce an identically shaped output, harvested from free sources on the internet. With little fanfare, normal service was restored to the software he had long ago bought and paid for.
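The general shape of that fix can be sketched as follows. This is a hypothetical illustration only: the field names, tickers, and the `SYMBOL,OPEN,HIGH,LOW,CLOSE,VOLUME` layout are assumptions standing in for whatever the real legacy product expected, and the harvesting step is stubbed with canned rows where a real version would make HTTP calls to free data sources.

```python
import csv
import io

def fetch_quotes(symbols):
    """Stand-in for harvesting quotes from a free web source.
    A real version would make HTTP requests; this returns canned rows."""
    sample = {
        "BHP": {"open": 44.10, "high": 44.85, "low": 43.90, "close": 44.60, "volume": 5_210_000},
        "CBA": {"open": 99.50, "high": 100.20, "low": 99.10, "close": 100.05, "volume": 2_780_000},
    }
    return {s: sample[s] for s in symbols if s in sample}

def write_legacy_file(quotes, out):
    """Emit rows in the (assumed) shape the old analysis program imports:
    SYMBOL,OPEN,HIGH,LOW,CLOSE,VOLUME."""
    writer = csv.writer(out)
    for symbol, q in sorted(quotes.items()):
        writer.writerow([symbol, f"{q['open']:.2f}", f"{q['high']:.2f}",
                         f"{q['low']:.2f}", f"{q['close']:.2f}", q["volume"]])

buffer = io.StringIO()
write_legacy_file(fetch_quotes(["BHP", "CBA"]), buffer)
print(buffer.getvalue())
```

The point is that only the output shape matters: as long as the file the new program produces is byte-compatible with what the sunsetted downloader used to emit, the old software neither knows nor cares where the data came from.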
Lately, I’ve observed an inordinate number of shysters, spruikers, and self-promoters promising to unlock the secrets of a host of AI-related natural language models. Self-described “prompt whisperers”, who claim to possess the ability to ask the AI gods for miracles, and actually be heard.
They are running a similar business model to the SEO-charlatans and early retirement gurus who once plagued the personal finance blogosphere, back in the days when there was still a viable audience for long-form blog posts instead of click-bait Twitter threads and breathless shouty 20-second TikToks.
Which is the same business model priests and shamans have run since the beginning of time.
This week I decided to devote some time to learning how to drive some of those large language models. To apply them to the production of workable technical solutions to tangible problems. To see whether there was any substance to what the influence purveyors promised.
Could it be made to work? Was it viable? Efficient? Worthwhile?
Previously I had played around with ChatGPT to produce generic written content, articulated in a handful of different forms and voices. Bland. Repetitive. Sometimes entertaining. The sort of thing content farms and freelance writers have mindlessly churned out since the advent of Fiverr and Upwork.
Today I set out to see if I could use an AI to produce a non-trivial working computer program.
I had read somewhere that a good place to start is asking the AI to generate a prompt for asking an AI to do something for you. This sounds very meta, but would make sense if there really were “secrets” to instructing an AI.
The results were ok, but not exactly what I wanted.
A bit of trial and error revealed a conversational approach got the job done far more effectively than attempting to generate the prompt all in one go. A technique badged “prompt chaining”. Clarifying. Correcting. Refining. Tweaking. Not dissimilar to trying to teach long division to a seven-year-old!
Once I was happy that the articulation of the AI-generated prompt matched the outcome I sought, I copied it and fed it back into the AI.
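That conversational loop can be sketched in code. Everything here is hypothetical: `ask_model` is a placeholder for whichever chat API is in use, stubbed with canned replies so that only the shape of the chain is being illustrated, namely that each refinement is sent along with the full conversation history, and the final articulation is what gets fed back in as the working prompt.

```python
def ask_model(history):
    """Stand-in for a real chat-completion call; returns a canned reply
    based on how many user turns have occurred so far."""
    canned = {
        0: "Draft prompt: write a program that downloads stock prices.",
        1: "Refined prompt: write a Python program that fetches daily OHLCV "
           "data for a list of tickers and writes it to a CSV file.",
    }
    user_turns = sum(1 for m in history if m["role"] == "user")
    return canned.get(user_turns - 1, "No further refinements.")

def chain(initial_request, refinements):
    """Clarify, correct, refine: feed each tweak back with the full history."""
    history = [{"role": "user", "content": initial_request}]
    reply = ask_model(history)
    for tweak in refinements:
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "user", "content": tweak})
        reply = ask_model(history)
    return reply  # the final articulation, ready to use as the actual prompt

final_prompt = chain(
    "Help me write a prompt for generating a data-download program.",
    ["Make it Python, daily OHLCV data, output to CSV."],
)
print(final_prompt)
```

The design point is simply that the model sees the whole back-and-forth each time, so each correction narrows the articulation rather than starting over.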
My technical project? The recreation of that financial markets file for my friend’s analysis software. A problem I already had a working answer to, and whose solution steps I well understood.
The first couple of outputs were underwhelming. The AI struggled with producing complicated things.
So I stepped back. Decomposed the big problem into smaller bite-sized chunks. Repeated the iterative prompt generation exercise. Then fed the prompt for the first component into the AI.
The result was more than adequate. Fit for purpose. Functional. Good enough.
Written in the right programming language.
Using the right libraries.
Following best practice conventions and guidelines.
Variable and function names matching their purpose.
Algorithm design that was simple, yet elegant.
Code comments clearly explaining what each moving piece was supposed to do.
A cursory read revealed something that looked like it would get the job done.
A detailed review left me moderately impressed. The code was better than I would expect from a junior developer or programmer body-shopped from any of the Big 4 consultancies.
After reviewing the code a third time, I saved it to a file and copied that onto a temporary ring-fenced development environment. Quarantining code of uncertain provenance. Minimising the blast radius should something go wrong.
Then I gave it a run.
Accepting the inputs I supplied. Performing the functions I requested. Producing the output I required.
I sat back and looked through the code once more. What would I change? Where could I improve it? What value could my 20+ years’ worth of knowledge and experience contribute to improve the result?
After a few minutes, I was forced to accept an inconvenient truth. The tweaks I would make were of the style-over-substance variety. Much activity, but adding little value.
I compared the AI-generated program to the version I had hand-coded years before.
The AI program was 17 lines of code shorter than mine.
The time taken and computing resources consumed during execution were virtually identical.
It was humbling to admit that the AI’s capabilities had equalled, if not exceeded, my own.
Next, I tried something new. A piece of functionality I had long thought about, but not gotten around to implementing in the personal finance application I have been tinkering with for several years.
Following the same general pattern:
- Conversationally generating a prompt, refining the articulation of the requirement
- Executing the prompt to receive some source code
- Tweaking and refining the prompt some more, to achieve an outcome closer to what I wanted
- Reviewing the generated code for traps and pitfalls
- Copying the output and taking it for a test drive
Same pattern. Same result. A functional algorithm that answered the exam question.
Total time taken? Twenty minutes. A result it would have taken me two or three hours to accomplish.
Finally, I decided to try a tricky problem. A feature I had been keen to implement, but despite multiple attempts over the years, had been unable to figure out an approach I was happy with.
Again, I followed the same model.
Generate the prompts. Run them and refine some more. Then play with the generated code.
In less than an hour, I had tried out four separate approaches to the problem.
Discarded two as dead ends.
Played with and refined a third, until I had something that sort of worked. Inelegant, like a tractor competing in a Formula 1 race.
Then I tried something different.
Asking the AI for a solution, given the exam question and the approaches I had already ruled out.
It generated something I hadn’t expected. Using a design pattern I was familiar with, but hadn’t thought to apply to the scenario.
To me, it seemed a little out of left field. Lateral thinking or fresh eyes. But it worked.
Total time taken to develop a functional solution? Working on my own, I had unsuccessfully invested weeks. It took the AI less than two hours, and most of that was consumed by me refining the prompts.
Total number of code lines I had to write to reach the solution? Zero.
I was happy with the outcome. More than happy. Already I could see a path to completing the project within a few short weeks, were I to prioritise sufficient time and attention upon it. Without needing to hire developers.
Yet I couldn’t help feeling disquieted by the implications. The days of the junior code monkey are over. The creative aspects of development will shift towards correctly understanding and articulating requirements. Refining designs. With very little actual hacking of code.
Which, when I thought about it, was where the value-add had always truly been. Finding and solving the right problems. Doing the thinking, rather than doing the doing.
No longer mindlessly reinventing the wheel.
Representing a big change in approach. But on reflection, a positive one, in a similar manner to the way YouTube videos have improved DIY or Atom Learning is helping solve primary school homework.
The exercise had been an education. The lessons learned were enlightening, opening my eyes and immediately changing the way I approached something I had been successfully doing for most of my life.
It is time to re-skill. Being able to apply this technology will soon be a core competency. An important life skill, like being able to swim, research using the internet, or drive a car.