14 April
It was the dusted frost patterns of icing sugar; the sweet cinnamon nostalgia that you wanted to breathe in. Love, if you were still able to say it, was packed into each thing they sold.
Not everything has to be an analogy, but this one won’t leave me.
About twenty years ago, we wrote what we now call hand-crafted, artisan code. Sure, some of us were miles away from the assembly layer (apart from one module in computer science), but let’s not erase the experience. We printed angled symbols like hieroglyphs, dreaming of fluency in a new language every year - HTML, CSS, SQL, XSLT, C#. We learned formats by rote to try to make the machine translate and run our commands and our intentions.
I was always impressed by my senior colleagues’ deep knowledge of the complex systems surrounding me. That kind of engineering thinking was naturally part of the work. Long, happy arguments about the best way to structure an API, about automating behaviour-driven tests, about service-oriented architecture; it was thrilling to absorb as a newer developer.
When we asked for help, we were lucky because we had physical colleagues who explained their reasoning. Because they knew. Because they’d learned the how and the why.
Exercise 1: Name five things you can see
Touch
When the machine came, some watched from the backdoor steps, chuckling as two young box carriers carted it over the cobbled lane.
A few bakers busied themselves with deft hands, kneading and tutting. Others tore into the box, losing interest in the detached handles and tubes inside. Never mind; the tinkering types had already washed their hands, ready for the time to assemble the contraption.
My development experience took two paths. First, IDE tools made me more efficient, especially once the gentle predictive power of ReSharper arrived.
Around the same time, I got emotionally attached to version 4 of the Umbraco content management system. Some of us in the community have been reminiscing about this lately. I remember the friendly little orangey screen as my colleague told me about this cool open-source CMS built on Microsoft .NET Framework. I didn’t fully grasp open source, having grown up in the world of copyright, Napster lawsuits, and terrifying piracy warnings ☠️ No wonder I’m so conflicted about AI-generated content (and music generation app Suno is still going through the mill on this copyright complaint).
As we built websites using Umbraco, my colleagues and I developed an affinity for crafting content within a CMS. Assigning an icon and colour, enabling child permissions on doc-types, moulding the back office to fit the users’ needs. All within our boundaries of sensible choices and careful, steady-paced thought.
People talked about self-driving cars a lot back then. About machines and the Singularity. I don’t remember any of my peers saying that the machines would write the code for us, though. Was it not amazing enough to consider? Or did we presume it was too far away, too hard?
Exercise 2: Name four things you can feel
Bach
Of course, he would first test it with his family recipe for Welsh cakes.
He followed the instructions to the letter:
Ensure no cross-contamination of the funnels, pour in the exact ingredient amounts, and speak the name of the recipe
“Pice bach”
The owner’s baritone voice wavered. Would it understand Welsh? But the machine had already started whirring. Flour, butter, sugar, raisins, spice, and eggs slid down the five chrome funnels as the machine hummed and gauges spiked. After a minute, or no more than two, the beige conveyor belt started rolling, and out plopped four shallow cakes, each landing on a small, square white plate. No one spoke.
The owner picked one up, angled it at the sunlit windows, and took a bite. He slowly shook his head.
“Hiraeth.”
Datasets aren’t lossless. My first machine-learning exercise was on the Titanic dataset: predict who survived based on the right data points: room location, number of children. Frozen terror calmly processed through a training model.
Even so, the mechanics drew me in. Coding has always enticed me because it feels like a logic puzzle and an open-world computer game at once. The number of solutions you can create to fix a problem is exponential; it’s an infinite builder’s sandbox.
As early as the 1950s, psychology showed this irregular pattern to be the most gripping of all. You may have heard of Ferster and Skinner’s "Schedules of Reinforcement" studies: rats pressing levers were rewarded with food in randomised sequences, and they persisted far longer when they didn’t always get a treat. A response that only sometimes pays off is the textbook case of intermittent reinforcement [Skinner, 1953]. That unpredictability is what makes systems most addictive. It’s the same pattern behind slot machines, doomscrolling, and AI coding tools.
You eventually must stop, though, right? Normally you’d run out of ideas or exhaust the go-to patterns you’d implemented before, maybe researching a little extra or asking a domain expert. The difference now is that you don’t have to stop. You can keep on asking, tweaking this infinitely patient, obedient generator.
Stochastic decoding + under-specified prompts = non-determinism
AI outputs change from run to run because decoding is partly random, and because our prompts don’t fully pin down what we want. That’s why two runs rarely produce identical results.
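A toy sketch of the first half of that equation. LLMs don’t pick the single best next token; they sample from a probability distribution shaped by a temperature setting, so different random seeds take different first steps and the outputs diverge from there. The token names and scores below are invented for illustration:

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float, rng: random.Random) -> str:
    """Sample one token from model scores, softmaxed at the given temperature."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    total = sum(weights.values())
    tokens = list(weights)
    probs = [weights[t] / total for t in tokens]
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-step scores for a prompt like "Refactor this method..."
logits = {"extract_helper": 2.0, "inline_it": 1.6, "rewrite_whole_file": 1.2}

# Eight runs with eight seeds: the "first decision" can differ each time,
# and everything downstream of that decision differs with it.
runs = [sample_token(logits, temperature=0.9, rng=random.Random(seed)) for seed in range(8)]
print(runs)
```

Near temperature zero the softmax collapses onto the highest-scoring option and the sampling becomes effectively deterministic; at everyday temperatures, the spread is the point.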
Brilliance sometimes, nonsense often, and just enough of each to keep us hooked: sometimes gold, sometimes an embarrassment of fool’s gold.
Exercise 3: Name three things you can hear
Crumbs
Cornish saffron cakes. Light pink angelic meringues. Chocolate brownies, so dense and famed that the queues went from the high street to the fish market and back again.
The bakers stood with eyes gleaming wide as they pressed their faces to the glittering red metal, whispering to the machines. When it was their turn, they shovelled all they could scoop into the chutes, experimenting in awe with pastries they’d never even dreamed of.
Only two bakers stayed on the remaining counter, hand-crafting sugar birds and roses for the wedding cakes and painstakingly rolling out filo pastry. The others hadn’t picked up a rolling pin for weeks.
When the bakery eventually ran out of ingredients, the machine refused to produce more than five plain white loaves, then stood still for the day until the delivery vans arrived at dawn.
Customers were returning, but not always for the reasons they had in the past. When the mother of an allergic child found a pine kernel in their plain sponge, the machine builders descended with their clipboards. “If you upgrade the machine, our newer models can taste-test, too. They work in tandem, tripling your output!”
If you take a moment, did you ever really imagine ten years ago that we’d be able to code and deploy entirely from a phone by speaking to it in natural language? (If you’re a futurist, then you can’t play, but well done!) But did you ever think you’d be making these kinds of choices, hands off?
Now, I haven’t seen your code, nor do I know how you’re using AI within your tenancy, but you wouldn’t be the first or the last if it were, well, riskier than you’d historically permitted.
In our AI era, we can’t afford to forget traditional security standards. OWASP is already tracking emerging risks in its Top 10 for Large Language Models [OWASP, 2023], including prompt injection, model theft, insecure plugin design, and overreliance. These risks don’t just threaten security; they blur responsibility and strain trust.
As Pedro Tavares points out in his article “Writing Code Was Never the Bottleneck”: not always, not even usually. Yes, LLMs might speed up code output. Yet it’s the supporting structures that slow delivery: testing, understanding, communication rituals… you know how it goes.
As throughput increases, how can peer reviewers keep up with the sheer volume? Or the QA teams? It’s one thing to say, “automate it,” but even for the most sophisticated builds, is it possible to remove the human pause button during the seductive frenzy of hyperproduction? And every run’s output varies. It’s not cookie-cutter; that’s just how LLMs work. So how can we successfully peer review when they create such different solutions each time? Where’s the consistency during a day or an hour’s work?
Additionally, it’s challenging enough to read someone else’s codebase at the best of times. Giant AI-generated files simply won’t work for pull requests and peer review.
This is where clear prompts, not feverish whispers, are vital. Write high-quality, peer-approved recipes; don’t just try to summon a magic unicorn (even an Umbraco one). We’re not creating avatars here; we’re building high-quality, maintainable and testable software in a professional remit (not including the bizarre experiments I’m vibe coding that no one asked for or ever wants to see again).
Consider developing human-in-the-loop guidelines for the use of AI in coding, such as requiring AI-assisted pull requests to include the prompts used along with the model name. Perhaps a daily round-the-group check of “How did AI do today?” after scrum. Follow the usual developer standards: keep it short, keep it simple, and keep it inspectable.
A set of operational guardrails can help teams to understand when to use AI versus when a hands-on approach is appropriate:
- Good fit: scaffolding doc-types, writing repository boilerplate, generating back-office UI copy, converting small LINQ queries to SQL for analysis
- Use with caution: cross-cutting concerns in composition, cache invalidation rules, migrations that touch more than one environment
- Never outsource: security-critical code, data retention logic, and anything that changes “editor mental models” in the back office (how editors expect publishing to work)
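One way a team might make that list operational is a tiny triage table that tooling (or a PR checklist) can consult. The task tags and tier names below are hypothetical, not a standard; the point is that the policy lives somewhere inspectable rather than in each developer’s head:

```python
# Toy guardrail triage: map tagged tasks to an AI-usage tier.
# Tags and tiers are illustrative examples, not an established scheme.
POLICY = {
    "doc-type-scaffolding":   "good-fit",
    "repository-boilerplate": "good-fit",
    "backoffice-ui-copy":     "good-fit",
    "cache-invalidation":     "caution",   # needs a senior reviewer
    "multi-env-migration":    "caution",
    "security-critical":      "never",     # always hand-written
    "data-retention":         "never",
}

def ai_allowed(task_tag: str) -> str:
    """Return the AI-usage tier for a tagged task; unknown tags default to caution."""
    return POLICY.get(task_tag, "caution")

print(ai_allowed("doc-type-scaffolding"))  # good-fit
print(ai_allowed("security-critical"))     # never
print(ai_allowed("something-new"))         # caution (the safe default)
```

Defaulting unknown work to “caution” rather than “good fit” keeps the human pause button pressed until someone has explicitly decided otherwise.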
A study on automation bias concluded that even experienced pilots tend to trust the autopilot more than they should [Mosier & Skitka, 1998]. Mid-air, someone still needs to know how to fly the thing.
And why not use the AI to check itself? Pre-commit scanners can treat model output as tainted data; when I’m using OpenAI Codex, for example, it often makes strong suggestions for improving its own code. A code smell, caught by a tool that can’t smell.
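A minimal sketch of the “treat it as tainted” idea: scan only the added lines of a diff for patterns that deserve a human pause before commit. The pattern list here is illustrative and deliberately crude; a real pipeline would lean on a proper secrets detector or SAST tool rather than three regexes:

```python
import re

# Toy pre-commit check over a unified diff: flag risky patterns in added lines.
# The patterns are illustrative examples, not a substitute for a real scanner.
RISK_PATTERNS = {
    "hard-coded secret": re.compile(r"(api[_-]?key|password)\s*=\s*['\"][^'\"]+['\"]", re.I),
    "dynamic eval":      re.compile(r"\beval\s*\("),
    "raw SQL concat":    re.compile(r"(SELECT|INSERT|UPDATE|DELETE)\b.*\+\s*\w+", re.I),
}

def scan_patch(patch: str) -> list[str]:
    """Return the names of risk patterns found in the added lines of a diff."""
    added = [line[1:] for line in patch.splitlines()
             if line.startswith("+") and not line.startswith("+++")]
    return sorted({name for name, rx in RISK_PATTERNS.items()
                   for line in added if rx.search(line)})

patch = '+api_key = "sk-123"\n+result = eval(user_input)\n-old = 1\n'
print(scan_patch(patch))  # ['dynamic eval', 'hard-coded secret']
```

Wired into a pre-commit hook, a non-empty result blocks the commit until a human has looked; removed lines are ignored, since deleting a smell is exactly what we want the machine’s output to survive.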
Exercise 4: Name two things you can smell
An Ending
As the smell of electrical smoke cleared, with hundreds of orders due that morning, the bakers tried to weave pretzels by hand, but the knots looked childlike, and their hands were sweaty.
It all used to come so naturally; didn’t it? The owner quietly cursed as he sent his customers to other bakeries that each had their own working machines. Why travel to him anymore? The other bakeries were all feverishly summoning the same things.
In the still of the machine, as the morning waned, the bakers began to look at one another. Their faces softened; they blinked again. Then one young baker jolted, blurting out “Frank!” He had always come every morning since his wife had passed. No one had noticed when he didn’t come for a week, or the week after that, or ever again.
A flash of furrowed brow crossed the owner’s face and he looked down, then strode across to the counter. He picked up his folded apron and tied the strings into two bunny ears and around. Poised like a pianist, he started to knead, sprinkling on gingery spices. He pinched a piece of dough and held it up to his mouth, then held it higher and turned back to his three bakers, his voice thick and rising.
“Name one thing you can taste!”
Sources
- Skinner, B. F. (1953). Science and Human Behavior. Macmillan.
- OWASP Foundation (2023). OWASP Top 10 for Large Language Model Applications. Retrieved from https://owasp.org/www-project-top-10-for-large-language-model-applications
- Mosier, K. L., Skitka, L. J., Heers, S., & Burdick, M. (1998). Automation bias: Decision making and performance in high-tech cockpits. International Journal of Aviation Psychology, 8(1), 47–63. https://doi.org/10.1207/s15327108ijap0801_3