rand[om]

med ∩ ml

Software Willy Wonka

I call my LLM agents “Oompa-Loompas”. It started as a fun way to refer to them, especially when talking to non-technical people. They usually only know “ChatGPT”, but I use Claude Code, Codex, and a bunch of others. At some point it felt weird to keep saying “ChatGPT” when that’s not even what I’m using. So I needed a term that covered all of them. Oompa-Loompas.

Now I realize the metaphor goes deeper than I thought.

Programming is not going away

There’s this recurring take: “programming is going to disappear”. The response, which I think is correct, goes something like: “we’ll still need someone who knows how to tell the computer exactly what to create and how it should look”. Which, you know, is also the definition of programming.

People get lost thinking programming equals programming languages. But languages have always evolved toward higher levels of abstraction. Hardly anyone writes assembly by hand anymore. We write at higher levels, and that comes with trade-offs, but that’s how things are. LLMs are just the next level.

This new level of abstraction doesn’t change the essence of programming. What changes is how some of the work gets done.

The compiler parallel

Think about what compilers did. Before good optimizing compilers, you had to think about loop unrolling, function inlining, and register allocation yourself. You spent brain cycles on things the machine could eventually figure out better than you.

Compilers didn’t remove the need to understand performance. You can still write assembly when you need to. But you’re no longer obligated to think about those low-level optimizations for every piece of code you write.
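To make the parallel concrete, here’s a minimal C sketch of one of those optimizations, manual loop unrolling. The two functions compute the same sum; the second is the kind of thing programmers once wrote by hand to cut loop overhead, and that an optimizing compiler (e.g. GCC or Clang at -O2/-O3) will now typically do for you on the plain version:

```c
/* Plain loop: the straightforward version. Modern optimizing
   compilers will usually unroll (and often vectorize) this
   automatically at higher optimization levels. */
static long sum_plain(const int *a, int n) {
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Hand-unrolled version: processes four elements per iteration,
   reducing loop-counter and branch overhead. Functionally
   identical to sum_plain, just more code to write and maintain. */
static long sum_unrolled(const int *a, int n) {
    long s = 0;
    int i = 0;
    for (; i + 4 <= n; i += 4)
        s += a[i] + a[i + 1] + a[i + 2] + a[i + 3];
    for (; i < n; i++)  /* handle the leftover elements */
        s += a[i];
    return s;
}
```

The point isn’t that unrolling is hard; it’s that nobody is obligated to write the second version anymore, because the machine does it better.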

LLMs are doing the same thing, one level up. You still need to know how a computer works. You still need to understand what your code is doing. But you’re no longer obligated to spend brain cycles on the mechanical act of writing every line. You can redirect that energy elsewhere.

And just like compilers don’t prevent you from writing assembly, LLMs don’t prevent you from writing Python by hand. You still can. You just don’t have to if you don’t need to.

This isn’t about thinking less. It’s about redistributing where you spend your thinking tokens. The obligation shifts from “write this code” to “think clearly about what this code should do”.

The spectrum of programming work

Programming has always covered a wide spectrum. On one end: people who come up with new ideas, data structures, algorithms. More on the research side. Then there are people who can do both: have good ideas and execute them well. Good architects who also ship production-ready code.

On the other end of the spectrum, there’s work that doesn’t require as much high-level thinking. Once the approach is defined, the task is mostly implementation. You get a clear specification, you write the code. Not much architectural decision-making involved. We’ve all done this kind of work, especially when learning. In any organization, there’s always a portion of work that falls into this category: clear requirements, straightforward implementation.

I think this latter category is what gets automated away first. LLMs are particularly good at tasks where the specification is clear and the main job is translating that into working code.

The throughput has increased

If you were already good at both thinking and execution, your effective speed just went up. The energy you used to spend on implementation doesn’t disappear, but a significant chunk of it can now be redirected to the thinking bucket. The architecture bucket. The creative bucket.

There’s also more room for exploration. When implementation is cheaper, you can try more approaches. Prototype ideas you wouldn’t have bothered building before. Test multiple solutions before committing to one. The cost of experimentation has dropped.

The people who will thrive are the ones who can do both: execute quickly (now with help from LLM agents) and also think at a high level. Clear thinking. First principles. Not overcomplicating things.

This is something that’s a bit overlooked. It’s not just about being extremely smart about data structures or system design. Sometimes clarity of mind matters more. Thinking from first principles, which is still rare, becomes even more valuable.

What matters less, what matters more

Knowing the Python standard library by heart is becoming less valuable. I say this as someone who started programming with Vim even when VS Code was already around. I used to think typing speed mattered a lot. That knowing methods and APIs by heart was important. And in a way, it still is: the speed at which you can convert your thoughts into tangible things (programs, scripts, demos) remains extremely important.

But the mechanism for achieving that speed has changed. Back then, high throughput meant: fast typing, minimal latency between your brain and the code on screen. Knowing your editor, your language, your tools inside out.

Now? LLM agents are the next level of this. The bandwidth between your brain and what you see on screen has increased dramatically. Your Oompa-Loompas can handle a lot of the typing and lookup work.

So invest that freed-up energy wisely. Into clear thinking. Architecture. Creativity. First principles reasoning.

If the Oompa-Loompas do all the work, you better be a good Willy Wonka.