Posts

2026.MAY.07

3 Constraints Before I Build Anything

Jordan Lord:

These are the 3 constraints that I use before I start building anything. I'm a believer in constraints as an enabler for creativity. Constraints help us collapse the search space, and figure out innovative solutions to problems.

I've been a builder for 10 years, and I've built products that went nowhere because they were either too complex or had no identity. These are the constraints that I landed on after making those mistakes.

I tend to agree with the author — constraints breed creativity and all that jazz.

I find the second constraint fascinating in particular: “The core tech must be separable from the product.” I’ve been thinking about this in the context of product ideas I’m constantly exploring.

For example, Pi, my favourite coding agent and daily driver, has this: the pi-ai package, which is similar to Vercel's AI SDK (abstracting access to various models & providers), and pi-agent-core (built on top of pi-ai, providing common constructs for building an agent). OpenClaw is built on the latter, and that played a big role in making Pi popular.

2026.MAY.06

The layoffs will continue till we learn to use AI

Arnav Gupta:

But the truth is that these layoffs, even if they are not because AI is replacing you, and even if they are some form of AI-washing. These layoffs are still because of AI. And these layoffs will continue till we learn to use AI. Till we learn to convert AI-tokens into outcomes and not just input. Till we learn to re-align the speed of "alignment" with the new speed of coding. And till we figure out, beyond our 2 good and 8 stupid ideas, 10 more ideas that we can chase with our increased productivity.

This is a very refreshing take on the layoffs in large tech companies. It's the best I've read on the subject.

2026.APR.29

The Anatomy of an Agent Harness

Aparna Dhinakaran:

Someone asked me at a hacker event last week: "Can anyone actually tell me what a harness really is?" It was said with real skepticism. The kind of skepticism that says we all use the word "Harness" in the industry, but nobody actually knows what it is.

Fair question. Let me try.

This is a good post, and it does the important job of defining a term that is increasingly used in the context of AI agents.

Perhaps a good addendum would be to define an agent as something that wraps the harness into an app that users interact with. Claude Code is a harness and a coding agent merged into one. The Codex CLI is a coding agent that builds on the codex-app-server harness. Cursor is also a harness + coding agent, but they are also experimenting with Claude Code as a harness!

T3Code is a coding agent that demonstrates this difference best: it does not ship with its own harness and can instead use Codex, Claude Code, or OpenCode as harnesses.

One quibble I have with the linked post: not every component it describes as making up a harness is actually necessary in every harness.

As an obvious example, you could very well build an agent without subagents (and if you do want subagents, they would have to come in at the harness level, since subagents are exposed as tools to the LLM).

So what are the absolute minimal components of a harness? I think it's just the agentic loop: assembling the system prompt and tool definitions, executing the tool calls, assembling the results, and so on.

Context management and compaction are not required (they could live outside the harness). Skills are not required. We already talked about subagents. Prepackaged built-in skills should probably not be in any harness. Lifecycle hooks are nice to have. Session persistence & recovery is optional, and so is a permission & safety layer.
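To make the "just the agentic loop" claim concrete, here is a toy sketch of that minimal core. Everything in it is invented for illustration: call_model() is a deterministic stub standing in for a real LLM API, and the tool registry holds plain Python functions.

```python
import json

# Tool registry: the harness executes these on the model's behalf.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

# Tool definitions: the harness exposes these to the model.
TOOL_DEFINITIONS = [
    {"name": "add", "description": "Add two numbers",
     "parameters": {"a": "number", "b": "number"}},
]

def call_model(messages):
    """Stub for a real LLM call. It asks for one tool call, then
    emits text once it sees a tool result in the conversation."""
    if any(m["role"] == "tool" for m in messages):
        return {"type": "text", "text": "The answer is in the tool result."}
    return {"type": "tool_call", "name": "add", "arguments": {"a": 2, "b": 3}}

def agentic_loop(user_prompt, system_prompt="You are a helpful agent."):
    # 1. Assemble the system prompt and tool definitions.
    messages = [
        {"role": "system", "content": system_prompt, "tools": TOOL_DEFINITIONS},
        {"role": "user", "content": user_prompt},
    ]
    # 2. Loop: call the model, execute any tool call, feed the result back.
    while True:
        response = call_model(messages)
        if response["type"] == "tool_call":
            result = TOOLS[response["name"]](response["arguments"])
            messages.append({"role": "tool",
                             "content": json.dumps({"result": result})})
        else:
            return response["text"]

print(agentic_loop("What is 2 + 3?"))
```

Everything else a real harness adds (compaction, skills, hooks, permissions) wraps around this loop rather than replacing it.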

2026.APR.29

The West Forgot How to Build. Now It's Forgetting Code

Denis Stetskov:

Five to ten years from now, we’ll need senior engineers. People who understand systems end to end, who can debug distributed failures at 2 AM, who carry institutional knowledge that exists nowhere in the codebase. Those engineers don’t exist yet because we’re not creating them. The juniors who should be learning right now are either not being hired or developing what a DoD-funded workforce study calls “AI-mediated competence.” They can prompt an AI. They can’t tell you what the AI got wrong.

Ignore the click-baity title. This is a well-written and well-argued post on how the software industry might be hurtling towards a grim future: the kind of present the West's defence industry found itself in when an unexpected war broke out between Ukraine & Russia.

2026.APR.29

The Basics

Thorsten Ball:

Here's what I consider to be the basics. I call them that not because they're easy, but because they're fundamental. The foundation on which your advanced skills and expertise rest. Multipliers and nullifiers, makers and breakers of everything you do.

They don't usually show up in technical books and yet without them a lot of brilliant effort can go to waste. I constantly have to remind myself of them, sitting on my own shoulder and wagging a finger in my face.

What a great set of obvious but seldom articulated things that every developer would do well to go through at some regular interval (because these are easy to forget, especially in this age of agentic engineering).

This is an old post that I discovered through last Sunday's Joy & Curiosity, Thorsten Ball's weekly round-up of really wonderful links (a lot of them focused on agentic engineering, given Thorsten is building Amp, which I've heard described as the Porsche of coding agents).

2026.APR.26

You and Your Research

I had come across this old lecture from Richard Hamming before but never watched it. Then, in just the last day, multiple people recommended it on Xitter, including Paul Graham. When Thorsten Ball (of Amp Code) also recommended it in his excellent Joy & Curiosity newsletter today, I had to watch it.

What an excellent talk covering such a wide array of topics, but all towards an exhortation for how to be great. I watched the entire thing at 1x speed. Yes, it's that good!

2026.APR.26

Tim Cook Personified Big Tech's Maturity

Andrew Sharp:

And that's ultimately Cook's legacy, to me. He made sensible choices under the circumstances, nurturing Apple profits and its stock price at every turn. If many of those choices were ultimately predictable and unfulfilling, well, that's the game for a company at Apple's stage of the corporate life cycle.

Where Apple under Jobs was selling performance and possibility, Apple today capitalizes on our collective dependence on the iPhone ecosystem and promises superior reliability to any peers. And that's still a pretty good deal! But it's a categorically different value proposition than that of the company that was changing the way an entire generation interacted with technology.

This is the best commentary I've read in light of the announcement of Cook's retirement. Most of the other coverage has been way too positive; this is much more balanced and closer to how I feel.

2026.APR.25

The Zechner-Lopopolo Continuum

Alex Volkov:

[Image: The Zechner-Lopopolo Continuum]

This is a recap of the AI Engineer Europe conference that took place in London a couple of weeks ago. But the more interesting thing is the debate that the title and the image above point to.

Mario Zechner (creator of the Pi coding agent, my preferred coding agent) talked about

  • why & how he built Pi (this summarises why I'm in love with Pi)
  • the complexities that people wielding agents have brought to OSS maintenance, and how he is tackling these with innovative solutions like OSS Vacations/Weekends
  • (most importantly) advocating for reading critical code thoroughly and generally slowing down to ensure we don't drown in AI slop code

Ryan Lopopolo (from OpenAI) talked about some vague things, like code being a liability, how he is a "token billionaire", and how he has mandated that his team not look at the code. Maybe he talked about more things; I just couldn't sit through the entire talk.

If it's not obvious, I'm firmly at the Zechner end of the continuum.

Maybe this will change in a couple of years or even in just a few months, but in April 2026, anyone who is too far out on the Lopopolo end is taking on a lot of technical debt that they may not really be able to pay off.

And no: no amount of tests or specs is going to prevent that technical debt from building up, because the debt is not about correctness. The things that lead to this debt from agents are the same things that lead to debt from humans: poor design choices, code duplication, needlessly defensive code, and many other such sins, which agents can commit at a pace hitherto unimaginable for humans.

The only way to prevent or tame this is for humans to read the code. Or to break the problem down into small enough chunks that agents actually follow the "don't duplicate code" and other commandments from our AGENTS.mds. Or, in other words, "human in the loop."

"But that will slow us down," I can hear some people say. Yes, slow the fuck down1.

Footnotes

  1. We'll still be way faster than we were a year ago, so don't despair.

2026.APR.25

Why Isn't Everything Different Yet?

Dave Griffith:

So: where are we? The technology exists and is impressive. The infrastructure buildout is underway and massive. Workflows are being redesigned in early-adopter organizations, often via guesswork. We've got one (1) product area (software development agents) where we're past "early adopter" and moving onto mass-market. Legal frameworks are being written badly by people who have never used the technology, which is traditional. Business models are being discovered by trial and error, also traditional. Fortunes are being made and lost, another time-honored tradition.

The critics who say nothing has changed are measuring at the wrong resolution. The critics who say change should have been instantaneous have a broken model of how change works. The honest answer is: this is going extremely fast, it will often feel slow until suddenly it doesn't, and the people who have built understanding now will not be scrambling in three years.

Amen. Good, entertaining read.

I'm going to refer people to this when they say either that things will not change dramatically, or that the dramatic change has already happened (so much more to come).

2026.APR.25

Coding Models Are Doing Too Much

nrehiew:

If you have used any of these tools in the past year, you have probably experienced something like this: you ask the model to fix a simple bug (perhaps a single off-by-one error, or maybe a wrong operator). The model fixes the bug but half the function has been rewritten. An extra helper function has appeared. A perfectly reasonable variable name has been renamed. New input validation has been added. And the diff is enormous.

I refer to this as the Over-Editing problem where models have the tendency to rewrite code that didn't need rewriting.

Yes! A thousand times, yes.

GPT models are especially prone to this over-editing problem. Part of it comes from writing code that is way too defensive[1], but it's not just that: they are really eager to "fix" your code even when there is no need for it.

Thankfully, GPT models are also very good at following instructions. So I have had instructions to circumvent this problem in my global AGENTS.md for a while, and it helps quite a bit.
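For illustration, this is the general shape such an instruction block could take. To be clear, this is a hypothetical sketch, not the actual contents of my AGENTS.md:

```markdown
## Editing discipline

- Make the smallest change that fixes the problem. Do not rewrite
  code you were not asked to change.
- Do not rename variables, extract helper functions, or add input
  validation unless explicitly requested.
- If you think surrounding code should be improved, mention it in
  your summary instead of changing it.
```

The point is to name the specific over-editing behaviours (renames, extractions, defensive checks) rather than just saying "keep diffs small."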

This is what the linked post also found: over-editing drops across models when they are explicitly prompted against it.

This is a good post. It's not an opinion piece, but takes a scientific approach by setting up experiments and providing evidence in the form of results.

Footnotes

  1. I've seen a couple of comments saying that GPT-5.5 has gotten better in this regard and doesn't write such defensive code anymore. I have yet to verify this.