Who Becomes the Compiler?
If anyone can build in English, the scarce thing is permission to act.
I thought this post was going to be about English becoming the new programming language.
That is true, but it is not the interesting part anymore.
The interesting part is what happens after the code is generated. What happens when the software wants to touch a bank account, initiate a payment, switch an electricity provider, or make a decision with consequences outside the screen.
Anyone can now generate plausible software in plain language. You describe a workflow, a form appears, an API gets called, something runs. The bottleneck that used to sit between domain knowledge and code has clearly weakened.
But plausible software is not the same thing as trustworthy software.
That is the shift I think people are still missing.
English has no compiler
I still think the phrase "English is the new code" is directionally right.
If you understand a domain deeply, you have a much shorter path from knowing to building than you did even two years ago. That matters. The democratisation of code does not make domain expertise less valuable. It makes it executable.
But English has no compiler.
It has no type checker. No linter. No test suite. No senior engineer leaning over your shoulder saying "this mostly works, but it will fail the first time a real customer does something ugly".
That is why so much AI-generated software feels impressive right up until the moment you look closely.
At Akahu, the open finance infrastructure company I founded, we have watched the volume of applications rise as building gets easier. Some of them are excellent. Some are clearly built by people who know their domain cold and have used AI to close the translation gap.
And some are exactly what you would expect from this moment in the tooling cycle. The app looks fine. The demo works. The API connection is there. Then our review team inspects the implementation and finds insecure handling of banking API tokens, weak error boundaries, missing controls, or naive assumptions about how the real world actually behaves.
The builders are not stupid. They are using the tools exactly as advertised. Describe what you want. Get working software back.
The problem is that "working" is doing far too much work in that sentence.
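To make that gap concrete, here is a toy sketch (every name here is invented, not any real banking API) of the kind of detail that separates a working demo from reviewable software: the token never leaves the server, authorisations expire rather than silently persisting, and a payments call cannot ride on a read-only consent.

```python
import time

class TokenVault:
    """Toy server-side store so a banking token never reaches the client.

    Illustrative only: the names and structure are invented, not any
    real provider's API. The point is what a review looks for.
    """

    def __init__(self):
        self._tokens = {}  # user_id -> (token, expires_at, scopes)

    def store(self, user_id, token, ttl_seconds, scopes):
        self._tokens[user_id] = (token, time.time() + ttl_seconds, frozenset(scopes))

    def use(self, user_id, required_scope):
        record = self._tokens.get(user_id)
        if record is None:
            raise PermissionError("no authorisation on file")
        token, expires_at, scopes = record
        if time.time() >= expires_at:
            # Expired authorisation fails closed instead of quietly retrying.
            del self._tokens[user_id]
            raise PermissionError("authorisation expired")
        if required_scope not in scopes:
            # A payments call cannot ride on a read-only data consent.
            raise PermissionError(f"scope {required_scope!r} not granted")
        return token
```

A demo never exercises the failure paths, which is exactly why "working" proves so little.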
I wrote recently in Is Your Software Scaffolding? that the durable layer in software shifts toward context, policy, audit, and permissioning as agents do more of the work. This is that argument in more concrete form. Once code gets cheaper, the missing layer is not more code generation. It is trust.
New Zealand already has a model
This is why the most interesting part of New Zealand's Consumer Data Right is not just that open banking went live on 1 December 2025.
It is the shape of the trust layer sitting around it.
The Customer and Product Data Act 2025 created a framework where customers can authorise accredited requestors to access certain data and initiate certain actions. MBIE accredits participants. There are technical standards. There is a public register of participants. Customers give explicit authorisation. They can withdraw it. The system is designed so that access to sensitive systems is not just "here is an API key, good luck".
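If you think in code, the shape of that framework is roughly this (a deliberately simplified sketch; the names and fields are invented, not the actual Act, register, or standards): access requires being on the register and holding a live, withdrawable customer authorisation, and either condition failing means no access.

```python
# Stand-in for MBIE's public register of accredited requestors.
# Invented identifier, for illustration only.
ACCREDITED = {"requestor-123"}

class Authorisations:
    """Toy model of explicit, withdrawable customer authorisation."""

    def __init__(self):
        self._grants = {}  # (customer, requestor) -> set of permitted actions

    def grant(self, customer, requestor, actions):
        # The customer explicitly authorises specific actions, not blanket access.
        self._grants[(customer, requestor)] = set(actions)

    def withdraw(self, customer, requestor):
        # The customer can revoke at any time; access ends immediately.
        self._grants.pop((customer, requestor), None)

    def allowed(self, customer, requestor, action):
        if requestor not in ACCREDITED:
            # Not on the register: no access, regardless of consent.
            return False
        return action in self._grants.get((customer, requestor), set())
```

Both checks have to pass. That is the structural difference from "here is an API key, good luck", where a single leaked secret is the whole trust model.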
We tend to talk about AI-generated software as if the whole story is in the model. Better reasoning. Better code generation. Better agents.
That is only half the story.
The other half is institutional. Who is allowed to participate. Under what standards. With what accountability. With what audit trail. What happens when they fail. Who a customer complains to when something goes wrong.
In other words, who becomes the compiler.
There is an obvious bias here. Akahu sits right inside this layer. That is also why the pattern is visible from where we stand.
Akahu now appears on MBIE's public register as an accredited intermediary for both customer data and payments under the banking designation. That is not a metaphor for where the world might go. It is a live example of a trust layer already operating in New Zealand.
The intermediary is the interesting bit
The clever part of the framework is not just accreditation in the abstract. It is the intermediary model.
An intermediary is basically a trusted operator sitting between sensitive systems and a long tail of smaller builders. Instead of every tiny app earning direct trust from scratch, some of that trust, due diligence, and operational responsibility can sit with the intermediary.
If every tiny builder had to independently earn direct trust from every bank and regulator, the system would choke. It would be too slow, too expensive, and too hard for the long tail of useful software to exist at all.
If every tiny builder got raw access with no meaningful gate in front of them, that would be reckless.
The intermediary model is the compromise.
MBIE's accreditation guidance makes clear that intermediaries take on extra obligations around due diligence, monitoring, contracts, complaints processes, liability cover, and the handling of fourth parties operating through them. That is the real mechanism here. The trust does not sit only in the app. It sits in the structure around the app.
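The same point in sketch form (again a toy, with invented names, not how any accredited intermediary actually implements its obligations): the vetting, the rejection, and the audit trail all live with the intermediary, so a small builder inherits a trust structure instead of building one.

```python
class Intermediary:
    """Toy model of the intermediary pattern: trust obligations sit with
    the operator in the middle, not with every small builder."""

    def __init__(self):
        self.vetted_builders = set()  # builders that passed due diligence
        self.audit_log = []           # who did what, on whose behalf

    def onboard(self, builder_id):
        # Stands in for the due-diligence, contracts, and monitoring
        # obligations an accredited intermediary takes on.
        self.vetted_builders.add(builder_id)

    def request(self, builder_id, customer, action):
        if builder_id not in self.vetted_builders:
            # Unvetted builders are refused, and the refusal is recorded.
            self.audit_log.append(("rejected", builder_id, customer, action))
            raise PermissionError("builder not vetted by intermediary")
        # The intermediary, not the builder, is accountable for this call.
        self.audit_log.append(("forwarded", builder_id, customer, action))
        return f"executed {action} for {customer} via {builder_id}"
```

Every call, allowed or refused, leaves a record. That audit trail is what accountability looks like in practice.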
It is also worth preserving the tension here. Any trust layer can become a gatekeeper. It can slow things down. It can protect incumbents if it is designed badly. The point is not that more gates are automatically good. The point is that high-consequence systems already need a trust model, and "the code mostly works" is not a serious one.
That feels like a glimpse of the future, at least in high-trust domains.
Not because government will accredit every piece of software. That would be insane. Most software should remain cheap, messy, and easy to ship. The world does not need a regulator for a to-do app or an internal dashboard.
But as soon as software can move money, change a bill, touch infrastructure, or materially affect a real person, someone starts demanding a compiler of some kind. In banking that may be the regulator. Somewhere else it may be the platform, the insurer, the procurement gate, or the enterprise buyer. Not a code compiler. A trust compiler.
What this probably looks like outside regulated sectors
I do not think non-regulated industries will copy the legal form of CDR exactly.
Over the next few years, I do think they will copy the shape.
In highly regulated sectors, the trust layer will often be statutory. Banking is already there. Electricity looks likely to be next, with MBIE saying work is underway and open electricity is targeted from mid-2027.
Outside those sectors, I expect more private versions of the same thing.
Platform approval programmes. Industry certifications. Procurement gates. Insurer-driven requirements. Public trust marks. Vertical intermediaries that take responsibility for a long tail of smaller builders. Some combination of all of the above.
The mechanism is not that every industry suddenly becomes regulated. It is that buyers of high-consequence software stop accepting "the demo worked" as a serious trust model. They start demanding contractual accountability, certification, insurance, review, and a clear chain of responsibility.
That is not only a constraint on small builders. Done well, it is also an enabler. A good trust layer gives a small team a credible path into serious systems without forcing every new entrant to independently prove everything from scratch.
The details will vary, but the pattern is the same. As software generation gets easier, the right to act gets more valuable.
That is the part a lot of the current AI conversation still misses. People talk as if code generation itself is the whole moat or the whole threat. It is neither.
The value shifts toward whoever can make generated systems safe enough, reviewable enough, and accountable enough to operate in the real world.
In The Best Time to Build on Bank Data, I argued that open banking finally made a whole class of products buildable in New Zealand. I still think that is true. This is the other half of the same story. The opportunity is not just in access to the rails. It is in the trust architecture around them.
This might be a real role for NZ Inc
New Zealand is not going to lead the world in foundation models. That is not our lane.
But we could plausibly lead in something more realistic and, for a country like ours, probably more useful.
In New Zealand's three-year window, I argued that small countries can win a shift like this by building real structural advantages rather than pretending they can match the biggest players at their own game. This feels like one concrete version of that.
We could get very good at the boring layer everyone else underestimates. Standards. Accreditation. Consent. Revocation. Interoperability. Audit. Intermediaries. Complaints and accountability. The actual machinery that lets software and agents interact with sensitive systems safely.
That may sound unglamorous. It is also exactly the kind of thing small countries can win at.
And some of the pieces are already appearing. CDR starts answering who can access data and initiate actions under what standards. The Department of Internal Affairs' Digital Identity Services Trust Framework starts answering which identity services can be trusted to prove who is who, under what rules, and with what accountability. The Govt.nz app hints at the consumer-facing layer that could eventually sit on top.
That matters for builders as much as policymakers. Good trust infrastructure removes generic plumbing from the critical path. If reusable identity, consent, messaging, credentials, wallet, and payment layers become reliable building blocks, departments should need less time to rebuild the same scaffolding before they can deliver a trusted service. The same logic applies to small private builders. The less time you spend rebuilding trust scaffolding, the more time you spend solving the actual problem.
We are small enough to coordinate. Small enough to get banks, fintechs, regulators, utilities, and standards bodies in the same room. Small enough that if we get the operating model right in banking and then electricity, we might end up with something genuinely exportable, not as software, but as an institutional pattern.
That could mean exportable standards playbooks, accreditation models, intermediary operating patterns, compliance tooling, and credibility as a testbed for how AI-era services should interact with high-trust systems.
I do not know how far this generalises yet. It may turn out that some industries can rely mostly on private trust layers and others need formal accreditation. It may turn out that the intermediary model works beautifully in banking and badly somewhere else. It may turn out that the best compiler in some domains is the platform, not the regulator.
But I am fairly confident about one thing.
If anyone can build in English, the scarce thing is no longer the code.
It is the permission to act.
That means the compiler stops being only technical. It becomes institutional.
And if that is true, New Zealand may already be building a small part of the future without fully realising it.
I'm Ben Lynch. I write about founders, AI, and what happens next from New Zealand. Say hello at ben@thinkdorepeat.ai.
If you're new here, Start Here is the best place to begin.
If you know someone building in a high-trust industry, send this to them.