<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Think . Do . Repeat]]></title><description><![CDATA[I‘m Ben Lynch. I think about founders, AI, and what happens next from New Zealand. Say hello at ben@thinkdorepeat.ai]]></description><link>https://thinkdorepeat.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!bUmv!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75e4af6d-10f9-4957-84d5-d7b18c5dcf8a_652x652.png</url><title>Think . Do . Repeat</title><link>https://thinkdorepeat.ai</link></image><generator>Substack</generator><lastBuildDate>Sun, 05 Apr 2026 17:58:37 GMT</lastBuildDate><atom:link href="https://thinkdorepeat.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Ben Lynch]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[thinkdorepeatai@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[thinkdorepeatai@substack.com]]></itunes:email><itunes:name><![CDATA[Ben Lynch]]></itunes:name></itunes:owner><itunes:author><![CDATA[Ben Lynch]]></itunes:author><googleplay:owner><![CDATA[thinkdorepeatai@substack.com]]></googleplay:owner><googleplay:email><![CDATA[thinkdorepeatai@substack.com]]></googleplay:email><googleplay:author><![CDATA[Ben Lynch]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Is Your Software Scaffolding?]]></title><description><![CDATA[Most SaaS products exist because humans needed an interface. 
Agents don't.]]></description><link>https://thinkdorepeat.ai/p/is-your-software-scaffolding</link><guid isPermaLink="false">https://thinkdorepeat.ai/p/is-your-software-scaffolding</guid><dc:creator><![CDATA[Ben Lynch]]></dc:creator><pubDate>Sun, 29 Mar 2026 18:33:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bUmv!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75e4af6d-10f9-4957-84d5-d7b18c5dcf8a_652x652.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Much of modern software exists for the same reason: humans are slow at processing structured information, so we build interfaces to help them do it faster.</p><p>Dashboards. Forms. Menus. Workflows. Per-seat licences. A huge chunk of SaaS is built on a single assumption: that humans operate software to get work done.</p><p>That assumption starts to break over the next few years.</p><h2>The question nobody's asking</h2><p>Across a lot of SaaS right now, the conversation seems to be "how do we add AI features?" New copilot. Smarter search. Auto-generated summaries in the sidebar.</p><p>Wrong question.</p><p>The right question is: does your product exist because humans needed an interface to do work that agents will do directly?</p><p>If your roadmap is mostly UI polish and AI features, you may be investing in the layer that gets commoditised first.</p><p>Because if it does, you're not selling the building. You're selling scaffolding. And scaffolding comes down when the building is finished.</p><p>I can see this from inside Dolla, the accounts payable company I'm building. I started with a very simple thesis: in a lot of operational software, most of the work happens before software. It starts in messy inboxes, PDFs, attachments, email threads, and half-stated instructions. 
Humans have been the translation layer between that unstructured mess and systems like Xero.</p><p>So the first useful thing I built was not a dashboard. It was an invisible workflow: ingest the email, interpret the context, extract the invoice, code it correctly, and push the result into Xero. No app. No portal. Just email in, structured action out.</p><p>And building that changed my view of software. Once the interface disappears, you start seeing how much of the product was really just a bridge for a human operator. In plenty of products, the dashboard is not the product. It's the temporary bridge that existed because a human had to drive.</p><p>Think about what a lot of B2B SaaS is still organised around. A CRM is a structured database that humans manually update so other humans can manually query it. Project management tools are lists that humans manually reorder so other humans can manually check. In accounting, a lot of the work is still humans translating messy real-world inputs into a ledger that other humans review.</p><p>In many categories, those products are bridges between a human and a system of record. The human needed the bridge because they couldn't reliably interpret the upstream mess, apply the right context, and then act on the system underneath.</p><p>Over the medium term, agents will increasingly be able to do exactly that through APIs, policies, and system-level interfaces that don't require a human sitting in front of a screen.</p><h2>Per-seat pricing is the canary</h2><p>Here's one of the earliest visible symptoms of the structural problem.</p><p>Most SaaS companies price per seat. Salesforce, Atlassian, Slack, virtually everyone. The model works because headcount and software usage scale together. More employees, more seats, more revenue.</p><p>But what happens when one person with a team of agents does the work that used to require ten people?</p><p>The company still needs the same outcomes. Maybe more. But they need one seat, not ten. 
And the agent doing the work doesn't need a dashboard. It needs an API.</p><p>Per-seat pricing assumed that humans were the unit of work. In more and more categories, that assumption weakens over time. And companies priced around it are likely to feel that pressure early.</p><p>Some already have. The moment the same outcome can be delivered with fewer humans touching the software, seat expansion stops being a reliable growth engine.</p><h2>What's actually dying</h2><p>I want to be precise about this, because "SaaS is dead" is a lazy take and it's wrong.</p><p>SaaS isn't dying. But in a lot of categories, the interface layer is becoming dramatically less valuable.</p><p>The database underneath your CRM still has value. The integrations your accounting platform maintains still have value. The compliance engine in your HR software still has value.</p><p>What gets commoditised is the dashboard sitting on top of all of it. The menus, the clicks, the forms, the carefully designed user experience that helped humans navigate complexity. That layer was built for a user that increasingly won't be the one doing the work.</p><p>That doesn't mean every UI vanishes overnight. For a long time, many products will still have one. But it starts looking less like the place work gets done and more like an operator console for exceptions, approvals, and audit. The human stays in the loop, just in a different place.</p><p>The scaffolding was never the building. It just looked like it was, because for twenty years, the scaffolding was all anyone could see.</p><h2>The memory problem nobody's talking about</h2><p>Here's where it gets interesting, and where my own thinking has moved on since the last post.</p><p>In <a href="https://thinkdorepeat.ai/p/i-dont-consume-the-internet-raw-anymore">I Don't Consume the Internet Raw Anymore</a>, I argued that context is becoming the scarce layer for the individual. I think that understates the shift. 
Over a longer horizon, memory becomes infrastructure at the company level.</p><p>In a normal SaaS company, organisational memory is scattered everywhere. It's in people's heads. In Slack threads from six months ago. In ticket comments nobody reads. In CRM notes that one sales rep wrote before they left. In the tribal knowledge that walks out the door every time someone quits.</p><blockquote><p>Most software doesn't remember anything. It stores records. That's not the same thing.</p></blockquote><p>When a new employee joins, they spend months absorbing context that exists nowhere in the system. When a customer calls with a problem, the support agent searches through fragmented tools to reconstruct what happened. When a decision needs to be made, the relevant history is scattered across twelve different applications.</p><p>This is the part SaaS founders should be paying attention to. Not AI features, not chatbot integrations, not copilots in the sidebar. The fact that the next generation of software treats memory as infrastructure.</p><p>What I mean by that: software that observes what happens, reviews patterns, and promotes stable, recurring, decision-relevant knowledge into durable memory that agents can consume. Not random notes in a database. Structured understanding that changes how the system behaves next time.</p><p>A record says Supplier X sent an invoice on Tuesday. Memory says this supplier is usually coded to maintenance for this property, except when the line item includes pool chemicals, and confidence should drop if the venue manager overrode the last two suggestions. A record stores history. Memory changes behaviour.</p><p>Customer-specific memory that overrides generic rules. Confidence levels that adjust based on history. Context that improves the next decision.</p><p>But this doesn't stop at the product. The same principle applies to the entire company.</p><p>Think about what a business actually is. 
It's a set of decisions made repeatedly, under varying conditions, with imperfect information. Hiring, pricing, prioritisation, customer communication, operations. In most companies, the knowledge behind those decisions lives in people's heads. When someone leaves, the company gets dumber. When someone new joins, the company spends months getting back to where it was. The institutional memory is fragile because it was never really institutional. It was personal.</p><p>A company built on memory as infrastructure doesn't work like that. Every decision, every correction, every override teaches the system something. A customer complaint doesn't just get resolved. It updates the pattern for how similar complaints should be handled next time. A pricing change that underperforms doesn't just get reversed. The reasoning and the outcome get captured so the same mistake doesn't repeat. An edge case that required human judgment today can become a rule, or at least a lower-confidence branch, the system applies next time.</p><p>The whole company becomes a learning loop. Not just the product learning its customers, but the business learning itself. What's working, what's failing, where confidence is high, where it should slow down and ask. Over time, that compounds into something that's genuinely hard to replicate, because the learning is specific to your customers, your market, and your operating history. A competitor can copy your features. They can't copy what your system has learned from years of making and correcting decisions in the real world.</p><p>That's a fundamentally different architecture from "store a record and let a human interpret it." And it's the moat that will matter. Not your UI. Not your feature count. Whether your system actually learns.</p><h2>What I'm seeing from the inside</h2><p>I mentioned Dolla earlier. Once that invisible workflow existed and the interface disappeared, the next bottleneck became obvious. It wasn't OCR. 
It wasn't extracting invoice fields. That part is close to solved now. The hard part was operational context. Same supplier, different venue. Same invoice format, different coding treatment. A note in the email body that changes the allocation. A historical override that should reduce confidence this time.</p><p>That's the point where a simple automation layer starts becoming something closer to an operating system. The product still processes invoices. But the thing getting more valuable is the system underneath. It learns each customer's patterns. It builds memory of how a specific organisation codes bills. It tracks which suppliers are stable and which are ambiguous. It knows when confidence should rise and when it should slow down and ask for review.</p><p>In products like this, the memory becomes more valuable than the dashboard.</p><p>It also leaves a trail of reasoning. Not just the answer, but why the answer was chosen. In accounting that matters, because trust comes from explainability as much as accuracy. If a system is going to take action for you, it needs to be reviewable. A customer shouldn't just see that an invoice was coded to maintenance. They should be able to see that the supplier usually lands there for this venue, that the email body included an instruction that changed the allocation, and that confidence dropped because of a recent override. That is what makes automation feel trustworthy rather than magical.</p><p>And when the learned behaviour is wrong, someone still has to own that. In accounting, "the system learned from its mistakes" is not an answer your auditor accepts. The accountability has to be as clear as the reasoning.</p><p>I should be upfront about my bias here. I'm building Dolla this way, so of course I'd argue this is the future. I'm also describing the categories where this pattern seems strongest to me: software that sits between messy real-world inputs and structured systems of record. 
I expect those categories to change first.</p><p>That's the role I think humans keep. Not clicking around software all day. Setting policy, reviewing exceptions, handling edge cases, and making the judgment calls that shouldn't be automated away.</p><p>That's not "SaaS with AI features bolted on." That's a different thing entirely.</p><h2>The uncomfortable audit</h2><p>If you're running a software company, here's the question worth sitting with:</p><p>If an agent could operate the underlying job through APIs, policy, and audit, would anyone still need your UI?</p><p>For a lot of horizontal SaaS, over time, the honest answer may be no. The UI was the product because humans needed it. Agents don't need a drag-and-drop interface to reorder a task list. They don't need a Kanban board. They don't need a settings page with thirty toggles. They need services to act on, context about what matters, and policy about what they're allowed to do.</p><p>That last part is important. The future isn't "agents with no guardrails." It's agents operating within explicit policy. Actions classified by risk, reversibility, and confidence. Some things get automatic approval. Some need human sign-off. Some are blocked entirely.</p><p>Once software starts acting this way, the valuable layer shifts again. First the interface matters less because the agent can operate the job directly. Then the control layer matters more because someone still has to define what the agent is allowed to do, when it should slow down, and how its actions get reviewed. That governance layer is real software. That's the building, not the scaffolding.</p><blockquote><p>The job is not to keep humans clicking around software. It's to keep human judgment at the boundary where consequences live.</p></blockquote><p>That's not a temporary compromise. It's the architecture.</p><h2>What survives</h2><p>Software doesn't disappear. But the valuable layer shifts.</p><p>The infrastructure survives. 
Databases, authentication, payments, communications plumbing. Agents need to act in the world, and they need reliable infrastructure to do it. The pipes matter more than ever.</p><p>Customer-specific context survives too. If your software has accumulated years of customer-specific context that makes it smarter over time, that's genuinely hard to replicate. The moat isn't your feature list. It's what your system knows that nobody else's does.</p><p>Deep vertical knowledge survives as well. Generic horizontal platforms get squeezed. But software deeply embedded in a specific industry, with domain knowledge that took years to build, still has a reason to exist. Accounting rules, compliance requirements, industry-specific workflows. Agents need that knowledge. They just don't need the same interface layer that currently delivers it.</p><p>And policy and governance survive. As agents do more, the question of what they're allowed to do becomes critical infrastructure. Permission systems, audit trails, approval workflows for high-stakes actions. This is real, valuable, lasting software.</p><p>You can already see enterprise software moving in this direction. The language varies, but the underlying idea is the same: as agents start acting inside the business, the control layer for visibility, permissioning, and accountability matters more.</p><p>If I had to rank the durable moats, I'd put them in this order: customer-specific context first, trusted permission to act second, distribution into real workflows third, and deep vertical knowledge fourth. Those get stronger as UI and code get cheaper.</p><p>There's a second-order effect worth noting. It's not just that agents will increasingly use software directly. They'll increasingly write it too. First the agent replaces the user of the software. Then it replaces much of the team writing it. At that point, the scarce thing is no longer the interface, or even most of the code. 
It's the context the code acts inside: the customer-specific data, the accumulated memory, the permissioning, the audit trail, the operational history, the real-world trust the system has earned. The moat moves from the product to the environment it is generated inside.</p><h2>The rebuild opportunity</h2><p>I don't think this is a doom story for software founders. I think it's a rebuild story.</p><p>At the company level, this is really the same argument I made in <a href="https://thinkdorepeat.ai/p/new-zealands-three-year-window">New Zealand's three-year window</a>: the advantage goes to companies built clean from day one, not legacy businesses trying to bolt AI onto architectures designed for a different era.</p><p>The companies that pivot from "software humans operate" to "infrastructure agents consume" will be worth more, not less. The total value of work being done isn't shrinking. If anything, it's growing. It's just that the interface between "thing that needs doing" and "thing that gets done" is collapsing.</p><p>If your software sits in that collapsing middle, you have a choice. You can keep polishing the dashboard and adding AI features to an interface that's becoming irrelevant. Or you can ask what your software actually knows, what data it's accumulated, what context it holds that nobody else has, and rebuild around that. That's not an overnight flip. Most companies will run both layers for years. But which layer you invest in matters now, because the one you starve is the one you won't have when the shift accelerates.</p><p>The SaaS model sold organised interfaces to humans doing manual knowledge work. That was the scaffolding.</p><p>The building is software that accumulates context, governs autonomous action, and compounds understanding over time, while keeping human judgment where it actually matters.</p><p>The scaffolding comes down. The building is just getting started.</p><div><hr></div><p>I'm Ben Lynch. 
I think about founders, AI, and what happens next from New Zealand. Say hello at ben@thinkdorepeat.ai.</p><p>If you're new here, <a href="https://thinkdorepeat.ai/p/start-here">Start Here</a> is the best place to begin.</p><p>If you know someone whose roadmap is still optimised primarily around humans using the software, send this to them.</p>]]></content:encoded></item><item><title><![CDATA[I Don't Consume the Internet Raw Anymore]]></title><description><![CDATA[The browser is too raw. ChatGPT is too generic. I built a layer in between.]]></description><link>https://thinkdorepeat.ai/p/i-dont-consume-the-internet-raw-anymore</link><guid isPermaLink="false">https://thinkdorepeat.ai/p/i-dont-consume-the-internet-raw-anymore</guid><dc:creator><![CDATA[Ben Lynch]]></dc:creator><pubDate>Sun, 22 Mar 2026 19:34:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bUmv!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75e4af6d-10f9-4957-84d5-d7b18c5dcf8a_652x652.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I actually enjoy the internet again. I hardly ever use it in the traditional sense anymore, but I'm constantly connected to it. More than I have been in years.</p><p>For a long time that wasn't true. Finding things online used to be fun. Somewhere along the way that disappeared, and the internet became something to manage rather than explore. I don't think I noticed exactly when it happened. I just stopped looking forward to it.</p><p>What changed isn't the internet. It's that I stopped consuming it raw.</p><p>I don't go hunting for most things anymore. The system keeps working in the background, 24/7, and sends things back to me in a form I can actually use. It knows roughly how I work, what tends to matter now versus later, and when something should interrupt me immediately versus wait.</p><p>Most things I read now get routed through email first. 
Links, questions, notes, random half-formed thoughts. They go into my own system before they come back to me. In that sense the relationship flipped. The old internet was mostly pull. This is much more push, but on my schedule rather than the internet's.</p><p>That sounds like a weird step backwards. It isn't. The browser is too raw now. ChatGPT, Claude, and the likes are too generic.</p><p>The browser gives you the full firehose. Search results, headlines, clips, newsletters, links, replies, junk.</p><blockquote><p>The internet used to feel like a library. Now it feels like a casino attached to an industrial waste pipe.</p></blockquote><p>Once it starts feeling like that, you stop trying to browse your way out of the problem.</p><p>I started cutting years ago. Social media was the first thing to go. I haven't had a presence on any platform in years. Not a detox, not a break. I just stopped and never went back. The signal-to-noise ratio was so bad that filtering wasn't the answer. The answer was to turn it off entirely.</p><p>That taught me something. You don't fix a firehose by building a better bucket. You turn it off and only reconnect what you actually need. The interesting question is what comes back.</p><p>Social media didn't come back. But I still had the browser, and I still had the raw internet behind it. And then chatbots arrived, promising to fix the problem.</p><p>They didn't. They average the firehose back to you.</p><p>If you step back, that is a strange thing to build a workflow around. We trained giant models on vast slices of the internet and other human knowledge, then seemed surprised when the answers came back smoothed into something broadly plausible. Of course they do. The whole system is optimised for synthesis across a huge body of contradictory human output. It's impressive. It's also generic by design.</p><p>I think this explains a lot of the disappointment too. 
People expect magic, ask for something vague, get back something smooth and generic, and then conclude the tool is useless. A lot of the eye-rolling from serious developers about vibe coding comes from exactly this failure mode. If you don't bring context, standards, taste, and the ability to tell when the model is bluffing, you get slop. That isn't a weird edge case. It's the default outcome.</p><p>The term "AI slop" is really just a name for what happens when you treat plausible synthesis as if it were judgment. We are still learning how to use these systems properly. The model can do a surprising amount, but only if the human using it knows where the generic output stops being helpful and starts quietly breaking the work.</p><p>That's useful up to a point. But generic AI has the same problem the web has. Too much noise, not enough context. It can produce clean sentences about almost anything, but it doesn't know what's relevant to me, what I've already read, what I believe, what work I'm actually doing, or what standards I want applied.</p><p>That's the weird part. People are effectively asking a model trained on everybody to answer specifically for them. Sometimes it can. Often it gives you the median internet with better formatting.</p><p>What's useful now isn't more access. It's the same thing I learned from cutting social media, applied to everything else. Default to silence. Only reconnect what earns its way back. Then wrap what survives in enough context that it's actually useful to you, not just plausible to anyone.</p><p>The models will keep improving, and that matters. But for a lot of real work, they already crossed the threshold of being genuinely useful. The bigger gain now is not squeezing out another jump in raw intelligence. It's adding your own context, memory, and standards so the output becomes useful to you.</p><p>That's the shift I keep coming back to. Raw model intelligence is becoming abundant. Context is the scarce layer. 
Once that happens, the internet and the software built on top of it stop being one-size-fits-all. They start reorganising around the individual user.</p><p>I want fewer tabs, fewer interruptions, and better starting points.</p><p>So I stopped using both of them raw.</p><h2>A layer in between</h2><p>For me, most of it runs through email.</p><p>If I read something interesting, I forward it in. If I have a thought, I email it. If I want something held for later, pressure-tested, or connected to other ideas, same thing.</p><p>That sounds almost stupidly boring, which is part of why it works. Email still feels almost analogue to me. Nobody expects an instant reply. A thought can sit there for a bit. There's room to think before reacting.</p><p>The specific interface matters less than what it enables. Email works for me because it handles links, questions, notes, threads, files, and random half-formed thoughts. It doesn't ask me to adopt a new habit. It gives me a simple way to route information into something that knows my context, without dragging me into somebody else's app or cadence. It's one of my two primary mediums right now because it fits how I already work. The point isn't email. The point is having a way to get things into a system that knows who you are.</p><p>That context is the important bit. The same underlying models are available to a lot of people now. The difference is what gets wrapped around them. Two people can start with the same intelligence and end up with completely different systems because the scarce thing is no longer access to the model. It's the context surrounding it.</p><p>The system knows what I'm working on, what I've written before, what themes keep recurring, which ideas are live, which opinions I'm still forming, what my blind spots are, what "good" usually looks like in my world, and increasingly what can wait until later. 
It also knows the business side: what deals are in play, what's been committed to whom, what the priorities are this week, what's behind schedule. That extension feels natural to me. Building and running a software company is still mostly information, decisions, and coordination flowing through the internet. Once I stopped consuming the internet raw, it was obvious the same layer should sit across the rest of the work too. At that point it stops being just a reading filter and starts looking a lot like a company operating system. The model is just the engine. The value comes from everything wrapped around it.</p><p>Without that, AI is mostly a parlour trick. Impressive, fast, and often useless.</p><p>And yes, there are obvious objections to all this.</p><p>It can sound like workflow overengineering. A browser, a bookmarks app, and a chatbot can already get you some of the way there. The trust boundaries are real. And no, this thing is not perfect.</p><p>I'm not claiming I found the final interface to the internet. I built a starting point that already works better for me than the defaults. That's enough to keep iterating. The bigger question is why the defaults got bad enough that this became worth building at all.</p><h2>Why the standard channels broke</h2><p>The old internet rewarded searching. You could go looking for something and have a decent chance of finding signal. That world is disappearing.</p><p>Now the internet is full of summaries of summaries, clipped opinions, SEO sludge, and increasingly machine-written content. Even when the information is technically correct, it's often packaged in a way that destroys judgment. You consume the shape of an idea without doing the work of forming a view.</p><p>That degradation is becoming visible in odd ways. In New Zealand, we are now seriously discussing whether under-16s should be kept off social media altogether. 
Whatever you think of that idea, it points at the same underlying question: when did the internet get bad enough that banning access for kids started sounding like a reasonable policy response?</p><p>That's the part that worries me most. Not the noise itself, but how invisible the damage is. A chatbot gives you a fluent answer and you move on. The model doesn't know which source matters more. It doesn't know the hidden context behind your question. It doesn't know that one line in the answer is the line that quietly breaks the whole thing. And because it sounds confident, you often don't notice the weakness until it's too late.</p><p>Once the default interfaces get this noisy, you don't need a better feed. You need a filter with memory.</p><h2>What my system actually does</h2><p>The closest analogy is a chief of staff.</p><p>Not a chatbot. Not an assistant that waits for instructions. A chief of staff who knows what I'm working on, what matters this week, what can wait, and what I'd want to see immediately. Someone who triages, routes, briefs, and shields me from noise so I can focus on the decisions that actually need me.</p><p>That's what the system does. It sits between me and the firehose.</p><p>A forwarded article doesn't sit in a read-later pile until I forget it. It gets turned into a usable note. A half-baked thought from a walk doesn't disappear into some notes app graveyard. It joins other ideas that might eventually become a post. A question doesn't get answered from scratch every time. It gets answered against the accumulated context of what I'm already doing and how I tend to think. Something non-urgent doesn't interrupt me just because it arrived. It waits, gets filtered, and comes back later as part of a summary.</p><p>And like any good chief of staff, it delegates. When something needs deeper work, it gets routed to a specialist. Writing has its own context, its own memory of my voice, its own rules. Research has different tools and standards. 
Inbox triage follows different logic than idea development. Business operations, client work, finance, and planning each have their own brief. When I find a gap, I spin up a new one.</p><p>That's how the system grows. Not by getting bigger and doing everything at once, but by adding focused capability where it's actually needed. The chief of staff decides what goes where. The departments do the work. I make the calls.</p><p>It keeps working in the background too. While I'm asleep, out walking the dog, or in the middle of something else, it's still sorting, refining, and connecting things. Good systems know when to interrupt and when to leave you alone. They fit around your schedule. They don't demand you live inside theirs.</p><p>There is an obvious trust boundary here. You can't just pour your whole life into a black box and hope for the best. What goes in, what stays out, what gets to act, what only gets to brief you. If you get that wrong, the whole thing becomes creepy fast.</p><p>And it does get things wrong. I've had it quietly filter out something I needed because it didn't match the pattern of what I usually care about. I've had it surface connections between ideas that felt insightful but were actually just reinforcing what I already believed. The more context it accumulates, the better it gets at telling me what I want to hear.</p><p>That's not a temporary flaw. It's a permanent feature of any system like this. A filter trained on your own patterns will always tend toward confirmation. The question isn't whether that happens. It's whether you notice when it does and override it. That's the job that never gets automated. Not because the technology isn't good enough, but because the whole point of the system is that a human stays in the loop making the calls.</p><p>Once an agent has memory, rules, and initiative, the trust question stops being abstract. 
You have to decide what it is allowed to know, what it is allowed to do, and where the human line stays.</p><p>That's why this has to be iterative. You don't get the whole system right on day one. You tighten the boundary, see what was useful, see what was noisy, and keep refining.</p><p>It also gets calibrated through use. Every correction, override, rewrite, and ignored suggestion teaches it a bit more about what I actually care about, what I don't, and where its instincts are wrong. That doesn't remove the need for judgment. It just means the filter compounds over time.</p><p>The output is still not final. That's the point. It gets me to a better starting point.</p><p>Once you think about it as routing instead of browsing, the interesting question becomes which interfaces actually survive.</p><h2>The interfaces that actually survive</h2><p>The interesting part is not that I built some software around AI. It's which interfaces ended up mattering, and the principle behind why.</p><p>Everyone wants the future to arrive as a shiny new app. In practice, the winning interfaces are the ones that disappear into behaviour you already have. Not the ones that ask you to change your habits. The ones that meet you where you are.</p><p>For me, right now, that is email and the terminal.</p><p>Email handles the routing. It's asynchronous. It works from anywhere. It doesn't demand I sit there and respond in real time. The terminal handles the focused work. When I need to build something, fix something, or think through a problem with real depth, I open a terminal with the right specialist connected to the same context. Between them, I rarely need a browser at all.</p><p>If you'd told me a year ago that my entire working setup would collapse to two interfaces, I wouldn't have believed it. I run a company through this system. Writing, research, operations, client work, planning. The stack didn't get more complex. 
It got radically simpler.</p><p>That may not be the final shape. It's just the one that currently fits how I already work.</p><p>The pattern matters more than the tools. The useful AI layer is the one that slots into how you already work, carrying enough context about you and your work to actually filter rather than flood. The moment it asks you to change how you operate to match the tool, it's already lost.</p><p>The interface is just the entry point. The real value sits underneath, in how well the system is shaped around the person using it. That's the real principle. Not "use email." Not "learn the terminal." Find the interface you'd use without thinking and make it the entry point to a system that knows your context.</p><h2>This is not just about me</h2><p>What I've described is a personal system, but it points at something bigger. I wrote recently about <a href="https://thinkdorepeat.ai/p/new-zealands-three-year-window">New Zealand's three-year window</a>, the argument that AI-native companies built clean from day one will beat legacy companies trying to retrofit AI into old structures. The same logic is showing up here at the interface layer.</p><p>If one person can collapse their working setup to two interfaces they already use, the question becomes: what happens to all the software that used to sit in between? The market is already starting to ask. Software stock valuations have been hammered. Companies are cutting SaaS licences and replacing them with AI tools. The direction is early and uneven, but it's real.</p><p>Underneath that is a simpler shift. Once the base intelligence is good enough, software stops competing mainly on access to capability and starts competing on how well it fits a specific user. Their context. Their workflow. Their standards. Their timing.</p><p>Knowledge used to compound mainly through acquisition.
The person who knew more than everyone else had a durable edge because information itself was scarcer and the asymmetry mattered more. That still matters, but less than it used to. The person who can tell what matters, what doesn't, and what should be ignored entirely is often in the stronger position.</p><blockquote><p>When knowledge is abundant and retrieval is cheap, the advantage shifts toward filtration.</p></blockquote><p>I don't think the interesting part is which apps survive and which don't. The interesting part is what becomes scarce when the interface layer gets thinner. And I think the answer is judgment. More specifically, trusted judgment.</p><h2>The judgment still has to stay with me</h2><p>None of this means I outsourced thinking.</p><p>If anything, it made the boundary clearer.</p><p>The system is very good at helping me organise, retrieve, connect, summarise, and draft. It is very good at turning messy inputs into usable material and giving me something to react to across work, reading, and day-to-day decisions.</p><p>It is not good at being me.</p><p>I see this most clearly in writing because the failure mode is obvious. I can feed a set of notes into a model and get back a structurally sound draft. The sentences work. The flow is reasonable. The argument seems balanced. And sometimes that is exactly how I know it's wrong.</p><p>The model has a bias toward sounding plausible. It wants to smooth the edge off a real opinion, flatten lived experience into general advice, and produce something that nobody could strongly object to. That's great if your goal is harmless sludge. It's terrible if your goal is saying something true.</p><p>The useful work is noticing where the output became generic, where it lost the tension, where it stopped sounding like an actual person and started sounding like an educated average. That's the part I still have to do. It will always be the part I have to do.</p><p>Some people hear that and think it's a limitation. 
I think it's the design. A system that needs your judgment at the centre isn't half-finished. It's correctly built. The moment you take the human out of this loop, you're back to consuming averaged information with extra steps.</p><h2>What this seems to imply</h2><p>I think the conclusion is pretty simple.</p><p>The browser is too raw. ChatGPT and Claude are too generic. A lot of the software sitting between them is about to look like unnecessary scaffolding.</p><p>The next layer is not one better universal app. It's context wrapped around the individual user.</p><p>You don't need to be a founder or an engineer to feel this. Anyone with too much inbound information and not enough time eventually wants the same thing: less noise, more continuity, and fewer low-value decisions.</p><p>But I should be honest about who can act on it today. What I've described requires technical skill to build. I wrote the code, configured the agents, designed the routing. Most people can't do that yet. The desire for better filtration is universal. The ability to build your own is not.</p><p>That gap matters more than the technology itself. If good filtering stays something only technical people can build for themselves, it becomes another way the information-rich get richer. The people who most need help cutting through noise are the least equipped to build the tools to do it. That's not a footnote. It's probably the most important question in this whole space: who gets access to trusted filtration, and who stays in the firehose?</p><p>That's the problem I keep coming back to. But I think the shape of the solution is right even if my specific implementation isn't for everyone: raw information on one side, human judgment on the other, and a context-rich system in the middle doing the filtering. The interface into that system should be whatever you already use without thinking. For me it's email. 
For you it might be something completely different.</p><p>I don't think the future is everyone using the same internet through the same apps and the same chatbots. I think it's each person interacting through a layer shaped around their own work, standards, schedule, and judgment.</p><p>And I don't think the human in that loop is a temporary flaw before the system becomes fully autonomous. For anything important, it's the point.</p><p>That's already how I work now.</p><p>I don't want more information. I want better filtration.</p><p>I don't want AI to tell me what to think. I want it to help me hold more context without drowning in it.</p><p>And I don't think I'm going back to consuming the internet raw.</p><div><hr></div><p>I'm Ben Lynch. I think about founders, AI, and what happens next from New Zealand. Say hello at ben@thinkdorepeat.ai.</p><p>New here? Start with <a href="https://thinkdorepeat.ai/p/start-here">Start Here</a>. It's the quickest way to understand what I'm building and why I write.</p><p>If this made you think of someone who's still drowning in the firehose, send it to them.</p>]]></content:encoded></item><item><title><![CDATA[The Best Time to Build on Bank Data]]></title><description><![CDATA[Open banking is live. AI makes building cheap. The window is now.]]></description><link>https://thinkdorepeat.ai/p/the-best-time-to-build-on-bank-data</link><guid isPermaLink="false">https://thinkdorepeat.ai/p/the-best-time-to-build-on-bank-data</guid><dc:creator><![CDATA[Ben Lynch]]></dc:creator><pubDate>Tue, 17 Mar 2026 18:58:19 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bUmv!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75e4af6d-10f9-4957-84d5-d7b18c5dcf8a_652x652.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>Something just changed</h2><p>On 1 December 2025, New Zealand's Consumer Data Right went live. 
For the first time, bank data and bank-initiated payments started becoming available to third-party developers through authenticated APIs, with customer consent.</p><p>That might sound like a regulatory footnote. It's not. The industry has been talking about this for years. What's missing isn't awareness. It's builders.</p><p>This matters because it changes what a small team can build, and how fast they can build it. First, some context on why I care, and the obvious conflict of interest.</p><h2>Ten years for the "easy" part</h2><p>I'm the founder and CEO of <a href="https://www.akahu.nz">Akahu</a>, an open banking platform in New Zealand. We provide the APIs that let developers access bank account data and initiate payments across the major NZ banks we support.</p><p>It took ten years to get here.</p><p>When I started, there was no Consumer Data Right. No authenticated bank APIs. No open banking framework. We had to reverse-engineer access to bank data, building and maintaining connections without any support from the banks themselves. It was painstaking work. The whole time, proper bank-authenticated access was "coming soon." It was always coming soon.</p><p>Had I known it would take a decade, I never would have started. That's the honest truth.</p><p>But the hard part is now largely done. The infrastructure exists. The regulatory framework is live. The bank connections exist. What took us ten years and a team of engineers to build, a developer can now start experimenting with in a weekend.</p><p>That's not a small change. That's a different universe.</p><p>Also, yes, this is self-serving. I built part of the infrastructure and now I'm telling people to build on top of it. But that doesn't make the underlying point any less true. 
If you care about building products around money, the cost and complexity of getting started just collapsed.</p><h2>What you can actually access</h2><p>Through Akahu, developers can now access account information, balances and transactions across the major NZ banks we support. They can also initiate payments to NZ bank accounts with authenticated, consented access.</p><p>That's also where the distinction between open banking in theory and building a product in practice starts to matter. Direct open banking access gives you the raw bank data. That's useful, but raw data still leaves a lot of work to do. Different banks structure things differently. Transaction descriptions are messy. And the bank data layer doesn't do the work of categorising transactions for the product you're trying to build.</p><p>Part of Akahu's value is cleaning that up across banks, normalising it, and enriching and categorising the transaction data so developers can spend more of their time building the product on top. The raw access matters. Making it usable matters just as much.</p><p>Historically, this data sat inside the customer's own bank. Think about what that means. Every financial product, every budgeting app, every lending decision, every business tool that touches money, all of it has been built on incomplete information. Your bank sees your accounts with them. Very few products see the full picture. Now they can.</p><p>I saw a version of this up close during my time in Xero's bank feeds team, where I got a first-hand view of how hard bank data access used to be before CDR. One of the reasons Xero became so valuable to small businesses was that it got bank transaction data flowing into the product. Without that overnight bank data, it's a very different proposition. Reconciliation is harder. The product is less useful. The habit is weaker.</p><p>That's the part worth paying attention to. For a long time, access to that data was a moat in itself. 
You had to negotiate for it, stitch it together, and maintain it. Now that access is becoming more available. That doesn't mean the moat disappears. It means the moat moves up the stack, into the product you build on top of the data.</p><h2>Bank without being a bank</h2><p>Here's how I think about the opportunity. You can now build products that sit on top of bank data and become the primary relationship a customer has with their finances, without being a bank. You're not taking deposits. You're not trying to become a regulated bank from day one. That is a radically easier place to start. It does not mean trust, consent, privacy, and compliance suddenly stop mattering. It means the rails are more open than they were before.</p><p>You're not replacing banks. Banks are great at holding money, settling payments, and managing risk. They also carry constraints most startups don't, which makes it harder to build focused software for narrow use cases quickly. They've had the customer data for decades and, in many cases, delivered broad generic experiences rather than products people love.</p><p>The application layer, the products that actually help people and businesses make better decisions with their money, is wide open. And for the first time, anyone can build it.</p><p>I don't want to limit what "bank without being a bank" means. It could be a personal finance tool that actually sees your full picture across every account. A lending product that uses transaction data as one input instead of relying only on blunt credit signals. A business cashflow tool that forecasts from actual bank feeds, not spreadsheets. 
A Pay by Bank checkout that bypasses card schemes entirely.</p><p>If I were looking for whitespace, I'd start with problems where money movement is central but the software is still awful:</p><ul><li>Cashflow forecasting for small businesses that live close to the edge</li><li>Vertical software for accountants, mortgage advisers, property managers, and payroll providers</li><li>Consumer products that help people avoid fees, smooth income, or automate savings using real transaction patterns</li></ul><p>The point is that the data is there and the permission model exists. What gets built on top of it is up to builders.</p><h2>The convergence</h2><p>This would be interesting on its own. But it's landing at the same time as something else.</p><p>I wrote a few weeks ago about <a href="https://thinkdorepeat.ai/p/new-zealands-three-year-window">New Zealand's three-year window</a>, the argument that AI-native companies built from scratch will outcompete legacy businesses that try to retrofit AI into existing structures. The advantage goes to whoever builds clean, not whoever adopts fastest.</p><p>Open banking is a specific, tangible version of that argument.</p><p>A traditional fintech trying to build on open banking data has to navigate existing codebases, existing product assumptions, existing teams, existing business models. They'll spend months in planning meetings and "discovery". By then, the window has moved.</p><p>A solo founder or a small team building AI-native from day one doesn't carry any of that weight. They can prototype in a weekend using their own bank data. They can have something working before a large company has finished writing the brief. AI agents are fundamentally changing how much a small team can do. Fewer people with the right tools can drive enormous economic impact. Open banking is where that thesis meets a specific, underserved market.</p><p>The data is available. The tools to build are better than they've ever been. The incumbents are structurally slow.
This is a window, and it's open right now.</p><h2>What I'd tell the version of me from ten years ago</h2><p>If I were starting today instead of 2015, here's what I'd do differently. These are lessons that took me years to learn.</p><p><strong>Understand your revenue model on day one.</strong> Not "we'll figure out monetisation later." Who is paying you, for solving what problem? I spent too long building something useful without being clear on the business model underneath it. A side project doesn't need revenue. A business does. Know which one you're building, and when you're ready to cross over.</p><p><strong>Don't just solve your own problems.</strong> My starting point was scratching my own itch. That's a great way to find a real problem. It's a terrible place to stop. The thing you build for yourself is a prototype. The business comes from understanding who else has that problem and whether they'll pay to make it go away.</p><p><strong>Distribution is the hard part, not the product.</strong> This is the one that gets most developers. AI has made building absurdly easy. You can ship a working product in a weekend. Finding customers is still hard. The best products I've seen in this space don't try to acquire customers directly. They partner with businesses that already own the customer relationship. Accounting firms, financial advisers, property managers, payroll providers. Build for their customers, distribute through them. That lesson hasn't changed in ten years.</p><p><strong>Regulation is a moat, not just a hurdle.</strong> Most developers see financial regulation and run the other way. That's exactly why it's valuable. If you lean into the complexity of consent, data privacy, and financial compliance, you build something that's genuinely hard to copy. The easy stuff gets competed away. The hard stuff compounds.</p><p><strong>Start with existing infrastructure.</strong> You don't need to build the rails. They exist. 
Every hour you spend rebuilding something that's already available is an hour you're not spending on the thing that makes your product different. Build on top. The value is in the application layer now.</p><p><strong>Talk to customers before you write code.</strong> This one is harder than ever, because AI makes it so tempting to just build. You can have a working prototype before you've spoken to a single potential customer. Resist that urge. The fastest way to waste a month is to build something nobody wants. The fastest way to build something people want is to ask them first.</p><p><strong>The NZ market is small, and that's a feature.</strong> You can get in front of a meaningful share of your market quickly. Try doing that in the US. Small market means fast feedback loops. You can test an idea, get real responses, and iterate in weeks. Use that. It's an advantage founders in larger markets would kill for.</p><p><strong>Be honest about what's still hard.</strong> The rails existing doesn't mean the problem is solved. Consent flows still matter. Trust still matters. Distribution still matters. If you're expecting open banking to remove all the friction, you're going to be disappointed. It removes one category of friction. That's enough.</p><h2>The real moat</h2><p>Here's the thing most people miss about this opportunity.</p><p>Access to bank data is more available than it's ever been. Building products is, thanks to AI, getting steadily cheaper. So where's the defensibility?</p><p>The products that win won't just have a pretty UI or a clever demo. They'll solve a recurring problem well enough that users keep coming back. In doing that, they'll accumulate context. Every transaction categorised, every payment pattern recognised, every workflow completed, every business insight surfaced, all of it feeds back into a product that gets more useful over time. That's a moat that doesn't exist on day one. 
You earn it through retention.</p><p>Most banks probably won't build this. They have the data, but usually not the speed, focus, or incentive to turn that data into narrow, high-utility software. A competitor can copy your features more easily than they can copy your accumulated understanding of a customer's financial life.</p><p>Build the product that gets better with use. That's the thing that compounds.</p><h2>Start this weekend</h2><p>Here's the part where I'm supposed to wrap up with something grand about the future of financial services. Instead, I'll make it practical.</p><p>If you want to experiment, Akahu offers a personal app tier. You sign up, connect your own bank accounts, and start building with real data. Your data. No commercial agreement needed.</p><p><a href="https://developers.akahu.nz/docs/getting-started">Get started here.</a></p><p>When you get stuck, the <a href="https://akahu--help.slack.com/join/shared_invite/zt-opjjvo7q-QjHKN43DAV4Nv6og1dF15g#/shared-invite/email">Akahu developer Slack</a> has our team (Josh, David, Olly, and Will) alongside a growing community of people building on open banking in New Zealand.</p><p>Build the thing you wish existed for your own finances. Show it to a few friends. If they want it too, you might have something. If they'd pay for it, you definitely do.</p><p>You don't need to quit your job. You don't need to raise money. Build it on nights and weekends until the idea proves itself.</p><p>Ten years ago, I had to reverse-engineer bank data, raise capital, and hire a team just to get started. Today, you need an API key and a Slack invite. Don't take ten years.</p><div><hr></div><p>I'm Ben Lynch, founder and CEO of Akahu and Dolla. I think about founders, AI, and what happens next from New Zealand. Say hello at ben@thinkdorepeat.ai.</p><p>New here? Start with <a href="https://thinkdorepeat.ai/p/start-here">Start Here</a>.
It's the quickest way to understand what I'm building and why I write.</p><p>If this made you think, forward it to someone who'd enjoy it. Especially if they've got a side project idea they keep talking about but haven't started.</p>]]></content:encoded></item><item><title><![CDATA[Start Here]]></title><description><![CDATA[If you're new, this is the quickest way to understand what I'm building and why I write.]]></description><link>https://thinkdorepeat.ai/p/start-here</link><guid isPermaLink="false">https://thinkdorepeat.ai/p/start-here</guid><dc:creator><![CDATA[Ben Lynch]]></dc:creator><pubDate>Mon, 09 Mar 2026 23:08:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bUmv!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75e4af6d-10f9-4957-84d5-d7b18c5dcf8a_652x652.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>If you've landed here from one of my essays, this is the frame.</p><p>I'm the founder of two fintech companies from a small town north of Auckland. One, <a href="https://www.akahu.nz">Akahu</a>, is an open banking infrastructure platform with a small team. The other is an AI-native accounts payable company that's just me and the system.</p><p>That's not a brag. It's the thesis.</p><p>The economics of building software companies have changed so much in the last few years that a lot of the old assumptions already look shaky. How many people you need. How much capital you raise. What work actually needs a human. I'm testing that in the real world, with real customers and real stakes.</p><p>Think Do Repeat is where I write about what I'm learning.</p><h2>The name is the method</h2><p>Think. Read widely. Build mental models. Question the default assumption. Most people skip this step because doing feels more productive. It isn't. Bad thinking leads to busy work. Good thinking leads to leverage.</p><p>Do. Build something. Ship it. 
Put it in front of customers. Test the thinking against reality. I've been building software companies for over a decade. The doing is where the learning actually happens.</p><p>Repeat. Write it down. Not to teach anyone, but to force clarity. If I can't explain what I learned, I probably didn't learn it. Writing is thinking made visible.</p><p>That's the loop. Think, do, repeat. Everything I write here comes from somewhere in that cycle.</p><h2>Who I am</h2><p>I'm Ben Lynch, based in Matakana, New Zealand. Former Xero engineer, now a solo founder.</p><p><a href="https://www.akahu.nz">Akahu</a> is my first company. It's open banking infrastructure, the rails that let apps connect to bank accounts. A small team runs it day to day.</p><p>Most of my current product development work is in an AI-native accounts payable platform. Invoices come in, the system extracts the data, codes them, gathers the organisational context, and posts them into Xero. It's like having your own internal AP team, without the headcount. It's just me and the system.</p><p>I didn't set out to build companies this way. I've always had more ideas than execution capacity. AI changed that equation. I kept not hiring, kept finding that AI could handle the next thing I thought I'd need someone for, and eventually realised this wasn't just my situation. It was the pattern. For people with more ideas than hands, AI is the leverage that changes the game.</p><p>I have no social media presence. I don't go to conferences or networking events. I'd rather put the thinking into something worth reading than scatter it across platforms optimised for reaction. This publication is my only public outlet, and that's deliberate.</p><h2>What to expect</h2><p>I write about what I'm actually building and what it seems to imply.</p><p>That usually means AI as leverage, solo founding, building from New Zealand, and the gap between how companies say they work and how they actually work. 
Sometimes that becomes a post about economics or hiring. Sometimes it's an argument with a whole category of startup advice. Sometimes it's just a note from the middle of using these tools every day and seeing what holds up.</p><p>This isn't a fintech newsletter, and it's not a startup advice column. I'm not trying to build a personal brand or sell a course. It's one person thinking out loud, grounded in work with real consequences. Some of it will be wrong. I'll say so when I figure that out.</p><p>Think of the publication as one connected argument, not a collection of posts. The essays go deeper on individual parts of the same thesis: AI changes the economics of building, small teams can do far more than they used to, and a lot of accepted wisdom about software companies is about to age badly.</p><h2>How this publication works</h2><p>This publication is itself part of the experiment. The pipeline behind it runs on AI infrastructure I built. My job is to think, judge, and review. The rest is execution.</p><p>Ideas and reading get routed into that system. It organises them, finds patterns, drafts when there's enough substance, and feeds audience signals back into the next round. I still do the thinking. I still decide what's true, what matters, and what's worth publishing.</p><p>That's the broader point. If judgment stays human, one person should be able to operate with something closer to a chief of staff and a full team at their disposal, without the actual headcount. This publication is one visible proof point.</p><h2>Why write at all</h2><p>Writing forces me to think more clearly than I otherwise would. An idea that seems solid in my head often falls apart when I try to put it into sentences. That alone makes it worth doing.</p><p>But writing is also distribution. A good post gets forwarded to someone who needs to hear it. That opens doors no cold email ever could.</p><p>If something here makes you think, forward it to someone who'd find it useful. 
If you disagree with something, reply to any post and tell me why. I'm thinking out loud and seeing who turns up.</p><div><hr></div><p>I'm Ben Lynch. I write about founders, AI, and what happens next from New Zealand. Say hello at ben@thinkdorepeat.ai.</p><p>If this gives you the frame, the rest of the publication fills in the argument.</p><p>If you know someone who'd care about where software and company-building are heading, send them here.</p>]]></content:encoded></item><item><title><![CDATA[New Zealand's Three-Year Window]]></title><description><![CDATA[AI isn't taking jobs. It's making companies that never need them.]]></description><link>https://thinkdorepeat.ai/p/new-zealands-three-year-window</link><guid isPermaLink="false">https://thinkdorepeat.ai/p/new-zealands-three-year-window</guid><dc:creator><![CDATA[Ben Lynch]]></dc:creator><pubDate>Sun, 08 Mar 2026 00:34:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bUmv!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F75e4af6d-10f9-4957-84d5-d7b18c5dcf8a_652x652.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2>The shift nobody's measuring</h2><p>Block laid off 4,000 people last week. Nearly half their workforce, gone. Stock soared.</p><p>The headlines wrote themselves. "AI takes jobs." "Mass layoffs hit tech." You've seen the cycle. Block, Amazon, Fiverr, Klarna. Every few weeks, another round of cuts, another wave of panic.</p><p>But layoffs aren't the story. They're the visible, countable, headline-friendly version of a much bigger shift. The real story is quieter. It doesn't make the news, because there's nothing to report.</p><p>The real story is the jobs that were never created.</p><p>The startup that launched last month with 5 people instead of 50. The team that was supposed to grow from 20 to 40 but just... didn't. 
The job posting that was drafted, reviewed, and quietly deleted because someone realised AI could handle it. Nobody writes an article about a job that was never posted. There's no redundancy package for a role that never existed. It's invisible by definition.</p><p>But it's already everywhere. Shopify's CEO sent a <a href="https://www.cnbc.com/2025/04/07/shopify-ceo-prove-ai-cant-do-jobs-before-asking-for-more-headcount.html">memo to the entire company</a>: before asking for more headcount, teams must prove AI can't do the work. That's not a layoff. It's a policy of never creating the position in the first place. Harvard Business Review <a href="https://hbr.org/2026/01/companies-are-laying-off-workers-because-of-ais-potential-not-its-performance">surveyed over a thousand executives</a>. Roughly a third said they'd slowed hiring. Not fired anyone. Just stopped creating new roles. The headline layoffs are speculative theatre. The quiet hiring freezes are already real.</p><p>Every country will feel this. Not every country will see the opportunity in it.</p><h2>Why big companies can't just adapt</h2><p>The obvious response is: fine, big companies will use AI, cut headcount, become efficient. Block is trying it. Shopify is trying it. Everyone's trying it.</p><p>But you can't take a company built for 10,000 people and run it with 5,000 just by removing half the humans. The architecture is wrong. The code was written by those people, for those people. The processes, the management layers, the decision-making structures, the institutional assumptions baked into every system. That's all legacy. And legacy doesn't disappear when you fire the people who built it. Large organisations resist change, imitate peers, and expand for the sake of expanding. Every foolish initiative gets backed by detailed studies. That inertia doesn't vanish because you install a chatbot.</p><p>Klarna is the cautionary tale. 
They <a href="https://fortune.com/2025/10/10/klarna-ceo-sebastian-siemiatkowski-halved-workforce-says-tech-ceos-sugarcoating-ai-impact-on-jobs-mass-unemployment-warning/">roughly halved their workforce</a>, from over 5,000 to under 3,000, mostly through hiring freezes and attrition. They launched an AI chatbot they claimed was doing the work of hundreds of customer service agents. Then customer satisfaction tanked. Their CEO had to <a href="https://mlq.ai/news/klarna-ceo-admits-aggressive-ai-job-cuts-went-too-far-starts-hiring-again-after-us-ipo/">publicly admit they'd gone too far</a>. They're now rehiring humans.</p><p>This is creative destruction in real time, except Klarna tried to skip the destruction part. They wanted the new economics without rebuilding the architecture. It doesn't work that way. You can't just pull humans out and slot AI in. The system was designed around them.</p><p>The advantage doesn't go to whoever adopts AI fastest. It goes to whoever builds from scratch.</p><h2>The new architecture</h2><p>A company built today, AI-native from day one, doesn't carry any of that weight. No legacy code. No management layers to flatten. No culture to transform. No "change management" programme. No institutional memory that fights every change. You just build it clean.</p><p>I know this because I'm living it. What it takes to build a software company has changed drastically in just a few years, and most people haven't caught up to what's possible.</p><p>I should be upfront: this argument is self-serving. I'm a founder. A world with more founders and fewer employees is a world that works well for people like me. Not everyone will be a founder, and not everyone should be. But the shift is happening regardless of what anyone thinks about it. The question is whether we position for it or pretend it isn't coming.</p><p>Every month the tools get better, and the gap between "built with AI from day one" and "retrofitting AI into a legacy organisation" gets wider. 
The new entrant doesn't just have lower costs. They have a fundamentally different architecture. They move faster. They iterate faster. They don't have committee meetings about whether to adopt AI. Every large company eventually calcifies: stasis, followed by irrelevance, followed by death. The AI-native company never accumulates the bureaucracy that causes it.</p><p>There's something else these companies have that legacy businesses don't. When a recession hits, the 500-person company has to lay off 200 people. The 3-person AI company cuts its API bill. The shock that kills the legacy business makes the lean one relatively stronger. That's not resilience. It's a structural advantage that grows with every disruption.</p><p>Every technology shift in history has followed this pattern, and every one had a window that opened and then closed.</p><p>In the late nineties, anyone who put a business on the internet had a shot. Amazon, Google, eBay. All founded within a few years of each other. By 2001, the window was shut. Then cloud: take legacy software that shipped on CDs and make it cloud-native. Salesforce beat Siebel. Netflix streaming killed Blockbuster. That window ran roughly 2006 to 2012 before incumbents caught up. Then mobile: take a cloud business and put it on a phone. Uber, Instagram, WhatsApp. All founded between 2009 and 2012. By 2014, the major categories were claimed.</p><p>Each window was about three to five years. And each one was shorter than the last, because incumbents learn faster and capital deploys faster. The AI-native window opened around 2023. If the pattern holds, it closes around 2027 or 2028. We're already halfway through.</p><p>This is already happening. Small teams, built on AI, competing globally from day one. The question isn't whether big companies adapt. 
It's which countries produce the most founders during this window, because once it closes, the categories are locked.</p><h2>The economic ground is shifting</h2><p>The deeper problem is structural, and you have to think past the first-order effects to see it. The first-order effect of AI is obvious: companies become more efficient. But follow the chain. Companies pay wages, people spend those wages, companies earn revenue, companies hire more people. Every government budget, every pension model assumes some version of that loop continuing.</p><p>If companies can grow revenue without growing headcount, the loop breaks. That's the second-order effect nobody's modelling.</p><p>GDP keeps rising. Corporate profits look great. But household income growth flatlines. Governments depend on income tax and consumption tax, both of which require people earning and spending. If the earners don't exist, the tax base erodes while the economy looks fine on paper. Third-order: governments can't fund services, social cohesion frays, and the political consequences arrive a decade after the economic ones.</p><p>But there's another way to look at this. A traditional company doing $10 million in revenue might employ 80 people, pay modest salaries, and generate thin margins. An AI-native company doing $10 million might have 3 founders, pay themselves well, and run at 70-80% margins. The tax mix changes. Governments collect less PAYE, but more corporate tax on higher profits and more income tax from founders earning well into the top bracket. And every dollar of that revenue is coming from global customers and being spent locally.</p><p>That's not a clean swap. Fewer earners means less total income tax, and no government has figured out how to make that maths work yet. But the alternative isn't 80 well-paid employees. It's those 80 roles never existing in the first place. The question isn't whether the old tax model survives. It won't. 
The question is which countries design a new one first.</p><p>There's something else. AI-native software companies don't need venture capital. Not in the traditional sense. When your total cost base is AI subscriptions and cloud infrastructure, the capital required to get to market drops by an order of magnitude. Most can bootstrap entirely or get to revenue before raising anything at all. Those that do raise can do it with a small angel round, and the money stays onshore.</p><p>This matters enormously. The traditional startup model is a money-extraction machine. Kiwi founder raises from Silicon Valley VCs, gives away most of the company across a few funding rounds. Company grows. Company exits. The vast majority of that exit value flows to investors in San Francisco. The founder gets a slice, the lawyers get a slice, and the country that produced the founder gets almost nothing.</p><p>The AI-native model inverts this. Founder bootstraps or takes a small angel cheque from a local investor. Keeps 90-100% of the company. Generates global revenue at high margins. No offshore VC syndicate taking the lion's share. The wealth accumulates in the country instead of being extracted from it. Code and media are the two forms of leverage that don't require anyone's permission. You don't need a VC's blessing to write software or publish content. An AI-native founder has both, and AI amplifies both. That's the specific mechanism behind "5 people instead of 50." It's not just efficiency. It's a fundamentally different kind of leverage.</p><p>And here's something the traditional model never allowed: these founders might not sell at all. In the VC model, exit pressure is baked in. Investors need liquidity. The fund has a timeline. The founder is on a clock whether they want to be or not. Remove the VC and that pressure disappears entirely. Every decision becomes reversible. That's not a minor structural difference. 
It's optionality at the company level.</p><p>The usual reason founders sell anyway is that the company outgrows them. It stops being fun. You didn't start a company because you love managing 200 people and sitting in performance reviews. But a company that stays at three people and AI never becomes a management problem. It just keeps throwing off cash. Pay yourself well, pay dividends to your angel investors, keep building. A founder paying themselves well from a company they enjoy running has no reason to sell. And for the economy, that's actually better than an exit. It's steady, ongoing income tax and local spending, year after year, instead of a one-off event. Dividends, not exits.</p><p>New Zealand's investment landscape is actually built for this. We've always had plenty of angels but very few VCs. That used to be a weakness. It meant ambitious founders had to go offshore to raise serious capital. In this model, it's an advantage. The capital requirements match what's already here. Local angels writing small cheques into capital-light, high-margin companies. No need to go to Sand Hill Road. No need to give away the company to get it off the ground.</p><p>Scale that up and the maths gets interesting. A few thousand of these companies across the economy, with even five hundred of them doing $3 million in global revenue at 75% margins. That alone is $1.5 billion in export earnings, with over a billion flowing directly to founders and the local economy. More tax revenue than a handful of large employers. More resilience. No single company failing takes out a town.</p><p>Not all of them will stay small. Some will find a category worth owning and need to grow. That's fine. Capture the first wave and some of it will always stay.</p><p>Then there's education. We spend billions training people for roles that may not exist at the same scale. 
Entry-level positions have <a href="https://www.cnbc.com/2025/09/07/ai-entry-level-jobs-hiring-careers.html">fallen 35% since 2023</a>. If the bottom rung of the career ladder disappears, how does the next generation learn? The answer might be: they learn to build. Not to fill a seat in someone else's company, but to create their own.</p><p>For most governments, this is a slow-motion crisis they haven't noticed yet. For a few, it's a chance to rebuild the entire economic model.</p><h2>Why this is New Zealand's moment</h2><p>Every few decades, a technology shift reshuffles which countries matter. Estonia built a digital-first economy and now produces more tech companies per capita than almost anywhere. Ireland turned itself into Europe's corporate headquarters. Singapore became the world's most efficient port-state. Different strategies, different advantages, but the same playbook: small countries that saw a shift coming and moved before everyone else.</p><p>New Zealand has the same window right now. And it's closing fast.</p><p>In that world, most of NZ's traditional disadvantages stop mattering. Small domestic market? Irrelevant if your product is global from day one. Far from everywhere? Irrelevant when your team is AI and your customers are online. Limited venture capital? Irrelevant when you don't need to hire 50 people to get started. Timezone? Irrelevant when your employees (AI) work 24/7.</p><p>And some of NZ's advantages suddenly matter a lot more. Quality of life that attracts the kind of people who build these companies. A culture of number-eight-wire problem solving, because there was never anyone else to do it for you. A population small enough that a few thousand of these companies would meaningfully shift the national economy.</p><p>The goal isn't to keep every company here forever. Some will outgrow NZ. Some will need to be closer to their markets or raise capital that doesn't exist here. That's fine. The goal is to be the place where they start. 
Where the founders live, where the first revenue is earned, where the early wealth is created. Be the launchpad, not the container. If even half of them stay, the economic impact is enormous. And the ones that leave still started here, hired their first people here, and built their networks here.</p><p>The countries that produce the most AI-native founders per capita will punch well above their weight in the next decade. This should be a national priority.</p><h2>What playing offence looks like</h2><p>Credit where it's due: New Zealand's AI strategy is <a href="https://www.bellgully.com/insights/nz-s-ai-strategy-light-touch-regulation-and-opportunities-for-businesses/">explicitly pro-adoption and light-touch on regulation</a>. That's better than most countries. But it's not enough. We were the last OECD country to publish an AI strategy. Neither major party has AI as a serious policy priority heading into an election year. And the strategy itself positions NZ as a "sophisticated adopter" of foreign AI: a consumer, not a creator.</p><p>That framing is half right. We're not going to build foundational AI models. We don't have the compute, the research base, or the billions required to compete with OpenAI, Anthropic, Google, and Microsoft. Being naive about that would waste the window. But adopting AI isn't the end goal. The goal is to build thousands of companies on top of it: founders using AI to create products and services that compete globally. That's where the value is for a country like ours, and that's where the policy focus should be.</p><p>None of this means existing NZ businesses are irrelevant. Of course government should help them adopt AI and reorganise around it. Some will do it well. But that's maintenance. It keeps what we have running. It doesn't create a new export sector or shift the economic model. Helping a construction firm use AI for project management is worth doing. 
But it's not the same as producing thousands of globally competitive companies that didn't exist before. The growth comes from the new, not the retrofitted.</p><p>Here's what that looks like in practice. Not grants. Not innovation hubs. Not committees that spend two years producing a report. People avoid hard, tedious problems because they're unpleasant, even when that's where the opportunity is. Governments are especially prone to it. The real work is tax reform, visa fast-tracks, education overhaul. Unsexy, politically difficult, and exactly what needs to happen. Real structural advantages that make the best founders in the world want to build here.</p><p>Start with the money. Right now, NZ has the most important piece: no capital gains tax. A founder who bootstraps an AI-native company, grows it to a meaningful exit, and keeps 100% of the proceeds. In the US, that same founder loses up to a quarter federally, and significantly more in high-tax states. In Australia, even with the 50% CGT discount, a founder at the top bracket pays around 23% on the gain. Here, nothing. That's not a minor perk. It's a fundamental reason to build your company in New Zealand rather than anywhere else.</p><p>But here's the uncomfortable question: that same absence of CGT currently applies equally to someone who builds a globally competitive company and someone who sits on a house for ten years. We have a country where the most tax-advantaged thing you can do is own property that produces nothing. If we're serious about directing capital toward people and businesses that actually create something, we might need to rethink what we incentivise and what we don't.</p><p>The harder question is how to keep that wealth circulating domestically. When a foreign acquirer buys an NZ company and moves the IP offshore, or when profits flow to overseas shareholders, the country loses the value its founders created. 
There's no easy answer here, but the principle should be clear: make it overwhelmingly attractive to build, stay, and reinvest in New Zealand. The carrot matters more than the stick.</p><p>Do this right and you get a flywheel. Founders build globally competitive companies. They stay in NZ because the quality of life is better and the tax structure rewards them for being here. They spend locally. They buy houses, hire tradespeople, eat at restaurants, send their kids to local schools. Some angel-invest into the next wave of local founders, and because the capital requirements are so small, a few angel cheques go a long way. More founders means more local spending, more angel capital, more visibility, which attracts more founders. Each turn of the flywheel accelerates the next. But even the founders who don't invest are valuable. A visible cohort of wealthy, successful AI-native founders living in New Zealand is the most powerful recruitment tool the country could ask for.</p><p>Then there's lifestyle, and stability. The people who build AI-native companies aren't optimising for nightlife and networking events. They're optimising for focus, quality of life, and enough certainty to think long-term. In a world where the US is politically fractured, geopolitical alliances are shifting by the month, and founders in many countries can't trust that the rules won't change underneath them, New Zealand offers something increasingly rare: a stable, low-corruption democracy with rule of law, clean air, good schools, and enough space to think clearly. A place where you can build a globally competitive company and still pick your kids up from school. That's not a tourism pitch. It's an economic development strategy.</p><p>Add a fast-track visa for AI-native founders who want to build here. Not a generic entrepreneur visa with a panel of bureaucrats deciding if your business plan is good enough. A simple path: if you're generating global revenue with a small team, you're in. 
Estonia's e-Residency programme attracted thousands of digital entrepreneurs. New Zealand could go further and offer actual residency to the people building these companies.</p><p>Then fix education. We're working our way towards banning social media for under-16s, following Australia's lead. Maybe that's the right call as a temporary stopgap, but it hides the bigger question we're not asking: how do we teach kids, and adults, to think in a world where AI can do the knowing for you? The old model trained people to retain information and follow processes. That's exactly what AI replaces. The new model needs to teach judgement, problem-framing, and how to build things with AI as a collaborator. Not "learn to code" in the old sense. Give a sixteen-year-old an AI coding tool, a real problem, and a month. Some of them will build something better than what a team of graduates would have produced five years ago. We're not set up for that yet. The countries that produce the next generation of founders will be the ones where kids grow up treating AI as a tool, not a threat. Right now, we're teaching them to be afraid of screens instead.</p><p>None of this solves the harder question: what happens to the people who aren't founders? If the entry-level jobs disappear and not everyone can build their own company, we have a real problem. This is the genuine weakness of the argument, and I don't have a clean answer. Nobody does yet. But here's what I do know: ignoring the shift doesn't protect those people. It just means we're unprepared when it arrives. The countries that move first on the opportunity side will be the ones with the tax revenue, the economic momentum, and the political urgency to figure out the transition. You can't fund a safety net from an economy that missed the window.</p><p>Estonia, Ireland, Singapore: none of them won by accident. They made deliberate bets, early, and built the structures to support them. 
New Zealand has maybe two or three years before the window narrows and the first-mover advantage goes to someone else. We're small enough to move fast. The question is whether we will.</p><p>The world is debating whether AI will take your job. That's yesterday's question. The question that matters is which countries see what's coming and move first. For a small country at the edge of the world, that's always been the only way to win.</p><div><hr></div><p><em>I&#8217;m Ben Lynch. I think about founders, AI, and what happens next from New Zealand. Say hello at <a href="mailto:ben@thinkdorepeat.ai">ben@thinkdorepeat.ai</a>.</em></p><p><em>New here? Start with <a href="https://thinkdorepeat.ai/p/start-here">Start Here</a>. It&#8217;s the quickest way to understand what I&#8217;m building and why I write.</em></p><p><em>If this made you think, forward it to someone who&#8217;d enjoy it.</em></p>]]></content:encoded></item></channel></rss>