I Don't Consume the Internet Raw Anymore
The browser is too raw. ChatGPT is too generic. I built a layer in between.
I actually enjoy the internet again. I hardly ever use it in the traditional sense anymore, but I'm constantly connected to it. More than I have been in years.
For a long time that wasn't true. Finding things online used to be fun. Somewhere along the way that disappeared, and the internet became something to manage rather than explore. I don't think I noticed exactly when it happened. I just stopped looking forward to it.
What changed isn't the internet. It's that I stopped consuming it raw.
I don't go hunting for most things anymore. The system keeps working in the background, 24/7, and sends things back to me in a form I can actually use. It knows roughly how I work, what tends to matter now versus later, and when something should interrupt me immediately versus wait.
Most things I read now get routed through email first. Links, questions, notes, random half-formed thoughts. They go into my own system before they come back to me. In that sense the relationship flipped. The old internet was mostly pull. This is much more push, but on my schedule rather than the internet's.
That sounds like a weird step backwards. It isn't. The browser is too raw now. ChatGPT, Claude, and the like are too generic.
The browser gives you the full firehose. Search results, headlines, clips, newsletters, links, replies, junk.
The internet used to feel like a library. Now it feels like a casino attached to an industrial waste pipe.
Once it starts feeling like that, you stop trying to browse your way out of the problem.
I started cutting years ago. Social media was the first thing to go. I haven't had a presence on any platform in years. Not a detox, not a break. I just stopped and never went back. The signal-to-noise ratio was so bad that filtering wasn't the answer. The answer was to turn it off entirely.
That taught me something. You don't fix a firehose by building a better bucket. You turn it off and only reconnect what you actually need. The interesting question is what comes back.
Social media didn't come back. But I still had the browser, and I still had the raw internet behind it. And then chatbots arrived, promising to fix the problem.
They didn't. They average the firehose back to you.
If you step back, that is a strange thing to build a workflow around. We trained giant models on vast slices of the internet and other human knowledge, then seemed surprised when the answers came back smoothed into something broadly plausible. Of course they do. The whole system is optimised for synthesis across a huge body of contradictory human output. It's impressive. It's also generic by design.
I think this explains a lot of the disappointment too. People expect magic, ask for something vague, get back something smooth and generic, and then conclude the tool is useless. A lot of the eye-rolling from serious developers about vibe coding comes from exactly this failure mode. If you don't bring context, standards, taste, and the ability to tell when the model is bluffing, you get slop. That isn't a weird edge case. It's the default outcome.
The term "AI slop" is really just a name for what happens when you treat plausible synthesis as if it were judgment. We are still learning how to use these systems properly. The model can do a surprising amount, but only if the human using it knows where the generic output stops being helpful and starts quietly breaking the work.
That's useful up to a point. But generic AI has the same problem the web has. Too much noise, not enough context. It can produce clean sentences about almost anything, but it doesn't know what's relevant to me, what I've already read, what I believe, what work I'm actually doing, or what standards I want applied.
That's the weird part. People are effectively asking a model trained on everybody to answer specifically for them. Sometimes it can. Often it gives you the median internet with better formatting.
What's useful now isn't more access. It's the same thing I learned from cutting social media, applied to everything else. Default to silence. Only reconnect what earns its way back. Then wrap what survives in enough context that it's actually useful to you, not just plausible to anyone.
The models will keep improving, and that matters. But for a lot of real work, they already crossed the threshold of being genuinely useful. The bigger gain now is not squeezing out another jump in raw intelligence. It's adding your own context, memory, and standards so the output becomes useful to you.
That's the shift I keep coming back to. Raw model intelligence is becoming abundant. Context is the scarce layer. Once that happens, the internet and the software built on top of it stop being one-size-fits-all. They start reorganising around the individual user.
I want fewer tabs, fewer interruptions, and better starting points.
So I stopped using both of them raw.
A layer in between
For me, most of it runs through email.
If I read something interesting, I forward it in. If I have a thought, I email it. If I want something held for later, pressure-tested, or connected to other ideas, same thing.
That sounds almost stupidly boring, which is part of why it works. Email still feels almost analogue to me. Nobody expects an instant reply. A thought can sit there for a bit. There's room to think before reacting.
The specific interface matters less than what it enables. Email works for me because it handles links, questions, notes, threads, files, and random half-formed thoughts. It doesn't ask me to adopt a new habit. It gives me a simple way to route information into something that knows my context, without dragging me into somebody else's app or cadence. It's one of my two primary mediums right now because it fits how I already work. The point isn't email. The point is having a way to get things into a system that knows who you are.
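To make the routing idea concrete, here is a minimal sketch of what "email as the entry point" could look like. Everything in it is an illustrative assumption, not the actual system: the category names, the keyword rules, and the `Message` shape are all hypothetical stand-ins for what would really be model-driven classification with far more context.

```python
import re
from dataclasses import dataclass

@dataclass
class Message:
    subject: str
    body: str

def route(msg: Message) -> str:
    """Assign an inbound message to a bucket. Rules are illustrative only:
    a real system would classify with a model plus accumulated context."""
    text = f"{msg.subject} {msg.body}".lower()
    if "urgent" in text:
        return "interrupt"   # the rare thing that should break through now
    if re.search(r"https?://", msg.body):
        return "reading"     # forwarded link: turn into a usable note
    if msg.subject.strip().endswith("?"):
        return "question"    # answer against existing context, not from scratch
    return "ideas"           # half-formed thought: joins the idea pool

# Example: a forwarded article lands in the reading pile
print(route(Message("Read this", "https://example.com/post")))  # reading
```

The useful property isn't the rules themselves, it's that everything enters through one door and gets sorted before it reaches you.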
That context is the important bit. The same underlying models are available to a lot of people now. The difference is what gets wrapped around them. Two people can start with the same intelligence and end up with completely different systems because the scarce thing is no longer access to the model. It's the context surrounding it.
The system knows what I'm working on, what I've written before, what themes keep recurring, which ideas are live, which opinions I'm still forming, what my blind spots are, what "good" usually looks like in my world, and increasingly what can wait until later. It also knows the business side: what deals are in play, what's been committed to whom, what the priorities are this week, what's behind schedule. That extension feels natural to me. Building and running a software company is still mostly information, decisions, and coordination flowing through the internet. Once I stopped consuming the internet raw, it was obvious the same layer should sit across the rest of the work too. At that point it stops being just a reading filter and starts looking a lot like a company operating system. The model is just the engine. The value comes from everything wrapped around it.
Without that, AI is mostly a parlour trick. Impressive, fast, and often useless.
And yes, there are obvious objections to all this.
It can sound like workflow overengineering. A browser, a bookmarks app, and a chatbot can already get you some of the way there. The trust boundaries are real. And no, this thing is not perfect.
I'm not claiming I found the final interface to the internet. I built a starting point that already works better for me than the defaults. That's enough to keep iterating. The bigger question is why the defaults got bad enough that this became worth building at all.
Why the standard channels broke
The old internet rewarded searching. You could go looking for something and have a decent chance of finding signal. That world is disappearing.
Now the internet is full of summaries of summaries, clipped opinions, SEO sludge, and increasingly machine-written content. Even when the information is technically correct, it's often packaged in a way that destroys judgment. You consume the shape of an idea without doing the work of forming a view.
That degradation is becoming visible in odd ways. In New Zealand, we are now seriously discussing whether under-16s should be kept off social media altogether. Whatever you think of that idea, it points at the same underlying question: when did the internet get bad enough that banning access for kids started sounding like a reasonable policy response?
That's the part that worries me most. Not the noise itself, but how invisible the damage is. A chatbot gives you a fluent answer and you move on. The model doesn't know which source matters more. It doesn't know the hidden context behind your question. It doesn't know that one line in the answer is the line that quietly breaks the whole thing. And because it sounds confident, you often don't notice the weakness until it's too late.
Once the default interfaces get this noisy, you don't need a better feed. You need a filter with memory.
What my system actually does
The closest analogy is a chief of staff.
Not a chatbot. Not an assistant that waits for instructions. A chief of staff who knows what I'm working on, what matters this week, what can wait, and what I'd want to see immediately. Someone who triages, routes, briefs, and shields me from noise so I can focus on the decisions that actually need me.
That's what the system does. It sits between me and the firehose.
A forwarded article doesn't sit in a read-later pile until I forget it. It gets turned into a usable note. A half-baked thought from a walk doesn't disappear into some notes app graveyard. It joins other ideas that might eventually become a post. A question doesn't get answered from scratch every time. It gets answered against the accumulated context of what I'm already doing and how I tend to think. Something non-urgent doesn't interrupt me just because it arrived. It waits, gets filtered, and comes back later as part of a summary.
And like any good chief of staff, it delegates. When something needs deeper work, it gets routed to a specialist. Writing has its own context, its own memory of my voice, its own rules. Research has different tools and standards. Inbox triage follows a different logic from idea development. Business operations, client work, finance, and planning each have their own brief. When I find a gap, I spin up a new one.
That's how the system grows. Not by getting bigger and doing everything at once, but by adding focused capability where it's actually needed. The chief of staff decides what goes where. The departments do the work. I make the calls.
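The chief-of-staff-plus-departments structure can be sketched as a simple dispatcher. The specialist names and their one-line behaviours below are hypothetical; in practice each would be an agent with its own context, memory, and tools.

```python
from typing import Callable

# Hypothetical specialists: each "department" is just a handler here.
def writing_specialist(item: str) -> str:
    return f"draft queued against voice memory: {item}"

def research_specialist(item: str) -> str:
    return f"research brief started: {item}"

def triage_specialist(item: str) -> str:
    return f"held for the next summary: {item}"

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "writing": writing_specialist,
    "research": research_specialist,
    "triage": triage_specialist,
}

def delegate(route: str, item: str) -> str:
    """The chief of staff decides what goes where; the departments do the work.
    Anything unrecognised defaults to waiting, not interrupting."""
    handler = SPECIALISTS.get(route, triage_specialist)
    return handler(item)
```

Growing the system means adding an entry to the table, not rebuilding the dispatcher, which is what "spin up a new one when I find a gap" amounts to.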
It keeps working in the background too. While I'm asleep, out walking the dog, or in the middle of something else, it's still sorting, refining, and connecting things. Good systems know when to interrupt and when to leave you alone. They fit around your schedule. They don't demand you live inside theirs.
There is an obvious trust boundary here. You can't just pour your whole life into a black box and hope for the best. The boundary has to be drawn somewhere: what goes in, what stays out, what gets to act, what only gets to brief you. If you get that wrong, the whole thing becomes creepy fast.
And it does get things wrong. I've had it quietly filter out something I needed because it didn't match the pattern of what I usually care about. I've had it surface connections between ideas that felt insightful but were actually just reinforcing what I already believed. The more context it accumulates, the better it gets at telling me what I want to hear.
That's not a temporary flaw. It's a permanent feature of any system like this. A filter trained on your own patterns will always tend toward confirmation. The question isn't whether that happens. It's whether you notice when it does and override it. That's the job that never gets automated. Not because the technology isn't good enough, but because the whole point of the system is that a human stays in the loop making the calls.
Once an agent has memory, rules, and initiative, the trust question stops being abstract. You have to decide what it is allowed to know, what it is allowed to do, and where the human line stays.
That's why this has to be iterative. You don't get the whole system right on day one. You tighten the boundary, see what was useful, see what was noisy, and keep refining.
It also gets calibrated through use. Every correction, override, rewrite, and ignored suggestion teaches it a bit more about what I actually care about, what I don't, and where its instincts are wrong. That doesn't remove the need for judgment. It just means the filter compounds over time.
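One way to picture that compounding is a per-topic score that every override nudges. This is a deliberately crude sketch under stated assumptions: the `Filter` class, its topics, threshold, and learning rate are all made up for illustration, and a real system would learn from far richer signals than a binary correction.

```python
class Filter:
    """Toy model of a filter that calibrates through use: each user
    override moves a topic's score toward what the user actually wanted."""

    def __init__(self, threshold: float = 0.5, rate: float = 0.2):
        self.scores: dict[str, float] = {}
        self.threshold = threshold
        self.rate = rate

    def surfaces(self, topic: str) -> bool:
        # Unknown topics start at the threshold: surfaced by default.
        return self.scores.get(topic, self.threshold) >= self.threshold

    def correct(self, topic: str, wanted: bool) -> None:
        # Override: pull the score toward 1.0 (wanted) or 0.0 (not wanted).
        current = self.scores.get(topic, self.threshold)
        target = 1.0 if wanted else 0.0
        self.scores[topic] = current + self.rate * (target - current)

f = Filter()
f.correct("crypto news", wanted=False)  # one ignored suggestion
print(f.surfaces("crypto news"))        # no longer surfaced by default
```

The same mechanism is also why confirmation bias is structural: scores only ever move toward your own past reactions, which is exactly the failure mode the previous paragraphs describe.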
The output is still not final. That's the point. It gets me to a better starting point.
Once you think about it as routing instead of browsing, the interesting question becomes which interfaces actually survive.
The interfaces that actually survive
The interesting part is not that I built some software around AI. It's which interfaces ended up mattering, and the principle behind why.
Everyone wants the future to arrive as a shiny new app. In practice, the winning interfaces are the ones that disappear into behaviour you already have. Not the ones that ask you to change your habits. The ones that meet you where you are.
For me, right now, that is email and the terminal.
Email handles the routing. It's asynchronous. It works from anywhere. It doesn't demand I sit there and respond in real time. The terminal handles the focused work. When I need to build something, fix something, or think through a problem with real depth, I open a terminal with the right specialist connected to the same context. Between them, I rarely need a browser at all.
If you'd told me a year ago that my entire working setup would collapse to two interfaces, I wouldn't have believed it. I run a company through this system. Writing, research, operations, client work, planning. The stack didn't get more complex. It got radically simpler.
That may not be the final shape forever. It's just the shape that currently fits how I already work.
The pattern matters more than the tools. The useful AI layer is the one that slots into how you already work, carrying enough context about you and your work to actually filter rather than flood. The moment it asks you to change how you operate to match the tool, it's already lost.
The interface is just the entry point. The real value sits underneath, in how well the system is shaped around the person using it. That's the real principle. Not "use email." Not "learn the terminal." Find the interface you'd use without thinking and make it the entry point to a system that knows your context.
This is not just about me
What I've described is a personal system, but it points at something bigger. I wrote recently about New Zealand's three-year window, the argument that AI-native companies built clean from day one will beat legacy companies trying to retrofit AI into old structures. The same logic is showing up here at the interface layer.
If one person can collapse their working setup to two interfaces they already use, the question becomes: what happens to all the software that used to sit in between? The market is already starting to ask. Software stock valuations have been hammered. Companies are cutting SaaS licences and replacing them with AI tools. The direction is early and uneven, but it's real.
Underneath that is a simpler shift. Once the base intelligence is good enough, software stops competing mainly on access to capability and starts competing on how well it fits a specific user. Their context. Their workflow. Their standards. Their timing.
Knowledge used to compound mainly through acquisition. The person who knew more than everyone else had a durable edge because information itself was scarcer and the asymmetry mattered more. That still matters, but less than it used to. The person who can tell what matters, what doesn't, and what should be ignored entirely is often in the stronger position.
When knowledge is abundant and retrieval is cheap, the advantage shifts toward filtration.
I don't think the interesting part is which apps survive and which don't. The interesting part is what becomes scarce when the interface layer gets thinner. And I think the answer is judgment. More specifically, trusted judgment.
The judgment still has to stay with me
None of this means I outsourced thinking.
If anything, it made the boundary clearer.
The system is very good at helping me organise, retrieve, connect, summarise, and draft. It is very good at turning messy inputs into usable material and giving me something to react to across work, reading, and day-to-day decisions.
It is not good at being me.
I see this most clearly in writing because the failure mode is obvious. I can feed a set of notes into a model and get back a structurally sound draft. The sentences work. The flow is reasonable. The argument seems balanced. And sometimes that is exactly how I know it's wrong.
The model has a bias toward sounding plausible. It wants to smooth the edge off a real opinion, flatten lived experience into general advice, and produce something that nobody could strongly object to. That's great if your goal is harmless sludge. It's terrible if your goal is saying something true.
The useful work is noticing where the output became generic, where it lost the tension, where it stopped sounding like an actual person and started sounding like an educated average. That's the part I still have to do. It will always be the part I have to do.
Some people hear that and think it's a limitation. I think it's the design. A system that needs your judgment at the centre isn't half-finished. It's correctly built. The moment you take the human out of this loop, you're back to consuming averaged information with extra steps.
What this seems to imply
I think the conclusion is pretty simple.
The browser is too raw. ChatGPT and Claude are too generic. A lot of the software sitting between them is about to look like unnecessary scaffolding.
The next layer is not one better universal app. It's context wrapped around the individual user.
You don't need to be a founder or an engineer to feel this. Anyone with too much inbound information and not enough time eventually wants the same thing: less noise, more continuity, and fewer low-value decisions.
But I should be honest about who can act on it today. What I've described requires technical skill to build. I wrote the code, configured the agents, designed the routing. Most people can't do that yet. The desire for better filtration is universal. The ability to build your own is not.
That gap matters more than the technology itself. If good filtering stays something only technical people can build for themselves, it becomes another way the information-rich get richer. The people who most need help cutting through noise are the least equipped to build the tools to do it. That's not a footnote. It's probably the most important question in this whole space: who gets access to trusted filtration, and who stays in the firehose?
That's the problem I keep coming back to. But I think the shape of the solution is right even if my specific implementation isn't for everyone: raw information on one side, human judgment on the other, and a context-rich system in the middle doing the filtering. The interface into that system should be whatever you already use without thinking. For me it's email. For you it might be something completely different.
I don't think the future is everyone using the same internet through the same apps and the same chatbots. I think it's each person interacting through a layer shaped around their own work, standards, schedule, and judgment.
And I don't think the human in that loop is a temporary flaw before the system becomes fully autonomous. For anything important, it's the point.
That's already how I work now.
I don't want more information. I want better filtration.
I don't want AI to tell me what to think. I want it to help me hold more context without drowning in it.
And I don't think I'm going back to consuming the internet raw.
I'm Ben Lynch. I think about founders, AI, and what happens next from New Zealand. Say hello at ben@thinkdorepeat.ai.
New here? Start with Start Here. It's the quickest way to understand what I'm building and why I write.
If this made you think of someone who's still drowning in the firehose, send it to them.

