🤖 It's time to talk to your friends about AI


PROMPTED:

Staff are literally quitting because they’re freaked out by what they’re seeing.

Welcome to the Prompted edition of Brainfood. If you want to subscribe to this edition directly (which will leave the Brainfood feed in a few weeks), click here.

The people at the AI labs are trying to warn you.

The exponential takeoff scenario is here, and their own jobs are being displaced. They don’t fully know what the consequences will be for you, for society, or for the economy. But there is one great metaphor for understanding their sentiment.

In a viral piece from last week, Matt Shumer drew a parallel to the COVID pandemic, which seemed far away in January; by April, countries were in full-scale lockdowns. The whiplash from “nah” to “holy crap” is palpable. And it’s coming to AI.

Back in June 2025, AI 2027 predicted that by 2026, AI would compress a decade of research into a single year by becoming good at software. What makes this striking is that the paper was regarded as wildly over-optimistic at the time.

In the past two weeks, OpenAI and Anthropic both released new models on the same day. What made them unique?


GPT-5.3-Codex is our first model that was instrumental in creating itself.

The founder of Anthropic said in a recent podcast with Dwarkesh Patel: "The most surprising thing is the lack of public recognition of how close we are to the end of the exponential."

There’s something almost polite about how some AI labs are warning us. They’re doing podcasts and writing blog posts. It is the most civilized fire alarm in history.

Some have even gone quiet. OpenAI deleted the word “safely” from its mission statement and disbanded its mission alignment team, the third safety-focused team it’s dissolved in two years. The team leader got a new title: “Chief Futurist.” Make of that what you will.

Say what you will about Musk, but he’s at least being honest about the trajectory.

The exponential takeoff scenario looks more real than ever. The researchers are saying it.

It’s happening.

How is this happening?

AI labs are using AI to build the next AI model faster.

Matt Shumer explains:

The AI labs made a deliberate choice. They focused on making AI great at writing code first… because building AI requires a lot of code. If AI can write that code, it can help build the next version of itself. A smarter version, which writes better code, which builds an even smarter version. Making AI great at coding was the strategy that unlocks everything else.

Matt Shumer

Dario Amodei, the CEO of Anthropic, says AI is now writing "much of the code." And we’re within 1 to 2 years of "a point where the current generation of AI autonomously builds the next."

At their town hall, Elon and Grok gave an overview of how the Grok code team is already using current models to train the next generation, creating an "exponential takeoff." (There’s that word again.) 

The effective workforces of AI labs will 10x and 100x over the course of 2026 as AI begins to do most of the AI improvement research. 

And to everyone who says models can’t produce novel ideas. Well. They just did:

GPT-5.2 derived a new result in theoretical physics.

Until recently, models weren't coming up with novel ideas, but they are already an army of junior researchers that can execute experiments. And if today 800 engineers at Anthropic are achieving a roughly 4x efficiency improvement year-over-year by grinding through those experiments, what happens with 8,000? 80,000? 

And then what happens when those models are doing the actual breakthroughs and new idea creation?

The debate should not be about whether this loop will occur, but about how it unfolds and what its implications are.

Writing code is becoming rare.

Job displacement is starting in AI research, but task displacement in coding is the new norm in big tech. Spotify developers haven’t written a single line of code since December.

So what are they doing?

When engineers stop coding and start managing, they move up a layer in the company. They’re deciding what to build, and they’re building new products faster. That’s good for productivity, but it also fundamentally changes the skills you need to do the job well.

Elon talked about the wild “Macro Hard” project at the same town hall I mentioned earlier. The goal is a model that can run an entire company whose output is digital. So if you do knowledge work, Macro Hard doesn’t replace you; it replaces you and everyone you work with.

That’s an entirely new category of software. 

You used to hire a department via an API. Now you can run the department with AI.

And what’s wild is that it’s starting to exhibit taste and judgment, the high ground we thought we’d hold for a while. To quote Shumer again:

The most recent AI models make decisions that feel like judgment. They show something that looked like taste: an intuitive sense of what the right call was, not just the technically correct one.

Matt Shumer

Will this come for other jobs too?

If AI has taste, what do we need writers for?

In my own experience, that quote from Matt Shumer resonates. I write a lot, and it used to be that an essay would take two days of effort to do well. I always started out knowing roughly what I wanted to say, but the research, organizing the ideas, and making it readable was an iterative, agonizing process.

And I can honestly say that’s shifted. Now I can open a Google Doc, dump in some of my thoughts and sentences, a summary of a YouTube video, some quote snippets like the ones in this article, and hand it to Gemini to restructure and then to Claude to add finesse.

What comes out the other side is usually 80% done. And what took 2 days now takes 2 to 4 hours. In fact, that’s the very reason I do this AI edition of the newsletter.

(This essay worked that way. Here’s the original Google Doc for reference. And what you’re reading now was the output of 5 or 6 back-and-forth prompts, a manual edit, including this sentence, and then another quick review with Opus 4.6 before publishing.)

So am I worried about my job? 

No, because:

  • Technology diffusion takes time. 99% of consumers and enterprises won’t use the latest AI tools. So being on the forward end of the curve is the best defense.

  • Brand and trust, built over decades in the industry. People want Simon’s take, not AI’s take. And while AI can get there sooner, it doesn’t always get to Simon’s take, even if it creates a very good one.

  • I’m not limited by a lack of imagination about what I could do and make; I’m inundated with ideas and frustrated by my inability to get to them all.

  • I like to touch grass. Play with the kids. Be IRL. And if AI takes off, and creates some kind of economic abundance, I can find meaning in just about anything. Curiosity and movement are wonderful things.

Yes, because:

  • I buy the argument that anything AI can’t do today, it will do someday. It could create a better Simon, based on all of the data on the internet, and put me out of a job.

  • Economic collapse scenarios can impact anyone. Especially those whose entire job is behind a desk.

There’s a meme: “If you work behind a desk, AI can replace you.”

But I don’t think it can. It doesn’t replace your intentions, your desires, your wants, your ability to abstract. No technology does. Fire and agriculture changed our jobs from finding the next meal to building the next kingdom. The Industrial Revolution changed our jobs. The printing press did. The computer did.

The difference is that this time it’s all happening so much faster. 

And as with any exponential, unless you pay attention, you may be shit out of luck.


(Remember the panic buying of toilet paper?)

So what should you do?

It’s the Hunger Games. Be a winner.

Or as Matt says:

This might be the most important year of your career. Work accordingly… Here’s a simple commitment that will put you ahead of almost everyone: spend one hour a day experimenting with AI.

Matt Shumer

One hour a day. That’s the prescription. Almost absurdly modest for something this big.

But he’s right. The gap between people who use these tools and people who don’t is already enormous and widening daily. Not because the tools are magic, but because the people using them are compounding their skills while everyone else stands still.

The first single-person unicorn company is born.

OpenAI has acquired OpenClaw, the personal agent that has taken the internet by storm.

OpenClaw is an open source AI agent that manages your email, browses the web, and handles tasks on your behalf. Think of it as a personal assistant that actually works.

From CNBC:

Sam Altman said in a post on X that OpenClaw creator Peter Steinberger is joining OpenAI and that the open source AI tool will "live in a foundation" inside the company.

AI agents such as OpenClaw have surged in popularity recently for their ability to automate tasks, including managing email and using online services.

"We expect this will quickly become core to our product offerings," Altman wrote.

CNBC

In the process, we may have witnessed the first one-person unicorn exit. Ever.

One person. One tool. A billion-dollar outcome. That's the world we're in now. And it's still early.

Dario thinks we’ll have AI systems matching Nobel Prize-level intellect by late 2026 or early 2027. He envisions a “country of geniuses in a data center” within one to three years.

You can dismiss that. You can also remember that the people who dismissed COVID in January were panic-buying toilet paper in March.

Talk to your friends about AI. 

The people who know are waving their arms. 

The rest of us should at least look up.

ST