Welcome to Fintech Brainfood, the weekly deep dive into Fintech news, events, and analysis. You can subscribe by hitting the button below, and you can get in touch by hitting reply to the email (or subscribing then replying)
Weekly Rant 📣
🤖 How Ramp Cracked Enterprise AI.
At Ramp, they hit 99% adoption of AI tools across the company.
And then noticed something concerning: most people were stuck.
People had no idea how to improve their setup. And let's face it, to become anything more than a basic user of AI is confusing. There are terminal windows, npm installs, and MCP configurations, which are a whole new form of gobbledegook. The few who pushed through had wildly different setups, with no way to share what they'd learned.
As Seb Goddijn put it:
"We'd created urgency without providing enough infrastructure, and it limited the true upside of AI to people who already knew how to configure it."
This made me think about every company whose earnings announcement had more mentions of AI than I've had arguments about stablecoin yield. They're all talking about AI adoption, but they're unable to show the results and benefits, because their most important asset, their people, can't get to grips with this strange, alien, obtuse, still-forming operating mode.
This is the state of AI adoption in 2026. Most people are still stuck at co-pilots, the odd image generation, and slop outputs from LLMs. Everyone has the tools. Almost nobody has figured out how to rewire the way they work around them.
And the gap between those who have and those who haven't is becoming a chasm.
One chart shows you just how wide.
The data is screaming: AI delivers revenue jumps.
Ramp data shows companies spending more on AI grow 6 to 10x faster than those spending less. And while yes, a lot of those index towards growth companies, it's also a roofing company in Texas, a window installer in Utah, and a construction firm in Florida that grew 65%.

This study found firms that used AI 44% more had 1.9x higher revenue and needed 39% less capital.
But we know adoption alone isn’t enough.
We need to harness the AI to get its true benefits.
The gap is accelerating and most companies don't feel it yet. Because they're doing their day job, the default, trying to win the old way and sprinkle some ChatGPT on top, and giving all of their engineers Cursor in the hopes that writing code faster will lead to more revenue growth and AI relevance.
It won't.
The evidence says: spend on AI, grow faster. The question is why most companies spend on AI and don't grow faster. Why 99% adoption still means everyone is stuck.
The answer is your default operating model.
The Playbook for Enterprise AI Adoption
Ramp is literally out here showing you how to run your company's operating model with AI.
This incredible playbook written by Ramp's Seb Goddijn should be lasered to the inside of your eyeballs if you're trying to figure out how to get your team to do more with AI. If you're asking McKinsey to write you an "AI Operating Model," you're doing it wrong. If you're expecting the whole company to use AI more, without creating the framework for showing them how, you're setting them up to fail. As Seb says:
"The models are already exceptional, but most people use them like driving a Ferrari with the handbrake on."
Ramp built "Glass," an internal AI productivity suite, around three principles. Each one sounds simple. Each one is deceptively hard.
There's so much here, I'm going to dedicate some word count to it because, damnit, it's pure gold. It's platinum. It's diamond. It's all of the enterprise support levels in one.
1. Onboard everyone.
With Glass, you onboard with your Okta (enterprise login), and then your AI agent is wired into every other enterprise tool. So you don't have to fiddle with MCPs, plugins, APIs, or anything else.
When a sales rep asks Glass to pull context from a Gong call, enrich it with Salesforce data, and draft a follow-up — it works, because everything is already connected. Some companies have this flow available. But the one-click install nature of it is powerful.
Glass also turns your laptop into a server so you can schedule automations. For example, a finance team lead pulls yesterday's spend anomalies every morning at 8am. An ops team built a Slack-native assistant that answers vendor policy questions by pulling from Notion and Snowflake — in an afternoon.
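Ramp hasn't published how Glass's scheduler works, so here's a hedged, stdlib-only sketch of what that "spend anomalies every morning at 8am" job amounts to: a daily trigger plus a simple outlier rule. The z-score rule and the field names are my illustration, not Ramp's code.

```python
from datetime import datetime, timedelta
from statistics import mean, stdev

def spend_anomalies(transactions, threshold=1.5):
    """Flag transactions whose amount sits more than `threshold`
    standard deviations above the mean of the batch."""
    amounts = [t["amount"] for t in transactions]
    if len(amounts) < 2:
        return []
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:
        return []
    return [t for t in transactions if (t["amount"] - mu) / sigma > threshold]

def next_daily_run(hour=8, now=None):
    """Compute when a daily job scheduled for `hour`:00 should next fire."""
    now = now or datetime.now()
    run = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    return run if run > now else run + timedelta(days=1)
```

The point isn't the statistics; it's that once your laptop (or anything) can run this loop against already-connected data, "pull yesterday's anomalies at 8am" stops being a project and becomes a sentence.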
That's onboarding.
2. Build re-usable skills and make them available to everyone.
The failure mode wasn't that people couldn't figure things out. It was that everyone had to figure things out alone.
Skills are the solution here. Power users build a skill file, which is like the scene from The Matrix but for AIs. "I know kung fu" becomes "I know how to create UI outputs in the Ramp house style" or "I know how to analyze sales call transcripts as well as the best salesperson / AI user in the company."
Ramp then built a marketplace of over 350 reusable skills. This simple trick means anyone in the company can now use AI as well as the best person at using AI for that one thing did that one time. Except. Always.
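Ramp hasn't published the skill file format, but conceptually a skill is a reusable bundle: expert instructions plus the tool connections it depends on, invokable by anyone. A hypothetical sketch (every name and field here is mine, not Ramp's):

```python
# A hypothetical skill: the best rep's approach, captured once, reused forever.
SALES_CALL_SKILL = {
    "name": "analyze-sales-call",
    "description": "Analyze a call transcript the way our best rep does.",
    "requires": ["gong", "salesforce"],
    "instructions": (
        "Identify the buyer's top objection, any competitor mentioned, "
        "and draft a follow-up in the Ramp house style."
    ),
}

def render_prompt(skill, connected_tools, context):
    """Turn a skill file into a concrete prompt, refusing to run if the
    user hasn't connected the tools the skill depends on."""
    missing = [t for t in skill["requires"] if t not in connected_tools]
    if missing:
        raise ValueError(f"connect these tools first: {missing}")
    return f"{skill['instructions']}\n\n---\n{context}"
```

The design point is the `requires` check: a marketplace skill only works if the one-click onboarding from principle 1 has already wired in the tools, which is why the two principles compound.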
They also built a Sensei — an AI guide that looks at which tools you've connected, your role, and what you're working on, and recommends the five skills most likely to matter on day one.
I mean, I want this.
3. Per-user persistent memory.
ChatGPT has "memory," but that really manifests as it randomly recalls that you looked into Peptides a month ago while you're asking a question about taxes.
Glass remembers you through a full memory system built from your authenticated connections that gives every session context on who you work with, your active projects, relevant Slack channels, Notion docs, Linear tickets. A synthesis pipeline runs every 24 hours, mining previous sessions and connected tools for updates.
This memory system, by itself, is one of the hardest things to get working on your own. The fact that the memory just builds up around your day-to-day AI usage is nirvana.
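To make the 24-hour synthesis pipeline concrete, here's a minimal sketch of what one pass might do, assuming a flat per-user fact store where newer observations win and stale facts expire. The data shapes and the 30-day window are my assumptions, not Glass internals.

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=30)

def synthesize(memory, mined, now):
    """Merge facts mined from the last 24h of sessions and connected tools
    into per-user memory. Each fact is (value, last_seen). Newer
    observations win; facts unseen for STALE_AFTER are dropped so the
    profile tracks your current projects, not last quarter's."""
    merged = dict(memory)
    for key, (value, seen_at) in mined.items():
        prev = merged.get(key)
        if prev is None or seen_at >= prev[1]:
            merged[key] = (value, seen_at)
    return {k: v for k, v in merged.items() if now - v[1] <= STALE_AFTER}
```

Even this toy version shows why it's hard to bolt on yourself: the mining step needs authenticated access to Slack, Notion, and Linear in the first place, which is the onboarding problem all over again.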
Why they built it in-house:
"Internal productivity is a moat. The companies that make every employee effective with AI will move faster, serve customers better, and compound advantages their competitors cannot match. That makes internal AI infrastructure part of your moat, and you do not hand your moat to a vendor."
Every growth company founder or CxO I speak to is trying to figure out how they get their arms around their company's AI adoption, and I think Ramp just gave you the answer. Building your own "Glass" might be quite a hard endeavor, but it's also one you may have to take on.
Can any other company do this? Ramp has a rare combination: a deep engineering culture and a product that already required this kind of tooling. But guess what? A small team vibe coded Glass! You can do this. You can.
Every company's internal tooling, workflows and security is idiosyncratic. I doubt a vendor solution for a "company harness" will solve your needs, unless that harness is some wildly customized thing.
If you want more on this, Seb has agreed to speak at Fintech Nerdcon this year on how they built Glass. So grab a ticket sharpish to make sure you don't miss that one. I'll be making time.
Companies need to break out of their default.
The founder of Shopify posted this tweet:
This typifies how big companies feel credible spending billions upgrading software that generates a CSV file with 450k rows in it twice a month. A statement so ridiculous I want you to say it out loud just to hear it. And yet. It gets approved. Consistently. Everywhere.
Companies and people take on weird habits as they age. Their brains (and the company's internal culture) build a default mode network. A way things work model. The sun rises, it falls, summer follows spring. It's perfectly fine to spend billions on software for CSV files. That kind of thing. These brain links become deep channels that grow less open to new ideas and new ways of doing things.
There's no garbage collection.
In orgs this shows up as governance forums, operating models, and software stacks that over time become their default mode network. Budget cycles, audits, and testing look almost identical today as they would have done 10 or even 20 years ago in most enterprises. All that's changed is the version number on their Microsoft suite.
Why? Credibility is why. Results you can see is why.
A COO of a growth company, or CEO of a bank division, can probably get better short-term results by cost-cutting and hiring more enterprise sales staff than by fundamentally rewriting the company around AI, which is hard and has unknown outcomes. The safer path is to chase the results you can see, not the unknown.
Their bonus, their board, their next promotion all reward the visible, measurable, short-term win that we all secretly, deep down, kind of know is lame.
Elon's solution is to delete a process or step, and if you're not adding back 10%, you didn't delete enough. That's disruptive to getting things done. It's hard to do while also doing the day job.
A child doesn't have these constraints.
To a child, anything is possible, and everything has a sense of wonder and awe to it. Everything is play.
Young companies, too, don't have these constraints. But what they have now is superpowers. The ability to build, the ability to grow revenue faster than at any time in history, and to do it with far fewer people.
You might not be feeling the crack in the ice. You can still get credibility in your organizational hierarchy from doing things the way you always did.
But the ice is cracking.
Bring Prototypes not Decks
Jack Dorsey just cut 40% of Block. When asked why, he said:
"If we rebuilt this company today, would it look like this? The answer was uniformly no."
A lot of CEOs have sent the memo "AI or Die" (heck, I even heard the IMF sent that to their team). But how many are changing how they work?
At Block, two months later, every meeting had changed. Nobody brings a slide deck anymore. They bring a working prototype. Built that morning. With real data. And they modify it in the room, in real time, together.
The memo is dead. The deck is dead. The prototype is the meeting.
This is what breaking your default mode network looks like in practice. It's showing up Monday morning with something you built over the weekend using AI, and everyone in the room going "oh, we can do that?"
That's play.
You know what is impossible to compete with? Someone who's having fun.
And AI can be so fun, when you're shipping things you never thought possible.
AI regularly makes me feel like a kid. Sometimes lost in the joy of doing something new, marveling at pure magic — creating a Fintech Nerdcon app, or a second brain, or it just DOING my expenses across 5 email accounts. And also like a kid because the next day it didn't work, and it's frustrating that you can never quite get all the way with it without mom and dad (an actual engineer), helping.
The most important, grown-up, enterprise skill of the post-AI era is to indulge your playful side. And immediately, I'm thinking about every grey-suited executive inside a large organization who instantly dismissed everything I've said up until this point. Because "play" doesn't feel compatible with spreadsheets, and seriousness and revenue, and budgets and blah.
But play is simply the most efficient way to take on new learning. That’s why it evolved.
That doesn't mean throw out the experience you have. You have decades of taste and judgment (hopefully). AI gives you the means to act on that at machine speed.
But only if you make time for it.
Play looks like blocking midday to 4pm on a Friday for no meetings and rewriting your default mode network. Doing something NEW that changes your workflow and how you do things. It could be co-work, it could be a Claude project, it could be just trying it with AI first.
The risk of doing nothing is clear. You go the way of the rest of SaaS — eroding into irrelevance, low multiple, low growth. Credible. Existing. Not growing.
The upside is becoming the fastest-growing company in your category.
This is an incredible time to be alive. The evidence is here. Companies are literally publishing their playbooks. All you have to do is pay attention.
Attention is all you need, after all.
ST.
PS. I didn’t speak to anyone at Ramp for this piece (other than to DM Seb about giving a talk at Nerdcon, yes, tickets are available now). It’s just kind of nuts how much Ramp is writing and giving away as X articles right now. It’s clearly for recruiting. But if you’re an operator in tech, there’s a ton to learn.
Want to read the rest of Brainfood? You're going to need to read this on the web. The Things to Know this week are worth your time!
4 Fintech Companies 💸
1. Prism Layer - Group Risk and Control (GRC) for the AI era
Prism Layer connects critical risk info trapped across dozens of unconnected systems, to save teams from spending weeks stitching it together. Prism Layer encodes your risk appetite, thresholds, and frameworks as executable logic, pulls real signals from your systems, and produces structured, defensible outputs.
🧠 Every Fintech company and bank has to have systems and processes to manage risk. The reality of today's GRC stack manages artifacts (policies, audit logs, reports), but nobody actually applies the ERM framework day to day. The difference between having a risk policy PDF and having a risk policy that runs. The team is ex-Block, so sits at a nice intersection of having done this at scale, regulated but with modern tech. And as “Fintech” companies become AI-pilled, I wonder how many companies like this will spin out and build interesting companies for the rest of the industry. Especially as headcount is reduced.
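The "risk policy that runs" idea can be made concrete. A minimal sketch, assuming a simple threshold framework; the rule shape, field names, and limits below are mine for illustration, not Prism Layer's actual product:

```python
from dataclasses import dataclass

@dataclass
class RiskRule:
    metric: str       # e.g. a ratio your ERM framework tracks
    limit: float      # board-approved appetite threshold
    severity: str     # what a breach escalates as

def evaluate(rules, signals):
    """Run every encoded rule against live signals from your systems and
    return structured, defensible breach records instead of a PDF."""
    breaches = []
    for rule in rules:
        value = signals.get(rule.metric)
        if value is not None and value > rule.limit:
            breaches.append({
                "metric": rule.metric,
                "value": value,
                "limit": rule.limit,
                "severity": rule.severity,
            })
    return breaches
```

The gap between this and a policy PDF is exactly the pitch: the PDF describes the threshold; the code checks it every time the signals update.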
2. Numos AI - The AI finance teammate for accounting and FP&A
Numos automates the accounting and FP&A grind: month-end close, AP/AR, reconciliations, variance analysis. Every output is auditable and shows its reasoning so the CFO can trace the logic before signing off.
🧠 I’ve seen maybe 15 of these; what matters will be traction. Seeing so many companies doing the same thing tells you something: the finance team's pain is real. They want to be doing strategic work, not admin. Whether this is a company or a feature, I remain unconvinced. Surely every expense management / B2B spend platform like Ramp or Brex will just do this soon?
3. Gangkhar - AI-native embedded insurance
Gangkhar provides infrastructure for embedding insurance products into digital platforms across Latin America. Carriers and digital companies can configure, launch, and optimize protection products through its Sherpa+ platform, which handles onboarding, pricing, underwriting, and claims with “real-time AI optimization.”
🧠 Latin America's insurance penetration is 3.1% of GDP — half the global average. 50% of banked individuals are uninsured. That's a massive gap and embedded is probably the right distribution model to close it. Embedded insurance has been "about to break out" for 5 years now. The bottleneck was never technology, it's carrier willingness to underwrite populations they've never priced before. Where’s the AI? Well it’s doing the actuarial work, and the question is will carriers be ok with that? Facing both carriers and digital platforms is a double sided market challenge.
4. finperks - The Stripe for prepaid
Finperks provides API infrastructure for prepaid programs to enable features like cashback, rewards, employee benefits, loyalty point conversion, crypto off-ramps, and agentic prepaid wallets. Already live with 1,000+ brands across 30 markets including Zalando and Flix.
🧠 There isn’t really a canonical “Stripe for Prepaid.” Prepaid is a $4 trillion market by 2035 and it’s still very hard to implement in practice. The interesting bit is "agentic prepaid wallets" — if agents need spending capability, prepaid rails are a natural fit (no credit underwriting, instant issuance, controlled limits).
Things to know 👀
Amex says the developer focused “Kit will enable intent-driven agentic transactions on American Express’ trusted network, backed by Amex Agent Purchase Protection™, a first-of-its-kind commitment to protect registered AI agent purchases.”
🧠 Amex is providing protection even if your agent makes a mistake. This liability-forward positioning is really unique. While other networks have chargebacks, liability may vary. Target says consumers are liable for their agents. The card networks are focusing on technical standards.
🧠 Amex is quite different in having more of a closed loop. It is often the issuer and acquirer. They’re not dependent on a chain. They also charge more interchange, and tend to see consumer protection as a major brand win for them.
🧠 Agentic commerce is all over the place. Walmart closed its in-LLM checkout because of poor conversion. Amazon’s Rufus is succeeding, but blocks 3rd party LLMs. Amex's bet: we have all sides of the network, we can make this experience seamless. This will take time to settle down.
Revolut published an academic paper about its foundation model called PRAGMA. Trained on 26M users, and co-authored with NVIDIA. No other neobank has published anything close.
🧠 The results vs their base ML models are staggering!
Credit scoring up 130% vs their base ML model
Fraud recall up 65% vs their base ML model
Marketing engagement prediction up 79% vs the base model
Product recommendation up 40%
This one foundation model replaces six separate ML pipelines.
🧠 Critically, this is not a large language model, nor a company packaging an LLM to do internal workflows. Instead they took customer event data (like logins, payments, clicks) and created a set of embeddings of that behavior over time.
🧠 This isn't a panacea, however. For AML, the model was -47% vs their production system. And this tracks because AML is a network problem not a you problem. What matters is who you transact with, not your events and what you've done in the past. PRAGMA was trained on individual users, not network connections. (although, that makes me wonder if network behavior itself would be another interesting candidate for a transformer model).
🧠 This is the first time I've seen true data science and evidence on the effectiveness of a foundation model for banking. I’m SHOCKED we haven’t seen more of this.
🧠 There should be an arms race for the best foundation models for banking. The bank with the best foundation model gets a structural edge across every decision it makes. (I suspect this is happening, and people just aren’t publishing findings)
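Revolut's paper describes turning raw customer events into sequences a transformer can embed. As a hedged, stdlib-only sketch of that preprocessing step (the event names, special tokens, and sequence length are my illustration, not PRAGMA's actual pipeline):

```python
def build_vocab(event_streams):
    """Assign each distinct event type a token id, reserving 0 for
    padding and 1 for event types never seen in training."""
    vocab = {"<pad>": 0, "<unk>": 1}
    for stream in event_streams:
        for event in stream:
            if event not in vocab:
                vocab[event] = len(vocab)
    return vocab

def encode(stream, vocab, max_len=8):
    """Turn one user's event history (logins, payments, clicks) into a
    fixed-length token sequence: the input a behavioral foundation
    model would embed, in place of six task-specific feature pipelines."""
    ids = [vocab.get(e, vocab["<unk>"]) for e in stream[-max_len:]]
    return ids + [vocab["<pad>"]] * (max_len - len(ids))
```

The actual model sits downstream of this, but the sketch shows the key shift: one shared representation of behavior over time, reused across credit, fraud, and marketing heads, instead of hand-built features per task.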
🥊 Quick hit: Slash Money raised $100m at $1.4bn valuation. A company with $250m ARR and 60 (yes, sixty) employees. I wrote up my full thoughts on Social HERE.
Good Reads 📚
DeFi lending protocols like Morpho and Aave are now getting mature enough to warrant closer inspection as a genuine alternative to traditional lending. “This one is for the nerds.” People are beginning to treat DeFi yield as “savings accounts” or a private credit fund. But Luca argues the risk of what they’re buying is not matched by the returns they generate. And he makes that argument with maths.
🧠 If you’re paying attention to the stablecoin yield debate OR the private credit sector, the point Luca makes here is worth your time. Here’s Claude’s full breakdown for those of you who are non-math nerds.
🧠 TL;DR. Most consumers don’t know what is driving their DeFi yield or the risk they’re taking. Regulation limits other sources of yield under GENIUS. Nothing bad has happened (yet). And Morpho token printing is subsidizing both sides currently, reducing the risk cost.
Tweets of the week 🕊
That's all, folks. 👋
Remember, if you're enjoying this content, please do tell all your fintech friends to check it out and hit the subscribe button :)
Want more? I also run the Tokenized podcast and newsletter.
(1) All content and views expressed here are the authors' personal opinions and do not reflect the views of any of their employers or employees.
(2) All companies or assets mentioned by the author in which the author has a personal and/or financial interest are denoted with a *. None of the above constitutes investment advice, and you should seek independent advice before making any investment decisions.
(3) Any companies mentioned are top of mind and used for illustrative purposes only.
(4) A team of researchers has not rigorously fact-checked this. Please don't take it as gospel—strong opinions weakly held
(5) Citations may be missing, and I’ve done my best to cite, but I will always aim to update and correct the live version where possible. If I cited you and got the referencing wrong, please reach out

