AI AGENTS
Your AI Agents Don't Need a Better Model. They Need a Company Brain.
AI agents stall the moment they touch real work — not because the models aren't smart enough, but because every company runs on knowledge no one wrote down. The case for a company brain.
The frustrating thing about getting AI agents to work in real scenarios isn't that the AI is bad. It's that the company runs on knowledge nobody wrote down.
The models got good. The agents still don't quite work.
If you've tried to deploy one inside a real company, you already know why. The model isn't confused about how to think. It's confused about how your company thinks. It doesn't know which onboarding configurations have caused churn for past enterprise customers. It doesn't know that contracts over $50k always loop in the COO before they go out. It doesn't know that when the payment processor flakes, you don't page the on-call engineer. You ping Maria, because Maria fixed it last time.
The bottleneck moved. It's no longer capability. It's context.
Tom Blomfield wrote a short Y Combinator RFS post last week that names this clearly. He calls the missing piece the company brain. We think he's right, and we think the framing matters more than people are picking up on.
What Blomfield actually said
The RFS is three paragraphs. The core argument is this: every company has critical know-how scattered across people's heads, old emails, Slack threads, support tickets, and databases. Humans get by because they vaguely remember where things live. AI agents can't operate that way. So we need a new primitive: a system that pulls knowledge out of the fragments, structures it, keeps it current, and turns it into "an executable skills file for AI."
He's careful to say what this isn't. It isn't company-wide search. It isn't a chatbot over your docs. It isn't a knowledge base. It's "a living map of how a company works."
That distinction is the whole game. And almost everyone writing about this is going to miss it.
Why this isn't RAG (and why the distinction matters)
When most teams hear "give the AI access to our knowledge," they reach for retrieval-augmented generation. Index the docs. Embed the tickets. Stuff the relevant passages into the context window. Done.
RAG is useful. It is not a company brain.
Here's the difference: RAG retrieves passages. A company brain encodes procedures.
A passage tells you what someone wrote down. A procedure tells you what to do. A passage might mention that refunds over $200 require manager approval. A procedure knows that this rule has two exceptions, that the manager's approval threshold was raised last quarter, that the rule doesn't apply to enterprise customers, and that the agent should escalate rather than guess when it sees a case it doesn't recognize.
You can't get there by retrieving better passages. You get there by building a different kind of object.
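To make the contrast concrete, here's a toy sketch (names, thresholds, and segments are all hypothetical, echoing the refund example above) of what it means to encode a procedure rather than retrieve a passage:

```python
from dataclasses import dataclass

# A RAG system returns text like this and hopes the model reads it correctly.
retrieved_passage = "Refunds over $200 require manager approval."

# A company brain encodes the rule itself: the threshold, the exceptions,
# and an explicit escalation path for cases it doesn't recognize.
@dataclass
class RefundPolicy:
    approval_threshold: float = 200.0          # raised last quarter
    exempt_segments: tuple = ("enterprise",)   # rule doesn't apply here
    known_segments: tuple = ("enterprise", "smb", "mid-market")

    def decide(self, amount: float, segment: str) -> str:
        if segment not in self.known_segments:
            return "escalate"                  # unknown case: don't guess
        if segment in self.exempt_segments:
            return "auto-approve"
        if amount > self.approval_threshold:
            return "require-manager-approval"
        return "auto-approve"

policy = RefundPolicy()
policy.decide(250.0, "smb")         # require-manager-approval
policy.decide(250.0, "enterprise")  # auto-approve
policy.decide(250.0, "startup")     # escalate
```

The point isn't the specific rule. It's that the exceptions and the escalation path live in the object itself, not in a paragraph the model may or may not interpret correctly.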
| Approach | What it stores | What it outputs | How it stays current | What an agent can do with it |
|---|---|---|---|---|
| Search | Documents | Links | Crawl | Show the human where to look |
| RAG | Passage embeddings | Relevant text | Re-index | Generate a grounded answer |
| Chatbot over docs | Passages + memory | Conversation | Manual | Answer FAQs |
| Company brain | Entities, procedures, policies, provenance | Executable skills | Continuous capture | Take action within known constraints |
The right way to think about it: search is for humans who already know what they're looking for, RAG is for AI that needs to sound informed, and a company brain is for AI that needs to do the job.
Where company knowledge actually hides
Before you can build a company brain, you have to admit something uncomfortable: most of what makes your company work isn't written down anywhere a model can read.
It lives in:
- The senior support rep's head, where the unwritten refund policy lives
- Slack DMs between an engineer and the person who owns billing
- The comments on a closed Jira ticket from 2023
- The CRM notes a salesperson scribbled after a discovery call
- An old Notion page that everyone half-trusts and nobody has updated since the org redesign
- A spreadsheet that's secretly a system of record
- The "we always do it this way" rules that nobody can cite the origin of
Every company has some of this. The good ones have less. None have zero. And the painful truth is that the more sophisticated your company gets, the more tribal knowledge accumulates, because the rules get subtler faster than anyone can document them.
This is the raw material. The job of a company brain is to turn it into something an agent can actually use.
Anatomy of a company brain
A useful company brain has five layers. Skip any of them and the agents you build on top will misbehave in predictable ways.
Entities. The nouns of your business: customers, orders, SKUs, accounts, incidents, deals. These already exist in your databases. The brain's job is to know what each one means in your specific context, not just what's in the schema.
Procedures. The verbs. How a customer gets onboarded. How pricing exceptions get approved. How an incident gets triaged. These are the patterns that humans currently execute from memory. Encoding them is the hard part, and the part that most teams skip and regret.
Policies. The constraints. What an agent is allowed to do without escalation. What thresholds trigger human review. What it must never do. Without this layer, agents either get over-cautious and useless or over-confident and dangerous.
Provenance. Where each fact came from and when it was last verified. Without provenance, the brain rots. With it, you can detect when the world has drifted out from under your encoded knowledge.
Skills. Executable units the agent can actually invoke. These are the verbs from layer two, but expressed as code or tool calls, not prose. This is what Blomfield means by "executable skills file."
If you only build the first two layers, you have a wiki. If you build the first four, you have a knowledge graph. If you build all five, you have a company brain.
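As a rough illustration (all names are hypothetical, not a real schema), the five layers can be sketched as plain data structures. Note how provenance attaches to individual facts rather than being bolted on later, which is what lets the brain detect drift:

```python
from dataclasses import dataclass
from datetime import date
from typing import Callable

@dataclass
class Fact:            # Provenance: every fact carries its source and age
    value: str
    source: str        # e.g. a Slack permalink or ticket ID
    last_verified: date

@dataclass
class Entity:          # The nouns: what "customer" or "incident" means here
    name: str
    facts: list

@dataclass
class Policy:          # The constraints: what needs human review
    description: str
    requires_human: bool

@dataclass
class Skill:           # Executable unit an agent can actually invoke
    name: str
    run: Callable      # code or a tool call, not prose
    policies: list     # constraints checked before execution

def is_stale(fact: Fact, today: date, max_age_days: int = 90) -> bool:
    """Provenance lets the brain flag knowledge that may have drifted."""
    return (today - fact.last_verified).days > max_age_days
```

A real implementation would look different in a hundred ways, but the shape is the point: procedures and policies are first-class objects an agent can check against, not paragraphs it has to interpret.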
What this looks like in practice
Take customer onboarding at a B2B company. It's relevant for any product where new customers need configuration, integration, or training before they reach value. Today, when an enterprise customer signs, the implementation team kicks off a plan that's some combination of a runbook, a Confluence page, and the lived experience of whoever happens to run the process. The plan that actually gets executed depends heavily on who picks up the account. One implementation lead remembers that customers in regulated industries always need a specific compliance setup. Another knows the integration with a popular CRM has a known edge case above a certain account size. A third had a painful go-live last quarter that quietly changed how they sequence the kickoff. None of this is written down anywhere a successor could find it.
A company brain doesn't replace the implementation team. It encodes what the experienced operator already knows (which configurations have caused churn before, which industries need which compliance setup, the integrations with known edge cases, the lessons from past go-lives) and lets an agent assemble the right onboarding plan for each new customer inside those constraints, surface anything novel to a human, and update the playbook when something new gets learned. The team spends less time reconstructing context and more time on the parts of onboarding that genuinely need a human: the relationship, the executive sponsor, the strategic fit.
The same shape applies to incident response (where the unwritten knowledge is which runbook has actually worked the last three times this fired, who owns the affected system right now, and what dependencies break in cascade) and to almost any operational workflow where people currently spend more time remembering where things live than actually deciding. In every case, the agent isn't replacing judgment. It's clearing the path so judgment can happen faster.
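A toy sketch of that assembly step (the rules and field names are invented for illustration): the agent composes a plan from encoded rules, each one carrying a lesson a human once learned the hard way, and escalates when it sees something the rules don't cover.

```python
# Each rule pairs a condition with a step; the comment is the tribal
# knowledge that previously lived only in someone's head.
KNOWN_RULES = [
    # Regulated industries always need the compliance setup
    (lambda c: c["industry"] in {"healthcare", "finance"},
     "add compliance setup"),
    # The popular-CRM integration has a known edge case at large seat counts
    (lambda c: c["crm"] == "popular_crm" and c["seats"] > 500,
     "apply CRM edge-case workaround"),
]

FAMILIAR_INDUSTRIES = {"healthcare", "finance", "software", "retail"}

def build_onboarding_plan(customer: dict) -> list:
    plan = ["kickoff call", "base configuration"]
    for condition, step in KNOWN_RULES:
        if condition(customer):
            plan.append(step)
    if customer["industry"] not in FAMILIAR_INDUSTRIES:
        plan.append("escalate: unfamiliar industry, human review")
    return plan
```

Every plan the agent assembles is traceable to specific rules, and anything novel routes to a human instead of being guessed at.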
We built one for ourselves
At AE Studio, we built a company brain over the past few months to solve a specific problem: too much of what made each team effective lived in the heads of the people who'd been there longest. We've been iterating on it since, improving its performance and extending it to other departments across the org. The teams that have started using it have found uses we didn't predict.
Our project success team uses it to surface patterns from past engagements (recurring stakeholder dynamics, the early signs of scope drift, sequencing decisions that correlated with smoother deliveries) and turn them into checks PMs run on active projects today. None of this is new work. Our PMs have always done this kind of pattern-matching. The brain just does the heavy lifting faster, which frees them to focus on the parts of the job that actually need a human: aligning stakeholders on the vision and coaching the team.
Our people team uses it to map who's doing what kind of AI work across the company, surfacing it to others and turning that into momentum for further adoption. Our sales team built a surface on top of it that pulls together what we've learned across past engagements, so that a strategist on a prospect call has the relevant precedents at hand instead of trying to remember which past client was analogous.
None of these were the original goal. Each one is a separate team finding that the same underlying knowledge layer powers a different workflow. That's the tell. When something is a primitive, different teams find uses for it that the original authors didn't anticipate.
We're now building them for clients
We're working with several companies right now where the engagement starts with agentic automation. They want agents handling parts of their workflow, and they want them in production. The first thing we do, before we build a single agent, is build the brain.
Two reasons.
The first is that it lets us build quickly. With the brain in place, each new agent we build for the client is mostly composition: wiring the right skills, policies, and provenance together for the workflow. Without it, every agent project is a from-scratch knowledge-extraction project disguised as an engineering project, and the engineering ends up being the cheap part.
The second is that it lets the client build for themselves after we leave. A company brain is the kind of asset that compounds. Once it exists and is being maintained, the marginal cost of the next agent (the one we're not building, the one their internal team builds six months later) collapses. They can spin up new automations on top of an asset they own and understand, instead of waiting for another agency to come back and do it for them.
That second reason is the one we care more about. Most of the AI agency work being sold right now is the first kind: project-by-project, one agent at a time, knowledge extracted and then thrown away. We don't think that's how this is going to play out. We think the companies that win the next decade are the ones that build a real knowledge layer once and then compound on it.
Where to start, if you're going to build one
You shouldn't try to encode the whole company at once. You'll fail, and you'll generate a year of demoralizing work in the process. A few patterns we've watched separate the projects that work from the ones that stall:
Start with one workflow, end-to-end. Pick the one where humans currently spend the most time remembering and the least time deciding. That's where the brain has the highest leverage.
Capture from observation, not interviews. People will tell you the policy as they remember it. The actual policy is what they do. Watching the work (Slack threads, ticket comments, transcripts of escalations) gives you ground truth that interviews don't.
Treat the skills file as code, not docs. If it lives in Notion, it will rot. If it lives in version control with tests, it will survive.
Keep humans in the loop until the failure modes are mapped. The first version of any agent will fail in ways you didn't predict. The brain helps you contain the failures. Humans help you discover them.
Build the provenance layer from day one. Knowing when each fact was last verified is what separates a brain that gets better over time from one that gets quietly worse.
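For instance, the $50k-contracts rule from earlier could live as a tested function rather than a wiki sentence (a minimal sketch; the file names and rule details are illustrative):

```python
# skills/contracts.py — a hypothetical skills file kept in version control
def contract_reviewers(value: float) -> list:
    """Encoded rule: contracts over $50k always loop in the COO."""
    reviewers = ["legal"]
    if value > 50_000:
        reviewers.append("coo")
    return reviewers

# tests/test_contracts.py — the test that notices when the rule drifts
def test_large_contracts_include_coo():
    assert "coo" in contract_reviewers(75_000)
    assert contract_reviewers(10_000) == ["legal"]

test_large_contracts_include_coo()
```

When the threshold changes, someone changes the code and the test in one commit, with a reviewer and a timestamp. That's provenance for free, and it's why version control beats Notion for this layer.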
What this means going forward
Blomfield ends his RFS with a line we keep coming back to: every company in the world is going to need one. We think he's right, and we'd add a sharper version of the same claim.
The companies that build their brain first will compound. Their agents will get better with every engagement, every ticket, every incident. The ones that don't will keep paying humans to remember things, and they'll wonder why their AI investments aren't producing the operating leverage everyone promised.
The model isn't the moat. The company brain is.