Before AI can help you, you have to know what you do
This is the one nobody wants to hear.
Before you can get real value from AI, you have to be able to describe your own work clearly. Not in a general way. In the specific, step-by-step, "here's what happens when an exception occurs" way that a system can actually follow.
Most organizations can't do this. Not because people are incompetent – because the knowledge lives in their heads, and it's never had to go anywhere else.
A bookkeeping firm I worked with spent six weeks before touching any AI tools. They documented their workflows. They wrote down the business logic they'd been carrying around implicitly for years. They made explicit every decision point that had previously been handled by "you just know when you've done this long enough."
Six weeks felt slow. It turned out to be the fastest path.
When they finally connected AI to their processes, it worked. Because the organization had become legible — to itself, first, and then to the machines.
The side effects were significant. New staff onboarded faster. Edge cases got handled consistently. The founder stopped being the single point of failure for half the firm's institutional knowledge.
This is what I call the first transition: Making the organization legible to itself.
It's not a technology project. It's a documentation and clarity project that happens to unlock AI. The questions it forces you to answer:
- What do we actually do, step by step?
- Where do the exceptions live, and who handles them?
- What's the logic we've never written down?
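To make that last question concrete, here's a hypothetical sketch of what "writing the logic down" can look like once it's explicit enough for a system to follow. The names, categories, and thresholds are mine, invented for illustration – not the firm's actual rules:

```python
from dataclasses import dataclass


@dataclass
class Expense:
    vendor: str
    amount: float
    has_receipt: bool


def categorize(expense: Expense) -> str:
    """Encode a rule that used to live in someone's head.

    Returns a ledger category, or flags the expense for human review --
    the explicit version of "you just know when you've done this long enough."
    """
    # Decision point 1: large amounts always get human eyes.
    if expense.amount > 5_000:
        return "REVIEW: amount exceeds auto-approval threshold"
    # Decision point 2: missing documentation is an exception, not a guess.
    if not expense.has_receipt:
        return "REVIEW: no receipt attached"
    # Decision point 3: known vendors map to known categories.
    known_vendors = {"Staples": "office_supplies", "AWS": "cloud_services"}
    if expense.vendor in known_vendors:
        return known_vendors[expense.vendor]
    # Everything else is a named edge case, not a silent judgment call.
    return "REVIEW: unknown vendor"


print(categorize(Expense(vendor="AWS", amount=320.00, has_receipt=True)))
# -> cloud_services
```

The point isn't the code. It's that every branch above used to be a judgment call nobody had written down. Once the decision points are explicit, a new hire, a checklist, or an AI system can all follow the same logic.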
The discomfort here is real. Making processes explicit exposes gaps, inconsistencies, and dependencies on people who might not always be there. But it also creates the foundation for everything that comes after.
Where does the undocumented knowledge live in your organization? Reply and tell me – I'm especially curious about where the institutional knowledge is most at risk right now.
— Will