Why most enterprise AI projects fail before reaching production.

Three patterns I see at every fractional engagement. Each one is fixable in a week if you spot it early. They will eat your roadmap if you don’t.

6 min read · by Sean Maraj

I have walked into seventeen enterprise AI projects in the last fourteen months. Eleven of them were already in trouble when I arrived; six of them shipped. The ones that died died for the same three reasons. The ones that shipped beat all three early.

1. The wrong owner

Enterprise AI projects get sponsored by the CIO. They should be owned by the function whose work the AI is replacing. Sponsorship is not ownership. The CIO funds infrastructure; the function lives or dies on the output. When you confuse the two, the project becomes a procurement exercise instead of a product one.

The fix: find the operating leader who would feel pain if the AI did not work. Make them the owner. Give them veto power on prompts, eval criteria, and rollout. The CIO still funds the infrastructure; the operating leader still has to live with the output.

2. The model gets the brief, not the data

Most enterprises start with the model. They argue Claude vs. GPT vs. Gemini vs. open-source for six weeks. Then they discover their data is spread across five warehouses, half their CRM is wrong, and the docs the agent needs to read are in a Google Drive nobody catalogued. The model was never the problem.

The fix: spend the first month on the data pipeline. Boring. Necessary. Audit every source, stitch every join, write the dbt models, set up the eval set. Only then does the model choice matter, and by then the choice has gotten obvious.
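What "set up the eval set" means in practice: freeze a small set of labeled examples before anyone argues about models, so every candidate gets scored against the same bar. A minimal sketch, with a hypothetical ticket-routing task and made-up examples:

```python
# Minimal eval-set sketch (hypothetical task and data, not a real pipeline).
# Freeze labeled examples first; score every candidate model against them.
EVAL_SET = [
    {"input": "Reset my password", "expected_label": "account"},
    {"input": "Where is my invoice for March?", "expected_label": "billing"},
    {"input": "The app crashes on login", "expected_label": "bug"},
]

def accuracy(classify) -> float:
    """Score any candidate classifier: a function from input text to label."""
    hits = sum(1 for ex in EVAL_SET if classify(ex["input"]) == ex["expected_label"])
    return hits / len(EVAL_SET)

# Any model -- Claude, GPT, a regex -- plugs in behind the same interface.
naive = lambda text: "billing" if "invoice" in text.lower() else "account"
print(f"naive baseline: {accuracy(naive):.0%}")  # 2 of 3 right: 67%
```

The point is not the three examples; it is that the harness exists before the model debate starts, so the debate ends with a number instead of a preference.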

3. No one wrote down what success looks like

“We want to use AI for marketing” is not a target. “Generate 200 net-new MQLs/month at $40 CPL by day 60” is. Without a target, the project becomes a science fair. The team produces demos. The board sees demos. Demos do not pay payroll.

The fix: a single sentence, one metric, one deadline. Write it on the board. Refer to it in every standup. If the project does not move that metric by week six, escalate or kill it. Do not let demos buy you another quarter.
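The week-six gate above is just arithmetic, and writing it down removes the wiggle room. A back-of-envelope sketch using the example target from this section (the numbers in the usage line are hypothetical):

```python
# Week-six gate for the example target:
# 200 net-new MQLs/month at $40 CPL by day 60. Hypothetical numbers below.
TARGET_MQLS_PER_MONTH = 200
TARGET_CPL = 40.0  # dollars per lead

def on_track(mqls_so_far: int, spend_so_far: float, days_elapsed: int) -> bool:
    """Pro-rate the monthly volume target and check cost-per-lead too."""
    expected_mqls = TARGET_MQLS_PER_MONTH * days_elapsed / 30
    cpl = spend_so_far / mqls_so_far if mqls_so_far else float("inf")
    return mqls_so_far >= expected_mqls and cpl <= TARGET_CPL

# Day 42 needs 280 pro-rated MQLs. 290 MQLs at $11,000 spend (~$37.93 CPL): pass.
print(on_track(290, 11_000.0, 42))   # True -> keep going
print(on_track(100, 9_500.0, 42))    # False -> escalate or kill
```

Either line of that output ends the standup debate; demos never enter into it.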

What the survivors do

The six projects that shipped had three things in common: a single operating owner, a month spent on data plumbing, and a one-sentence target on the wall. None of them had a fancier model. None of them had a bigger budget. They had clearer thinking and a willingness to kill what wasn’t working before it ate the quarter.

Most of the work in enterprise AI is not technical. It is editorial. Pick the right problem. Define success. Cut the scope. Then build.


Like this? Read the rest of the feed, or put me to work inside your company.

Pick the engagement that fits the gap.

Three intakes, three different sets of questions, one promise: measurable output by day 30 or I keep working.

See engagements →