Why Manufacturing AI Projects Fail (And It's Not the AI)
Ali Parandeh, Founder at Build Your AI
Manufacturing companies sit on data that should be a goldmine for AI. Sensors on production lines. Decades of maintenance records. Equipment failure patterns. The business case is obvious: predictive maintenance, defect detection, cost prevention.
But most AI pilots never make it to production. They die in the lab while the company that dreamed them up spends millions on the next attempt.
Ali Parandeh consults on this problem across rail, aerospace, automotive, and construction. His diagnosis isn’t that the AI is broken. It’s that the thinking is broken.
“They tried to build an agent and productionize it without properly testing it. They didn’t spend enough time during the pilot and prototyping stage making sure that first, the edge cases and bad cases are handled. And second, that the users will actually adopt it.”
This is a product problem, not a technology problem. Manufacturing companies are building features. They’re not building products.
The Two Ways AI Projects Die
There are two failure modes. Parandeh sees both repeatedly.
The first is technical. “The agents don’t have enough of a security layer. They don’t handle edge cases well. They’re not reliable enough. They’re not production grade.”
But reliability is table stakes. The second failure mode is where most projects actually die.
“They don’t have an AI adoption strategy. They don’t have a strategy for how to get people to start using it. Some companies just enforce usage. That’s one strategy. But if you build the tool and spend all this money and users don’t use it, that’s a waste of money.”
Think about that gap. A manufacturing company builds a system that’s technically sound but nobody trusts. The operator doesn’t trust the AI recommendation. The engineer doesn’t believe the anomaly detection. The manager doesn’t think it’s worth retraining people on. So it sits. And because it sits, they never learn what it’s actually good at.
The classic pattern: a predictive maintenance AI that works on test data completely fails to predict anything useful in the real environment. Not because the model is bad, but because the factory floor engineer doesn’t feed it the right data, doesn’t believe the alerts, and doesn’t change their maintenance schedule based on it.
Why Manufacturing Pilots Are Harder Than You Think
Manufacturing has constraints that software doesn’t. A failed email classification is a minor annoyance. A failed anomaly detection on a production line that makes safety-critical components is a liability.
“In safety-critical applications,” Parandeh explains, “the goal is not to replace humans, but to assist humans. AI can remove a layer of human error. Humans make mistakes when they get tired, especially during monotonous work. There’s an example with TfL trains in London where someone was run over by four different trains at night because the trains are all automated. When the human operator does nothing most of the time, your brain goes on autopilot and you make mistakes.”
The solution wasn’t to remove the operator. It was to add a computer vision layer that alerts the operator when someone is on the tracks. AI as an assistant, not a replacement.
This is the adoption challenge: getting engineers and operators to trust an AI recommendation enough to act on it, but not enough to stop thinking. That balance requires product thinking.
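The assist-not-replace pattern described above can be sketched in a few lines. This is an illustrative Python sketch, not the actual TfL system: the camera IDs, threshold, and model output are all hypothetical, and the key design choice is that the AI only redirects the operator's attention rather than acting on its own.

```python
from dataclasses import dataclass
import time

@dataclass
class TrackAlert:
    camera_id: str
    confidence: float
    timestamp: float

def should_alert(confidence: float, threshold: float = 0.6) -> bool:
    # A deliberately low threshold biases toward false positives:
    # in a safety-critical assist role, a spurious alert costs a
    # moment of attention; a missed one can cost a life.
    return confidence >= threshold

def notify_operator(alert: TrackAlert) -> None:
    # The AI never stops the train itself. It raises the alert
    # and the human operator makes the call.
    print(f"[ALERT] camera={alert.camera_id} "
          f"confidence={alert.confidence:.2f} -- check the track")

detection_confidence = 0.82  # hypothetical model output
if should_alert(detection_confidence):
    notify_operator(TrackAlert("platform-3", detection_confidence, time.time()))
```

The interesting part is what the code does not do: there is no automated braking path. The system's entire job is to defeat the autopilot effect Parandeh describes, by pulling the operator back into the loop at the moment it matters.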
“Users need to adopt it. They need to trust it. They need to understand how to use it. And the company needs to build an adoption strategy that makes that happen.”
How Pilots Should Actually Work
Parandeh runs pilots differently. He starts with discovery, not models.
“For each idea that comes up, we figure out what’s the best way to approach it. We figure out what’s the best project management approach, what’s the best way to explore the datasets, and then we plan it. Instead of just jumping straight to building the thing, you start with business planning and requirements planning first.”
This sounds like overhead. It’s the opposite. It’s the difference between building an AI system that works in the lab and one that works on the factory floor.
“Manufacturing companies have a stage-gate process. They do business planning, requirements planning. Then preliminary design, detailed design, construction, validation, commissioning, maintenance, decommissioning. It’s a waterfall approach. But with AI, it’s very iterative. You don’t know what you’re doing. It’s like you’re in a dark room.”
The insight is combining the two. Use stage gates to force clarity before you start building. Then iterate rapidly within each stage. And crucially: involve the people who will actually use the system from day one.
“The product mindset is always needed everywhere — whether you’re building a product for external customers or internal customers, the same thing. You’re helping with the strategy, figuring out how to scope projects and how to get the best ROI.”
The Adoption Infrastructure
This is where manufacturing companies typically fail. They assume “build it and they will come.” They don’t.
Parandeh’s adoption infrastructure for a successful pilot includes:
1. User involvement early. Not after you’ve built the thing. During design, during testing, during validation. The engineer who will use the system helps design it.
2. Training and documentation. Not a manual nobody reads. Real training on when to trust the AI, when to override it, how to interpret the output. This takes time, but it’s faster than rebuilding the system when nobody uses it.
3. Feedback loops. How does the operator tell the team when the AI is wrong? How does that feedback get back to the model? How does the system improve? If there’s no loop, trust degrades quickly.
4. Clear metrics. Not “the model is 95% accurate.” Metrics the business cares about: “downtime prevented,” “maintenance costs saved,” “safety incidents avoided.” The operator needs to see that the AI is delivering value.
Without these four, you have a system. With them, you have a product.
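Items 3 and 4 above are concrete enough to sketch. The structure below is a minimal illustration (the field names, the `adoption_rate` metric, and the sample records are assumptions, not anything from Parandeh's actual projects): every AI recommendation is logged with whether the operator followed it and why not, which gives you both the trust metric and the raw material for the next model iteration.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    asset_id: str
    action: str              # e.g. "schedule maintenance"
    followed: bool           # did the operator act on it?
    operator_note: str = ""  # why it was overridden, if it was

@dataclass
class FeedbackLog:
    records: list = field(default_factory=list)

    def record(self, rec: Recommendation) -> None:
        self.records.append(rec)

    def adoption_rate(self) -> float:
        # Share of AI recommendations operators actually acted on --
        # the number that tells you whether you have a product or a system.
        if not self.records:
            return 0.0
        return sum(r.followed for r in self.records) / len(self.records)

    def overrides(self) -> list:
        # Overridden recommendations, with the operator's stated reason,
        # are what feeds the loop back to the model team.
        return [r for r in self.records if not r.followed]

log = FeedbackLog()
log.record(Recommendation("press-7", "schedule maintenance", followed=True))
log.record(Recommendation("press-9", "schedule maintenance", followed=False,
                          operator_note="vibration reading looked normal"))
print(f"adoption rate: {log.adoption_rate():.0%}")  # 50%
```

A falling adoption rate is the early warning this article is about: the model may still be accurate while the product is already dying.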
The Opportunity
The good news: most manufacturing companies are terrible at this. Which means the ones that figure it out have a massive advantage.
“Companies like Rolls-Royce, Boeing, fiberglass producers, automotive companies — they’re starting to use AI across various scenarios. They’re seeing real results. Transport for London, Network Rail, manufacturing companies, they’re all doing it. Even in defense and construction.”
The difference between the companies winning and the ones burning money is this: the winners treat AI pilots like product projects. They start with requirements, involve users, plan for adoption, measure success in business terms.
Parandeh has seen it work. A fiberglass company used computer vision on their production line and predicted line failures seconds in advance. Automotive companies are predicting maintenance needs. Construction companies are predicting cost overruns.
“These cases are successful because they spent time doing the discovery, the requirements planning, the user adoption strategy, and the business case before they started building. And they didn’t just build it and throw it over the wall. They managed the adoption and they measured the outcomes.”
For manufacturing, the AI is the easy part now. The hard part — and the part that separates success from the graveyard of failed pilots — is everything else.
FAQ
What percentage of manufacturing AI projects actually make it to production?
Most don’t. Parandeh sees this repeatedly across the sector. A project looks promising during testing, but after deployment, adoption is low and the company moves on to the next pilot. The exact percentage is unknown, but industry observation suggests fewer than 30% of pilots become operational systems.
Why do engineers and operators distrust AI recommendations?
They’ve been bitten. Either the AI has been wrong before, or they’ve seen it fail in related projects. In safety-critical work, distrust is rational. The AI has to prove itself repeatedly before it earns trust. This takes time and adoption infrastructure that most projects skip.
What’s the difference between building an AI system and building an AI product?
A system is technically sound. A product is technically sound AND adopted by users AND delivers measurable business value. If users don’t trust it or don’t know how to use it, it’s a system sitting in the lab, not a product being used.
How much time should you spend in discovery before building?
Parandeh spends 10 days with a manufacturing company in discovery sessions. For a project that will run six months and cost hundreds of thousands, 10 days (a small fraction of the timeline) ensures you’re solving the right problem. It’s a bargain compared to building the wrong thing.
What happens if you skip the adoption strategy?
The system works in testing and fails in production. Users don’t use it. You spend money retraining the model on production data, but adoption never improves because the underlying issue isn’t the model — it’s that users don’t trust it or don’t see the value. You end up rebuilding from scratch or killing the project.
How do you get manufacturing engineers to believe an AI recommendation?
Transparency about how the AI works and why it made that specific recommendation. Real data, not just a confidence score. Gradual adoption where humans make the decision but the AI influences it. And measurable proof that following the AI’s recommendation led to good outcomes. That evidence builds over time.
What’s the business case for predictive maintenance?
Unplanned downtime costs manufacturing companies tens of thousands per hour. If AI can predict failures days in advance, the company schedules maintenance during planned downtime and avoids catastrophic failures. But the AI has to predict accurately and operators have to act on it. That adoption gap is where most projects fail.
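The business case above is simple enough to put in numbers. Every figure in this sketch is an illustrative assumption (not from the article), but the structure of the arithmetic is the point: the share of alerts operators actually act on multiplies the entire return, so the adoption gap scales ROI directly.

```python
# Illustrative numbers only -- all assumptions, not from the article.
downtime_cost_per_hour = 50_000    # unplanned downtime, USD/hour
failures_per_year = 6              # catastrophic failures on the line
hours_lost_per_failure = 8

recall = 0.7        # share of failures the model predicts in time
action_rate = 0.5   # share of correct alerts operators act on

baseline_loss = failures_per_year * hours_lost_per_failure * downtime_cost_per_hour
avoided = baseline_loss * recall * action_rate

print(f"baseline annual loss: ${baseline_loss:,.0f}")
print(f"avoided with AI:      ${avoided:,.0f}")
```

With these numbers the model's accuracy caps the upside at 70%, but low adoption halves it again. Doubling `action_rate` is worth as much as any realistic improvement to the model, which is the article's thesis in one multiplication.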
Should manufacturing companies buy an off-the-shelf AI solution or build custom?
Off-the-shelf works for common problems (standard equipment, standard failure modes, lots of training data). Custom is needed for niche equipment, proprietary processes, and safety-critical applications. Either way, the adoption strategy is more important than the model choice.
How do you measure success for an internal AI project?
Not accuracy. Business metrics: downtime prevented, maintenance costs saved, safety incidents avoided, or quality defects reduced. The model can be 99% accurate and the project can fail if it doesn’t improve these metrics or if users don’t adopt it. Measure what matters to the operator and the CFO.
Why do manufacturing companies struggle with this more than software companies?
Manufacturing has longer decision cycles, higher cost of failure, more legacy systems, and safety regulations. The stakes are higher, so adoption is harder. But that also means the companies that get it right have a durable advantage. Software companies can iterate fast. Manufacturing must get it right the first time or the project dies in the stage-gate process.
What would Ali recommend to a manufacturing company starting an AI pilot?
Start with requirements planning, not modeling. Understand the problem first. Involve the people who will use the system. Plan for adoption and training as part of the project, not an afterthought. Measure success in business terms that matter to the operator. That foundation prevents most of the failures he sees.
Full episode coming soon
This conversation with Ali Parandeh is on its way. Check out other episodes in the meantime.
Related Insights
Your Engineering Team Is 2-3 Years Behind on AI (And You Know Why)
Ali Parandeh, Founder at Build Your AI
B2B vs Consumer Voice Agents: Why They're Not Built the Same Way
Tom Shapland, PM at LiveKit
Why Vertical Integration Is the Only Way Deep Tech Actually Works
Jorge Colindres, Cofounder at Radical AI