Your Engineering Team Is 2-3 Years Behind on AI (And You Know Why)
Ali Parandeh, Founder at Build Your AI
The software industry obsesses over the next GPT release. The aerospace industry is still debating whether it’s safe to use ChatGPT in a Slack channel.
This isn’t exaggeration. Ali Parandeh, a chartered mechanical and software engineer who consults across rail, construction, automotive, and aerospace, sees it firsthand: heavy industries run two to three years behind software on AI adoption. Not by accident. By institutional design.
“When I was running training sessions with heavy industries,” he explains, “I found that a lot of engineers are still unaware of what these tools can do. They’re not aware of the limitations and capabilities. Some companies have blanket bans on LLMs. Others don’t allow it on work machines. People are installing it on their personal laptops and sneaking it in.”
This gap exists because the rules that make sense for safety-critical systems create friction for everything else. And the friction compounds.
Why the Caution Is Real
It’s not paranoia. “With most engineering sectors, if you make a mistake, it can cause a lot of problems,” Parandeh notes. “These companies want someone with enough AI expertise to teach them, or to consult. There’s a lot of liability.”
An error in a mobile app is a support ticket. An error in a bridge design is a catastrophic failure. These aren’t equivalent risks, so the caution isn’t equivalent either.
But that appropriate caution bleeds into inappropriate paralysis. Companies ban all LLM use instead of figuring out which use cases are safe. They require approvals that kill agility. They don’t train engineers on what AI can and can’t do, so engineers either avoid it or use it recklessly.
The result: while software engineers are shipping with AI-assisted code review, requirements analysis, and testing, engineering companies are still checking CAD files and requirements documents by hand.
The Real Cost
This isn’t harmless conservatism. It’s expensive.
“Companies that don’t innovate in the AI age will be left behind. I’ve been focused on this space because a lot of these industries have been around for years. Their software has been around for years. It was developed by incumbents and hasn’t been innovated yet in the AI age.”
Parandeh sees outdated requirements management software in aerospace and rail that’s been essentially unchanged for a decade. He sees manual processes that should be AI-assisted still handled by spreadsheets and emails. He sees junior engineers wasting hours checking requirements compliance instead of thinking about the system.
Meanwhile, the software moving into these industries is getting faster, smarter, and increasingly AI-native. The gap widens.
“There are a lot of case studies online where engineering companies have started adopting AI — Transport for London, Network Rail, manufacturing companies like Rolls-Royce, Boeing. They’re all using AI. Even in defense, automotive, construction. The companies doing it see real results.”
But adoption is still a minority sport. Most heavy industry companies are still in the debate phase.
The Training Gap
The adoption lag compounds because nobody’s teaching engineers how to use these tools safely.
“When I was a head of engineering at a previous company, I would assume that people are going to use AI, so my expectation from them increased. But what I found is a lot of people didn’t even know how to use Claude or ChatGPT properly.”
Parandeh now runs training programs through UK professional institutions, teaching heavy industry engineers what AI can actually do. The response is hunger for knowledge.
“I created Build Your AI to help increase awareness in the sector about the technologies and how they work, their limitations and capabilities. And I do that through training through professional institutions that they trust, like the Institution of Mechanical Engineers and Engineers Ireland.”
But institutional training takes time. Meanwhile, the edge cases companies worry about — hallucinations, liability, integration with legacy systems — keep mounting without anyone setting out guidelines for how to think about them.
How Adoption Should Work
When Parandeh consults, he reframes the problem. Instead of “can we use AI?” he asks “which tasks can AI safely assist with?”
“The best way to use AI is not necessarily to replace humans, but to assist humans. AI-assisted programs and software are the best use of AI, especially in safety-critical applications where AI can reduce a layer of human error.”
This flips the frame. Instead of “replace this person with an AI,” think “how can AI reduce the human error in what this person does?”
A requirements engineer manually checking 500 requirements documents for consistency is error-prone and slow. An AI that flags potential inconsistencies, paired with an engineer who reviews them, is fast and reliable. That’s assisted, not replaced.
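The assisted workflow is easy to picture in code. The sketch below is a deliberately simple, hypothetical pre-screen (the requirement texts and IDs are invented, and a real system would use an LLM or an NLP model rather than a regex): the machine flags candidate contradictions, and the engineer remains the decision-maker.

```python
import re
from itertools import combinations

# Hypothetical mini-corpus: id -> requirement text. In practice this
# would come from a requirements management export (e.g. ReqIF or CSV).
REQUIREMENTS = {
    "REQ-101": "The braking system shall engage within 200 ms of command.",
    "REQ-102": "The braking system shall not engage within 200 ms of command.",
    "REQ-103": "The door controller shall log every open event.",
}

def flag_potential_conflicts(reqs):
    """Flag requirement pairs that look contradictory: identical clauses
    where one says 'shall' and the other 'shall not'. Every flag goes to
    a human reviewer; nothing is auto-rejected."""
    flags = []
    for (id_a, text_a), (id_b, text_b) in combinations(reqs.items(), 2):
        neg_a = "shall not" in text_a.lower()
        neg_b = "shall not" in text_b.lower()
        # Normalise the modal away so the remaining clauses can be compared.
        core_a = re.sub(r"shall( not)?", "shall", text_a.lower())
        core_b = re.sub(r"shall( not)?", "shall", text_b.lower())
        if core_a == core_b and neg_a != neg_b:
            flags.append((id_a, id_b))
    return flags

for pair in flag_potential_conflicts(REQUIREMENTS):
    print("Review:", pair)  # -> Review: ('REQ-101', 'REQ-102')
```

The point of the design is in the last loop: the tool produces a review queue, not a verdict, which is exactly the “AI reduces a layer of human error” framing.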
“You want to make sure that for the assignments, you assume they’re going to use AI, so your expectation of the output increases. When I was a head of engineering, I would give people a task to build a dashboard on an API that fetches data, or build a data pipeline. I would assume they’re going to use AI. So my expectation was, first of all, they’re going to do it faster. That’s the first expectation.”
But faster output with lower quality is worse. So the second expectation is quality. The AI is a tool that frees up cognitive space for thinking about architecture, edge cases, and implications.
The Institutional Problem
The real blocker isn’t technology. It’s organizational inertia.
“These companies listen. They basically want someone to have enough AI expertise to teach them or to consult. But the gatekeeping is high. They want formal qualifications. They want someone who is chartered and certified. That’s the market I’m going after.”
This makes sense for safety-critical work. It also makes adoption slower. A manufacturing company can’t just hire an AI engineer. They need someone who understands manufacturing, AI, safety, and has the credibility to influence engineering leadership.
The shortage of that expertise is real. Parandeh fills it, but he’s one person. Thousands of companies are waiting for permission to move forward.
The Acceleration Window
There’s a window where adoption will tip. Once enough case studies exist, once training programs normalize, once the professional institutions endorse it, the caution becomes competitive disadvantage.
“The way forward is through training, through professional institutions, and through case studies. Show these companies examples where it worked, and worked safely. Show them how other companies do it. Remove the mystique.”
The companies moving now — Transport for London, Rolls-Royce, Boeing — will have learned the patterns. They’ll have the culture. They’ll have people trained in how to use AI safely.
The companies that wait for perfect clarity will find they’ve lost three years of productivity to the ones that figured it out.
“I think in these areas, I am placing bets on products that can be very successful if you integrate AI correctly as an AI assistant layer. An enterprise requirements management product, something that helps with analyzing and managing requirements — that’s where I see the opportunity.”
The gap exists. The companies that close it first will have an advantage that compounds.
FAQ
How far behind is heavy industry on AI adoption?
Based on what Parandeh sees in engineering, aerospace, rail, and construction, most heavy industry companies are 2-3 years behind software in adopting AI. While software engineers are shipping AI-assisted code review and testing, many engineering companies still have blanket bans on LLMs or bar them from work machines, pushing use onto personal devices. The gap is real and widening.
Why are manufacturing and engineering companies more cautious than software companies?
In software, a mistake means a support ticket. In manufacturing, a mistake means a failed bridge, a derailed train, or equipment failure. The liability is orders of magnitude higher. That appropriate caution is rational. The problem is when caution becomes institutional paralysis and stops all AI use instead of carefully enabling the safe uses.
Can you use AI in safety-critical engineering work?
Yes, but as an assistant, not a replacement. A requirements analysis AI that flags inconsistencies which an engineer reviews is safe. An AI that autonomously designs a bridge is not. The question isn’t whether to use AI, but where it can reduce human error while keeping humans in control.
What’s the cost of this 2-3 year lag?
Slower development cycles, higher manual labor costs, higher error rates in routine tasks, and competitive disadvantage vs. companies integrating AI. When competitors have AI-assisted design and analysis, companies still doing it by hand are slower and more expensive.
How do I convince my CFO that AI adoption is worth the risk?
Show case studies: Rolls-Royce using AI for maintenance, Boeing for quality, Transport for London for safety. These aren’t startups taking bets. They’re established companies in safety-critical industries that found safe ways to use AI. If they can do it, your industry can too. Focus on assisted use (AI helps humans, humans decide) not autonomous use.
What’s the difference between using AI in software engineering vs. manufacturing engineering?
Software engineering has been AI-forward because the cost of failure is low. A bad commit gets reverted. Manufacturing can’t revert a design error. So caution makes sense. But that caution shouldn’t block AI from assisting where it’s safe (documentation, analysis, testing). The companies that figure out the right guardrails will win.
Who should lead AI adoption in a manufacturing company?
Someone with both technical credibility and manufacturing context. Parandeh himself is chartered in both mechanical and software engineering, which is rare. A CTO or Chief Engineer who understands the specific risks in your industry is better than a generic AI consultant. The person has to earn trust within the engineering culture.
How do I train engineers to use AI safely?
Start with bounded use cases: AI assists with documentation, analysis, testing, requirements checking — things where the engineer remains the decision-maker. Train them on what AI can and can’t do. Be explicit about which use cases are approved and which aren’t. It’s not one training session — it’s culture change. Parandeh runs this through professional institutions because they carry credibility in the industry.
What’s the first step for a manufacturing company that’s behind on AI?
First step is education and awareness, not implementation. Train engineers on AI capabilities and limitations. Second is identifying which tasks can safely be AI-assisted (documentation, analysis, testing). Third is piloting with real engineers doing real work, not a lab experiment. Fourth is measuring value and adjusting. It’s a process, not a project.
Are there safety regulations that prohibit AI use in your industry?
Not blanket prohibitions. But there are requirements around traceability, reproducibility, and verification. An AI-assisted design has to be verifiable against requirements. An AI-generated analysis has to be auditable. Most regulations don’t specifically prohibit AI — they prohibit untraceability, which some AI workflows suffer from. The companies winning are making AI use traceable and verifiable.
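One minimal way to make an AI-assisted check traceable is to record, for every flag, the prompt, the model output, the human reviewer, and their decision, with a content hash so an auditor can detect later edits. The sketch below is illustrative only — the field names and the `j.smith` reviewer are invented, not from any standard or regulation.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(prompt, model_output, reviewer, decision):
    """Build a tamper-evident record of one AI-assisted check.
    Field names are illustrative, not from any compliance standard."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,              # what the model was asked
        "model_output": model_output,  # what the model returned
        "reviewer": reviewer,          # the human who made the final call
        "decision": decision,          # 'accepted' / 'rejected' / 'edited'
    }
    # Hash the canonical serialization so after-the-fact edits are detectable.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["sha256"] = hashlib.sha256(payload).hexdigest()
    return entry

log = []
log.append(audit_record(
    prompt="Check REQ-101 vs REQ-102 for conflicts",
    model_output="Potential contradiction flagged",
    reviewer="j.smith",
    decision="accepted",
))
```

An auditor verifies an entry by removing the `sha256` field, re-serializing, and re-hashing; a mismatch means the record was altered after the human signed off.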
What’s the biggest risk in moving too fast on AI adoption?
Using AI autonomously in safety-critical decisions. An autonomous design tool, an autonomous analysis system, autonomous anything that impacts safety. The risk isn’t AI itself. It’s autonomy. Keep humans in the loop and the risk becomes manageable. Move fast with assisted workflows. Move slowly with autonomous ones.
Full episode coming soon
This conversation with Ali Parandeh is on its way. Check out other episodes in the meantime.
Visit the Channel