The Cobot Conundrum: How AI is Rewriting the Rules of Robot Safety in the Age of Recycling
Imagine a factory floor from thirty years ago. It was a loud, rhythmic ballet of sparks and steel. At the center of this dance were the industrial robots: massive, articulated arms of orange or yellow metal, welding car chassis or lifting engine blocks with terrifying speed. But there was always a strict rule governing these machines, a rule enforced by physical cages, light curtains, and pressure mats.
The rule was simple: Humans stay out.
If a human worker crossed the yellow line, the system tripped, and the robot froze instantly. Safety was binary. It was Us versus Them.
Fast forward to today, and that paradigm is crumbling. We are entering the era of Industry 5.0, a vision of manufacturing that places the human being back at the center of the production process. The cages are coming down. The robots are getting smaller, smarter, and more sensitive. We call them Cobots (Collaborative Robots). They are designed to work shoulder-to-shoulder with us, handing us tools, holding heavy parts while we screw them in, and acting as intelligent partners rather than mindless automatons.
But this new proximity brings a terrifying new variable into the safety equation: Uncertainty.
When a robot is locked in a cage, its environment is predictable. When a robot is working next to a human, specifically in the chaotic world of disassembly (taking things apart for recycling), nothing is predictable. A screw might be rusted. A battery might be leaking. The human might drop a wrench or move unexpectedly.
How do we keep people safe when the variables are infinite?
A groundbreaking new study published in Robotics and Computer-Integrated Manufacturing by researchers at Polytechnique Montreal proposes a futuristic solution. They suggest that the only way to manage this complexity is to give the robot a brain capable of reasoning about safety in real time. By combining classic safety engineering with the cutting-edge power of Large Language Models (LLMs) and Knowledge Graphs, they have created a system that doesn’t just follow rules; it understands risks.
This is the story of how AI is learning to keep us safe from the machines we built to help us.
- 90% of incidents occur during non-routine tasks: while cobots are safe during normal operation, industry data suggests that 90% of cobot-related incidents happen during programming, maintenance, or troubleshooting. (Source: PatentPC, Industry Safety Analysis, 2025)
- 86% of failures are human error: despite the focus on mechanical safety, reports indicate that 86% of cobot incidents are attributed to human error rather than mechanical robot failure. (Source: PatentPC, Occupational Safety Reports)
The Wild West of Disassembly
To understand why this research is so vital, we first have to understand the unique hell that is disassembly.
Building a car (assembly) is relatively easy for a robot. The parts are brand new, clean, and arrive in the exact same orientation every time. The robot knows that the bolt is at coordinates (x, y, z). It moves there, drives the bolt, and repeats.
Disassembly is the opposite.
Imagine a recycling center processing Electric Vehicle (EV) batteries.
- Condition: One battery might be pristine; the next might be dented from a crash.
- Corrosion: Fasteners might be stripped, rusted, or covered in grime.
- Modifications: Maybe a mechanic modified the casing five years ago and didn’t document it.
- Hazards: We are dealing with high-voltage electricity, toxic chemicals, and heavy loads.
In this environment, a pre-programmed robot is useless. It tries to unscrew a bolt that isn’t there, or it grips a brittle plastic casing too hard and shatters it. This is why humans are still essential. We have the dexterity to handle the unexpected. But we are also fragile.
The Human Cost
The paper highlights that Human-Robot Collaboration (HRC) in disassembly is fraught with ergonomic and physical risks.
- Musculoskeletal Disorders: Operators often have to twist their bodies into awkward postures to reach components.
- Cognitive Load: The worker isn’t just doing their job; they are constantly watching the robot, trying to predict its next move. This mental fatigue leads to mistakes.
- The Crush Risk: Even safe cobots can trap a human hand against a hard surface if the sensors aren’t perfectly tuned to the specific context.
We need robots to help lift the heavy stuff and handle the toxic materials, but we can’t trust them to improvise. Until now.
The Old Guard of Safety (and Why It’s Failing)
Before we look at the AI solution, we must respect the giants of safety engineering that got us here. The industry relies on two major methodologies to assess risk. If you work in engineering, these are your bread and butter. If not, think of them as the checklist on steroids.
1. FMEA (Failure Mode and Effects Analysis)
FMEA is a bottom-up approach. You look at every single part of a machine and ask, “What happens if this breaks?”
- The Component: The robot gripper.
- The Failure Mode: It loses pressure.
- The Effect: The battery module falls on the worker’s foot.
FMEA assigns a Risk Priority Number (RPN), the product of three factors: Severity × Occurrence × Detectability.
It’s systematic and thorough. However, the researchers point out a fatal flaw: It is static. An FMEA is a document written by humans before the factory even opens. It lives in a binder (or a PDF). It cannot react to the fact that today the floor is slippery, or today the robot is handling a different chemical.
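To make the arithmetic concrete, here is a minimal Python sketch of the RPN calculation. The 1-to-10 rating scales follow the usual FMEA convention, and the example scores for the dropped-battery scenario are purely illustrative.

```python
# Minimal sketch of an FMEA Risk Priority Number (RPN) calculation.
# The 1-10 scales are the usual FMEA convention; example values are illustrative.

def risk_priority_number(severity: int, occurrence: int, detectability: int) -> int:
    """RPN = Severity x Occurrence x Detectability, each rated 1-10."""
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 10:
            raise ValueError("each FMEA factor must be rated between 1 and 10")
    return severity * occurrence * detectability

# Example: the gripper loses pressure and a battery module falls.
print(risk_priority_number(severity=9, occurrence=3, detectability=4))  # 108
```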
2. STPA (System-Theoretic Process Analysis)
STPA is a newer, top-down approach based on systems theory. Instead of looking at broken parts, it looks at Unsafe Control Actions (UCAs). It views safety as a control problem, not a failure problem.
- Scenario: The robot gripper is working perfectly (no mechanical failure), but the software tells it to open too early.
- Result: The battery falls.
STPA is brilliant at finding these logic errors and interaction problems, which are common in complex software systems. But like FMEA, it is manual, resource-intensive, and relies heavily on human experts.
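What does an Unsafe Control Action look like in code? Here is a hedged sketch: the gripper hardware is perfectly healthy, but the command is unsafe in the current context. The state fields and the rule itself are hypothetical simplifications, not the paper’s implementation.

```python
# Illustrative STPA-style check: a correct command becomes unsafe in context.
# Field names and the rule itself are hypothetical, not from the paper.

from dataclasses import dataclass

@dataclass
class ProcessState:
    gripper_holding_load: bool  # is the gripper currently carrying a module?
    payload_secured: bool       # is the module supported from below?

def is_unsafe_control_action(command: str, state: ProcessState) -> bool:
    """'open_gripper' is a UCA if issued while an unsupported load is held."""
    return (
        command == "open_gripper"
        and state.gripper_holding_load
        and not state.payload_secured
    )

state = ProcessState(gripper_holding_load=True, payload_secured=False)
print(is_unsafe_control_action("open_gripper", state))  # True -> block the command
```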
The Gap
The paper argues that neither of these methods can keep up with Industry 5.0. In a dynamic disassembly line, the risks change every minute. We need a safety officer who never sleeps, knows every regulation by heart, and can watch the robot in real-time.
Since we can’t clone human safety experts, we turned to Artificial Intelligence.
Enter the Large Language Model
We are all familiar with LLMs like ChatGPT. They can write poetry, debug code, and summarize history. But can they prevent an industrial accident?
The researchers at Polytechnique Montreal explored a fascinating question: Can we teach an LLM to think like a safety engineer?
The Problem with Vanilla AI
If you ask a standard GPT model, “How do I safely take apart this EV battery with a robot?”, it will give you a very confident answer. It might say:
“Ensure the robot holds the battery firmly and use a wrench to loosen the terminals.”
This sounds good, right? Wrong.
Standard LLMs suffer from Hallucinations. They lack specific domain knowledge. In this hypothetical answer, the AI failed to mention ISO/TS 15066 (the standard for collaborative robot force limits). It didn’t mention checking for thermal runaway. It didn’t mention high-voltage disconnect protocols.
In a chat interface, a bad answer is annoying. On a factory floor, a bad answer is lethal.
The Solution: RAG + Knowledge Graph
To fix this, the researchers didn’t just ask the AI. They built a sophisticated architecture involving Retrieval-Augmented Generation (RAG) and a Knowledge Graph (KG).
Let’s break down these buzzwords because they are the future of industrial AI.
1. Retrieval-Augmented Generation (RAG)
Imagine taking a test.
- Standard LLM: Taking the test from memory. You might remember a lot, but you might also make things up to fill the gaps.
- RAG: Taking the test with an open textbook next to you. Before answering, you look up the specific chapter, find the exact paragraph, and then formulate your answer based on that text.
The researchers fed the system a massive library of academic papers, accident reports, and technical manuals.
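Here is a deliberately tiny sketch of that retrieve-then-answer loop. The three-document corpus, the keyword-overlap retriever, and the stubbed-out LLM call are all placeholders; a production pipeline would use embedding-based retrieval over the full document library.

```python
# Conceptual RAG loop: retrieve relevant safety text first, then ground the
# prompt in it. Corpus, retriever, and the LLM stub are placeholders.

corpus = {
    "iso_ts_15066": "ISO/TS 15066 limits force and speed for collaborative robots.",
    "thermal_runaway": "Swollen or heated cells indicate thermal runaway risk.",
    "hv_disconnect": "De-energize the high-voltage bus before touching terminals.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    words = set(query.lower().split())
    ranked = sorted(corpus.values(), key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:k]

def answer_with_llm(prompt: str) -> str:
    return f"[an LLM would answer grounded in:\n{prompt}]"  # stand-in for an API call

query = "How should the robot grip a swollen battery cell?"
context = "\n".join(retrieve(query))
print(answer_with_llm(f"Context:\n{context}\n\nQuestion: {query}"))
```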
2. The Knowledge Graph (KG)
This is the secret weapon. A Knowledge Graph is a structured web of facts. It maps relationships between concepts.
- Concept A: “High Voltage Cable”
- Relationship: “Requires”
- Concept B: “Dielectric Gloves”
- Relationship: “Governed by”
- Concept C: “ISO 10218-2”
By forcing the AI to consult this structured web of logic derived from ISO and IEC standards, the system cannot simply guess. It is anchored in regulatory reality.
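In code, a Knowledge Graph reduces to queryable triples. Here is a minimal sketch using the example facts above; a real KG derived from ISO and IEC standards would be far larger and typically stored in a graph database.

```python
# Sketch of the subject-relation-object triples a safety Knowledge Graph encodes.
# Example facts mirror the bullets above; a real KG would be far larger.

triples = [
    ("High Voltage Cable", "requires", "Dielectric Gloves"),
    ("High Voltage Cable", "governed_by", "ISO 10218-2"),
    ("Collaborative Force Limit", "governed_by", "ISO/TS 15066"),
]

def query(subject: str, relation: str) -> list[str]:
    """Return every object linked to `subject` by `relation`."""
    return [o for s, r, o in triples if s == subject and r == relation]

# Anchor a recommendation in the graph instead of letting the model guess:
print(query("High Voltage Cable", "requires"))     # ['Dielectric Gloves']
print(query("High Voltage Cable", "governed_by"))  # ['ISO 10218-2']
```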
The Methodology - Building the Safety Brain
The study, Proactive safety reasoning in human-robot collaboration in disassembly through LLM-augmented STPA and FMEA, tested four different configurations to see which one was the smartest safety officer.
The Four Contenders
- TF-IDF (Term Frequency-Inverse Document Frequency): The old-school search method. It looks for keywords: if you ask about Battery, it finds documents containing the word Battery. Simple, but blind to context (see the sketch after this list).
- Fine-tuned LLM: Taking a model (like GPT-3.5) and training it specifically on safety data. It changes the model’s internal weights.
- RAG: The Open Book method we discussed above.
- RAG + KG: The Open Book method combined with the structured logic of the Knowledge Graph.
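To see why plain keyword matching loses, consider this short scikit-learn sketch (the two documents and the query are invented). The query describes the same hazard as the corpus but shares no keywords with it, so TF-IDF ranks it as irrelevant.

```python
# Why plain TF-IDF retrieval falls short: it matches words, not meaning.
# Documents and query are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Battery modules must be discharged before removal.",
    "Thermal runaway can be triggered by puncturing a cell.",
]
query = ["Is it safe to pierce the pack?"]  # same hazard, different words

vec = TfidfVectorizer()
doc_matrix = vec.fit_transform(docs)
scores = cosine_similarity(vec.transform(query), doc_matrix)
print(scores)  # [[0. 0.]] -- no shared keywords, so TF-IDF sees no relevance
```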
The Models
They didn’t just test one AI. They put the heavyweights against the open-source underdogs:
- Proprietary: GPT-3.5 Turbo, GPT-4o, GPT-4.1.
- Open Source: Qwen2.5 (3B) and Ministral (3B).
The Case Study: EV Battery Disassembly
To prove this works, they applied it to a real-world nightmare scenario: Disassembling an Electric Vehicle Battery Module.
This task has everything:
- Chemical Hazard: Electrolyte leakage.
- Electrical Hazard: High voltage shock.
- Mechanical Hazard: Heavy modules, crushing risks.
- Explosion Risk: Thermal runaway.
The system had to identify Unsafe Control Actions (e.g., “Robot applies excessive force to a swollen battery cell”) and propose Mitigation Strategies (e.g., “Engage impedance control mode per ISO/TS 15066”).
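The shape of one such finding might look like the record below; the field names and values are illustrative paraphrases of the examples above, not the paper’s actual output schema.

```python
# Hypothetical shape of one system output: a UCA paired with a
# standards-grounded mitigation. Field names are illustrative.

from dataclasses import dataclass

@dataclass
class SafetyFinding:
    unsafe_control_action: str
    hazard: str
    mitigation: str
    standard: str  # the clause that anchors the advice

finding = SafetyFinding(
    unsafe_control_action="Robot applies excessive force to a swollen battery cell",
    hazard="Cell rupture leading to thermal runaway",
    mitigation="Engage impedance control mode and cap contact force",
    standard="ISO/TS 15066",
)
print(finding)
```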
The Results - A New Champion
The RAG + KG configuration was the undisputed champion. It achieved the highest scores across almost every metric. But how do we measure success in safety? The researchers couldn’t just use standard language metrics (like “Did the sentence flow well?”). They had to invent new safety metrics.
New Metrics for a New Era
- Hazard Recall: Did the AI find all the hazards? Result: The system scored 92%. It found almost every hidden danger that human experts had identified.
- Compliance Precision: Did the AI cite the correct laws and standards? Result: 97%. When it told the robot to slow down, it cited the exact clause in ISO 10218.
- Safety Violation Rate: Did the AI ever give bad advice that could hurt someone? Result: Zero. This is the most critical number. The RAG + KG system essentially eliminated the hallucination risk for dangerous actions.
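As a rough sketch, the standard definitions behind these three metrics look like the following; the paper’s exact formulations may differ.

```python
# Back-of-the-envelope versions of the three safety metrics.
# These are the standard recall/precision/rate definitions; the paper's
# exact formulations may differ.

def hazard_recall(found: set[str], expert_identified: set[str]) -> float:
    """Share of expert-identified hazards the system also found."""
    return len(found & expert_identified) / len(expert_identified)

def compliance_precision(citations: list[tuple[str, bool]]) -> float:
    """Share of cited standards/clauses that are actually correct."""
    return sum(ok for _, ok in citations) / len(citations)

def safety_violation_rate(recommendations: int, violations: int) -> float:
    """Share of recommendations that would be unsafe to follow."""
    return violations / recommendations

experts = {"thermal runaway", "HV shock", "crush", "electrolyte leak"}
print(hazard_recall({"thermal runaway", "HV shock", "crush"}, experts))  # 0.75
```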
Why did the others fail?
- Fine-tuning alone wasn’t enough. The models would sometimes forget specific regulations or drift into generic advice.
- Simple RAG was good, but sometimes retrieved irrelevant documents.
- Open Source Models (Small): The smaller models (3 Billion parameters) struggled with complex reasoning. They could identify the hazard but often failed to suggest the complex, multi-step solution required by STPA.
What This Means for the Future of Work
The implications of the Jalali Alenjareghi et al. study extend far beyond battery recycling. This framework represents a fundamental shift in how we view industrial safety.
1. Real-Time Risk Assessment
Currently, if a factory wants to change a production line, they stop everything for weeks to do a risk assessment. With this AI framework, the system could theoretically assess the safety of a new robot motion in milliseconds. This enables Agile Manufacturing, factories that can change their product daily without compromising worker safety.
2. The Democratization of Expertise
Not every recycling center can afford a team of PhD-level safety engineers. By embedding this expertise into an AI assistant, smaller facilities can operate with the same safety standards as global automotive giants. It levels the playing field.
3. Human-Centric (Industry 5.0)
The system pays special attention to Ergonomics. It doesn’t just ask, “Will the robot hit the human?” It asks, “Is the robot forcing the human to bend over repeatedly?”
By integrating standards like ISO 11228 (Ergonomics – Manual handling), the AI can direct the robot to present the part at a comfortable height for the worker. It transforms the robot from a tool into a considerate teammate.
Challenges and The Road Ahead
Is this system ready to be plugged into every robot tomorrow? Not quite. The authors are transparent about the limitations.
The Black Box Problem
Even with RAG, deep learning models can be opaque. If the AI says “Stop the line,” the operator needs to trust it. The addition of the Knowledge Graph improves Explainability (the AI can say, “I stopped the line because of Rule X in ISO Standard Y”), but building trust takes time.
Data Privacy
To work perfectly, the AI needs to see the workspace. This involves cameras and sensors monitoring the human worker. This raises significant privacy and ethical concerns. How much surveillance is acceptable in the name of safety?
Latency
Safety decisions need to happen in milliseconds. Currently, querying a massive model like GPT-4.1 takes seconds. For Emergency Stop functionality, we still need hard-wired, non-AI sensors. This AI layer is better suited for proactive planning and supervisory control than for reflex-speed reactions.
Conclusion
The research from Robotics and Computer-Integrated Manufacturing paints a hopeful picture. We are moving away from the era of dumb robots that require cages, toward wise robots that understand the context of their existence.
By augmenting classic, rigorous methods like STPA and FMEA with the fluid intelligence of LLMs and the structural strictness of Knowledge Graphs, we are creating a safety net that is both flexible and unbreakable.
In the disassembly lines of the future, the worker won’t just have a pair of gloves and a wrench. They will have an invisible, intelligent guardian watching over their shoulder, ensuring that every bolt turned and every battery lifted is done safely.
The future of manufacturing isn’t just about efficiency. It’s about empathy, encoded in silicon.
References
- Morteza Jalali Alenjareghi, Fardin Ghorbani, Samira Keivanpour, Yuvin Adnarain Chinniah, Sabrina Jocelyn, “Proactive safety reasoning in human-robot collaboration in disassembly through LLM-augmented STPA and FMEA,” Robotics and Computer-Integrated Manufacturing, Vol. 98, 2026.
- ISO 6385: Ergonomic principles in the design of work systems.
- ISO 9241 (Parts 125, 210): Ergonomics of human-system interaction.
- ISO 10075 (Parts 1, 2, 3): Mental workload principles and measurement.
- ISO 12100: Safety of machinery – General principles for design.
- ISO/TR 14121-2: Risk assessment – Practical guidance and examples of methods.
- ISO 26800: General approach to ergonomics.
- ISO 13849 (Parts 1, 2): Safety-related parts of control systems.
- ISO 13850: Emergency stop functions.
- ISO 13851: Two-hand control devices.
- ISO 13855: Positioning of safeguards (approach speeds).
- IEC 61508 & IEC 62061: Functional safety of electrical/electronic safety-related control systems.
- ISO 10218 (Parts 1, 2): Safety requirements for industrial robots and robot systems.
- ISO/TS 15066: Technical specification for collaborative robots (Cobots), defining force and speed limits for safe human interaction.
- ISO 8373: Vocabulary/Definitions for robotics.