Computer Vision on the Factory Floor: What's Working and What's Overhyped
Walk through any manufacturing trade show in 2026 and you’ll see computer vision everywhere. Every booth has a demo showing cameras detecting defects, monitoring assembly lines, or tracking inventory in real time. The marketing materials promise dramatic improvements in quality, safety, and efficiency.
I’ve spent the last quarter visiting manufacturing facilities in Australia, Germany, and Japan to see what’s actually deployed versus what’s still on the show floor. The reality is more nuanced than the vendor pitches suggest — but in the right applications, the technology genuinely delivers.
Quality Inspection: The Clear Winner
If there’s one computer vision application in manufacturing that’s proven its value, it’s automated quality inspection. The economics are compelling, and the technology has reached a level of reliability that makes CFOs comfortable.
Traditional visual inspection relies on human inspectors checking parts and products against quality standards. It's slow, subjective, and inconsistent. Studies consistently show that human inspectors catch only 70-85% of defects under ideal conditions. Fatigue, distraction, and shift changes push that number lower.
Modern computer vision inspection systems — using a combination of high-resolution cameras, structured lighting, and deep learning models — routinely achieve 95-99% detection rates depending on the defect type. More importantly, they’re consistent. The system at 3am performs identically to the system at 9am.
Foxconn’s Shenzhen facility, which manufactures components for Apple and others, deployed a comprehensive vision inspection system in 2025 that reduced their escaped defect rate by 67%. Toyota’s Australian manufacturing advisory team reported similar results across several supplier facilities, with typical payback periods of 8-14 months depending on production volume and defect costs.
The technology has also become more accessible. Five years ago, deploying a vision inspection system required custom cameras, specialised lighting rigs, and months of data collection for model training. Today, companies like Cognex, Keyence, and Landing AI offer systems that can be configured for new inspection tasks in days rather than months. The combination of foundation vision models and transfer learning means you don’t need tens of thousands of training images to get started.
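To make the transfer-learning point concrete, here is a minimal NumPy sketch of why a few hundred labelled images can be enough. The idea: treat a pretrained vision model as a frozen feature extractor (assumed here to have already produced the embedding vectors) and train only a tiny logistic-regression "head" on top. All function names, dimensions, and data below are illustrative, not any vendor's actual pipeline.

```python
import numpy as np

def train_defect_head(embeddings, labels, lr=0.1, epochs=200):
    """Train a logistic-regression 'head' on frozen backbone embeddings.

    embeddings: (n_samples, dim) float array from a pretrained vision model
    labels:     (n_samples,) array of 0 (good) / 1 (defective)
    Returns (weights, bias) for the head; the backbone is never updated.
    """
    n, d = embeddings.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        z = np.clip(embeddings @ w + b, -30, 30)  # clip to avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))              # sigmoid probabilities
        w -= lr * (embeddings.T @ (p - labels) / n)
        b -= lr * np.mean(p - labels)
    return w, b

def predict_defect(embeddings, w, b, threshold=0.5):
    """Classify each embedding as defective (1) or good (0)."""
    z = np.clip(embeddings @ w + b, -30, 30)
    p = 1.0 / (1.0 + np.exp(-z))
    return (p >= threshold).astype(int)

# Illustrative: two synthetic clusters standing in for 'good' vs 'defective'
# embeddings -- only a couple of hundred samples, not tens of thousands.
rng = np.random.default_rng(0)
good = rng.normal(-1.0, 0.5, size=(100, 8))
bad = rng.normal(1.0, 0.5, size=(100, 8))
X = np.vstack([good, bad])
y = np.array([0] * 100 + [1] * 100)
w, b = train_defect_head(X, y)
```

Because only the small head is trained, the labelled-data requirement shrinks from the size of the model to the size of the decision boundary.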
Safety Monitoring: Growing but Complicated
The second major application is workplace safety monitoring: cameras watch for unsafe conditions — workers entering restricted zones, missing personal protective equipment, near-miss incidents — and trigger alerts in real time.
This is where I see the strongest growth trajectory over the next two years. Workplace injuries in manufacturing remain stubbornly common. Safe Work Australia’s latest data shows that manufacturing accounted for 15% of serious workplace injury claims in 2024-25, and the rates haven’t improved significantly despite decades of safety programs.
Computer vision safety systems offer something that traditional approaches can’t: continuous, objective monitoring. They don’t get tired, they don’t look away, and they don’t make exceptions for senior staff who skip PPE requirements.
But deployment is more complicated than quality inspection for several reasons. Privacy concerns are significant — workers understandably worry about being constantly monitored. Unions in Australia and Europe have pushed back against surveillance systems, and companies need to navigate consultation requirements carefully. The most successful deployments I’ve seen focus on zone monitoring and equipment detection rather than individual behaviour tracking, which reduces privacy objections while still capturing the highest-value safety alerts.
There’s also the false positive problem. Safety monitoring systems need to be extremely reliable, because excessive false alarms erode trust and lead to alert fatigue. A system that correctly identifies PPE violations 90% of the time but generates false positives 10% of the time will be ignored within weeks. The threshold for deployment-ready accuracy is higher than most vendors acknowledge.
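The arithmetic behind alert fatigue is worth spelling out, because genuine violations are rare relative to the number of frames checked. A quick sketch with invented-but-plausible numbers (all parameters below are illustrative assumptions, not measured data):

```python
def alert_mix(checks_per_shift, violation_rate, sensitivity, false_positive_rate):
    """Expected counts of true vs false alerts per shift, and the
    resulting precision (fraction of alerts that are real)."""
    violations = checks_per_shift * violation_rate
    clean = checks_per_shift - violations
    true_alerts = violations * sensitivity         # real violations caught
    false_alerts = clean * false_positive_rate     # clean frames flagged anyway
    precision = true_alerts / (true_alerts + false_alerts)
    return true_alerts, false_alerts, precision

# 10,000 camera checks per shift, 1% of them genuine PPE violations,
# a detector that catches 90% of violations but false-alarms on 10%
# of clean frames:
true_alerts, false_alerts, precision = alert_mix(10_000, 0.01, 0.90, 0.10)
# 90 true alerts are buried under 990 false ones: only 1 in 12 alerts is real.
```

This is why the "90% accurate" number in a vendor pitch tells you little on its own: with rare events, the false-positive rate on the overwhelming majority of clean frames dominates what operators actually experience.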
Process Optimisation: Still Early
The third application category — using vision systems to optimise production processes rather than just monitor them — is where I see the most hype relative to reality.
The pitch is appealing: cameras observe the entire production process, AI analyses the data, and the system recommends (or automatically implements) changes to improve throughput, reduce waste, or optimise resource allocation. It’s the vision of a “self-optimising factory.”
Some specific use cases are working. Predictive maintenance based on visual inspection of equipment condition — detecting wear patterns, unusual vibrations (via high-speed cameras), or fluid leaks — is delivering real value at companies like Siemens and ABB. Assembly verification — confirming that each step of a complex assembly process was completed correctly before advancing to the next — is another winner.
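The logic behind assembly verification is simple enough to sketch: the vision system emits a stream of detected step labels for each unit, and the verifier checks that every required step appears, in order, before the unit advances. The step names and tolerance for extra detections below are illustrative, not a description of any specific product.

```python
def verify_assembly(observed_steps, required_sequence):
    """Check that every required assembly step appears, in order, among
    the step labels a vision system detected for one unit.

    Extra observed detections are tolerated; a missing or out-of-order
    required step fails the unit. Returns (ok, first_failed_step).
    """
    remaining = iter(observed_steps)
    for step in required_sequence:
        # Scan forward through what the cameras saw; if this required
        # step never shows up after the previous one, the unit fails.
        if not any(detected == step for detected in remaining):
            return False, step
    return True, None

ok, failed = verify_assembly(
    ["place_gasket", "torque_bolts", "attach_cover", "apply_label"],
    ["place_gasket", "attach_cover", "apply_label"],
)
# ok is True: all required steps were seen in order
```

The practical value is the gate itself: a unit with a missed step is caught at that station, not three stations later when rework is far more expensive.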
But the broad “AI optimises everything” narrative remains aspirational. Manufacturing processes are complex systems with interdependencies that aren’t always visible to cameras. Temperature, humidity, material batch variations, and dozens of other factors influence outcomes in ways that vision alone can’t capture. The most effective process optimisation combines vision data with sensor data, ERP data, and human expertise rather than relying on cameras as a single source of truth.
The Deployment Realities
What the trade show demos don’t show you is the unglamorous work that makes computer vision actually function in a factory environment.
Lighting consistency. This is the single biggest source of deployment failures. Factory lighting changes throughout the day, and even small variations in illumination can cause false detections. Most successful deployments use controlled, structured lighting — which means modifying the production environment, not just adding cameras.
Edge vs. cloud processing. Real-time inspection requires inference speeds measured in milliseconds. Cloud processing introduces latency that’s unacceptable for high-speed production lines. Edge computing hardware — NVIDIA Jetson, Intel Movidius, and similar platforms — handles this, but adds complexity to deployment and maintenance.
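A back-of-envelope latency budget shows why the cloud round trip is usually a non-starter. The line speed and overhead figures below are assumptions chosen for illustration:

```python
def inference_budget_ms(parts_per_minute, camera_ms, overhead_ms):
    """Milliseconds left for model inference per part on an in-line station.

    Assumes one inspection per part, and that capture, inference, and any
    reject actuation must all finish before the next part arrives.
    """
    cycle_ms = 60_000 / parts_per_minute
    return cycle_ms - camera_ms - overhead_ms

# A line running 300 parts/minute gives a 200 ms cycle per part. After
# ~30 ms of image capture and ~20 ms of I/O and actuation overhead,
# roughly 150 ms remains for the model -- a single round trip to a
# cloud endpoint can consume most or all of that before inference starts.
budget = inference_budget_ms(300, camera_ms=30, overhead_ms=20)
```

On-device inference keeps the whole budget local and, just as importantly, keeps it predictable; network jitter is what kills high-speed lines, not average latency.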
Integration with existing systems. A vision system that generates alerts but doesn’t connect to the factory’s MES (Manufacturing Execution System), quality management system, or maintenance platform creates extra work rather than reducing it. Integration is typically the longest phase of any deployment.
Model drift. Production conditions change. New materials, new suppliers, seasonal temperature variations, equipment wear — all of these can cause a model that performed perfectly during initial deployment to degrade over months. Ongoing monitoring and retraining aren’t optional; they’re a permanent operational requirement.
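One cheap, deployable proxy for drift is simply watching how often the model flags defects compared with its historical baseline: a sustained shift with no known process change is a signal to investigate and possibly retrain. A minimal sketch (the window size and tolerance are illustrative, not recommendations):

```python
from collections import deque

class DriftMonitor:
    """Track the model's recent positive-detection rate against a baseline.

    This does not detect drift in the model's internals -- it flags the
    symptom: the flag rate wandering away from what the process normally
    produces.
    """

    def __init__(self, baseline_rate, window=500, tolerance=0.5):
        self.baseline = baseline_rate
        self.window = deque(maxlen=window)   # rolling record of recent flags
        self.tolerance = tolerance           # allowed relative deviation

    def record(self, flagged):
        """Record one inspection result; return True if drift is suspected."""
        self.window.append(1 if flagged else 0)
        if len(self.window) < self.window.maxlen:
            return False                     # not enough data yet
        rate = sum(self.window) / len(self.window)
        return abs(rate - self.baseline) > self.tolerance * self.baseline
```

In practice this sits alongside periodic re-labelling of sampled frames, because a stable flag rate can still hide a model that drifted on *which* parts it flags.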
The Practical Takeaway
Computer vision in manufacturing is real and delivering measurable results — in the right applications. Quality inspection is the most mature and lowest-risk starting point. Safety monitoring is growing rapidly but requires careful attention to privacy and accuracy requirements. Process optimisation has enormous potential but is still early for most factories.
If you’re evaluating computer vision for your manufacturing operations, start with a single, well-defined inspection task. Prove the value there. Build internal expertise. Then expand systematically. The factories getting the best results aren’t the ones that deployed vision everywhere simultaneously. They’re the ones that started small, learned what actually works in their specific environment, and scaled deliberately.
The technology is ready. The question is whether your organisation is ready for the disciplined, iterative approach that makes it work.