Back to Basics is a weekly feature that highlights important but possibly overlooked information that any EHS professional should know. This week, we examine the use of computer vision to identify and mitigate the causes of workplace incidents.
It’s hard to have a discussion about technology in environment, health, and safety (EHS) without mentioning artificial intelligence (AI). One aspect of AI that’s showing promise is computer vision, which pairs cameras with AI and machine learning tools to identify and communicate the root causes of workplace incidents.
As part of its Work to Zero initiative, the National Safety Council (NSC) in 2022 published Using Computer Vision as a Risk Mitigation Tool, a white paper that studied the role of this technology in safety. It looked at four computer vision systems, assessing their ability to identify risk, personal protective equipment, and workplace violence.
“This report finds computer vision technology, paired with sophisticated risk prediction algorithms, is capable of accurate, consistent, and automatic monitoring of worker personal protective equipment (PPE), risk identification, and the detection of violence and weapons in the workplace,” according to the executive summary. “In addition, this technology can be used to help monitor fatigue, substance impairment and other impairing conditions when driving. Comprehensive software is available and easy to deploy across various industrial environments—from construction to warehousing to manufacturing—taking existing closed circuit television (CCTV) feeds and providing intuitive, actionable dashboards for safety leaders.”
Barriers to implementing computer vision tech include pricing and privacy concerns, but the report noted that the increased number of systems and packages on the market may help drive prices lower. In addition, many of the systems allow users to anonymize employee information and likeness for enhanced privacy.
What is it?
Computer vision builds on existing CCTV systems, which are widely used for security and workplace safety but are traditionally operated manually: someone must sift through hours of footage to find incidents and determine how to prevent them in the future.
Using AI, computer vision systems can be trained to identify and communicate the root causes of workplace incidents. They can recognize conditions that may lead to incidents, such as working at height, moving machinery, or unstable items on shelves, as well as items that help prevent injury, such as hard hats, high-visibility vests, and other PPE. The systems can monitor multiple workers at once and assess safety threats.
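To make the PPE-monitoring idea concrete, here is a minimal sketch of the logic a system might apply once an object-detection model has labeled a video frame. The `Detection` record, the label names, the confidence threshold, and the per-worker rule are all illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "person", "hard_hat", "hi_vis_vest" (assumed labels)
    confidence: float  # model confidence score, 0.0-1.0
    person_id: int     # hypothetical ID linking PPE detections to a worker

REQUIRED_PPE = {"hard_hat", "hi_vis_vest"}
MIN_CONFIDENCE = 0.6  # assumed cutoff: ignore low-confidence detections

def missing_ppe(detections: list[Detection]) -> dict[int, set[str]]:
    """Return, for each detected worker, the required PPE items not seen."""
    workers = {d.person_id for d in detections
               if d.label == "person" and d.confidence >= MIN_CONFIDENCE}
    seen: dict[int, set[str]] = {w: set() for w in workers}
    for d in detections:
        if d.label in REQUIRED_PPE and d.confidence >= MIN_CONFIDENCE:
            seen.setdefault(d.person_id, set()).add(d.label)
    return {w: REQUIRED_PPE - ppe for w, ppe in seen.items()
            if w in workers and REQUIRED_PPE - ppe}

# Example frame: worker 1 wears both items; worker 2 lacks a vest.
frame = [
    Detection("person", 0.95, 1), Detection("hard_hat", 0.90, 1),
    Detection("hi_vis_vest", 0.88, 1),
    Detection("person", 0.93, 2), Detection("hard_hat", 0.85, 2),
]
print(missing_ppe(frame))  # {2: {'hi_vis_vest'}}
```

In a deployed system, the detections would come from a trained model running on CCTV frames, and a flagged worker would feed into a dashboard or alert rather than a print statement.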
According to the NSC, computer vision systems can learn workers’ habits and understand what conditions lead to incidents, which in turn allows for better understanding, training, and observation to prevent future incidents. Computer vision is ideal for industries that regularly move material or use heavy machinery, such as manufacturing, logistics, construction, and industrial warehousing.
Some systems can be trained to recognize best practices and use them as a reference to assist with worker training and highlight deviations from good practices. They can also help in emergencies by noticing when a person stays in a place for an unusual amount of time and triggering an alert to the operator.
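The emergency-alert behavior described above amounts to a dwell-time check. The sketch below assumes the vision system emits time-ordered `(timestamp, person_id, zone)` observations; the 120-second threshold and the observation format are illustrative assumptions.

```python
DWELL_LIMIT_S = 120  # assumed threshold: alert if someone stays put this long

def dwell_alerts(observations):
    """Yield (person_id, zone, dwell_seconds) for stays exceeding the limit.

    observations: iterable of (timestamp_s, person_id, zone), time-ordered.
    """
    entered = {}     # person_id -> (zone, time they entered that zone)
    alerted = set()  # people already alerted for their current stay
    for ts, person, zone in observations:
        prev = entered.get(person)
        if prev is None or prev[0] != zone:
            entered[person] = (zone, ts)  # new zone: restart the clock
            alerted.discard(person)
            continue
        dwell = ts - prev[1]
        if dwell >= DWELL_LIMIT_S and person not in alerted:
            alerted.add(person)           # alert the operator once per stay
            yield person, zone, dwell

# Worker 7 lingers in the loading bay for over two minutes; worker 8 keeps moving.
obs = [(0, 7, "loading_bay"), (60, 7, "loading_bay"), (130, 7, "loading_bay"),
       (140, 8, "aisle_3"), (150, 8, "aisle_4")]
print(list(dwell_alerts(obs)))  # [(7, 'loading_bay', 130)]
```

A real system would also have to handle tracking errors, such as a person briefly lost between frames, before a rule this simple could be trusted to page an operator.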
Limitations
Video quality can be an issue: many CCTV networks provide grainy, unclear feeds, which can make it difficult for AI software to identify items like PPE or to track objects. Image quality is improving, however.
Some systems may not be able to distinguish between situations like a conversation between workers that leads to a hug and an argument that leads to a physical fight, according to the report.
Additionally, the camera’s limited field of view may prevent the computer vision system from getting a full picture of the workspace.
Ethical concerns
Addressing the ethical concerns of AI use, the American Society of Safety Professionals (ASSP) in 2024 approved a set of fundamental principles regarding AI:
- Trust: AI should enhance occupational safety and health (OSH) professionals’ skills, not replace human judgment and decision-making. OSH professionals should oversee AI-driven OSH solutions and hazard remediation to ensure decisions consider context, ethics, and exposures.
- Transparency: Workers, managers, and leaders must be informed about the capabilities and limitations of AI technologies used in their work environments.
- Equity: OSH professionals should ensure that AI technologies do not aggravate existing disparities or introduce new forms of discrimination in practices related to workplace safety and health.
- Privacy: Organizations should implement safeguards to prevent unauthorized access, misuse, or exploitation of sensitive information collected by AI systems.
“By embracing AI thoughtfully and responsibly, we can harness its potential to improve workplace safety, protect workers, and create healthier work environments,” according to Defining the Role of AI in Safety by ASSP President Pamela Walaski, CSP, FASSP. “But we also know that the use of AI introduces challenges, including privacy risks and the need to manage emerging hazards.”