In brief:
- The Health and Safety Executive (HSE) has researched the widespread use of AI across UK industries, identifying numerous applications in maintenance, equipment control, safety management and worker monitoring.
- While AI offers significant safety benefits, the HSE’s findings have also highlighted new dangers, including worker over-reliance, stress and the potential for system inaccuracies or failures.
- Industry has begun to implement mitigation strategies, and the HSE will use this deepened understanding to inform its regulation of AI in workplace safety.
In detail:
The Health and Safety Executive (HSE) has lifted the lid on how AI is being used across the industries it oversees – from construction and manufacturing to offshore energy and waste – and what new dangers AI can introduce.
Its research has identified around 250 real-world cases where AI is reshaping maintenance, equipment control, safety management and worker monitoring.
Among the four big categories of use identified, AI-powered maintenance was found to lead the charge: drones using computer vision to inspect hard-to-reach areas like bridges or confined spaces, predictive maintenance analysing industrial data to flag repairs, and AI scanning images or video to catch early signs of equipment failure.
The research also identified AI in health and safety management tools, including systems analysing past accident reports to uncover hidden hazards, generative AI writing risk assessments and training materials, and large language models answering live safety questions.
In equipment control, AI was found to be managing the movements of quarry vehicles, farm machinery and warehouse robots, preventing dangerous bin lifts and optimising how process plants run.
For occupational monitoring, AI was found to be watching workers’ use of protective gear, tracking proximity to vehicles, scanning for spills and even analysing fatigue or exposure to vibration.
But while many of these new tools promise faster inspections, predictive repairs, driverless machines and automated risk assessments, the HSE’s findings have revealed a darker side too.
Anonymous industry feedback gathered by the regulator highlighted major health and safety concerns that come with handing control to algorithms.
Workers risk becoming too reliant on AI systems, the survey revealed, leading to reduced human attention and even “deskilling” of the workforce.
Some reported rising stress levels from algorithmic management and “warning fatigue” triggered by constant system alerts.
Technical fears were just as stark. Industry voices flagged inaccurate AI safety assessments, systems behaving unpredictably when pushed beyond design limits, failures to warn about real dangers – or false alarms that erode trust.
The risk of hackers breaching AI systems or biased, flawed data driving unreliable safety decisions also loomed large.
To mitigate the risks, companies surveyed said they were trialling AI in controlled conditions, using diverse data sets, encrypting systems, setting up fail-safes, training staff and auditing performance.
But challenges persist, the survey has found, from integrating AI with outdated equipment to winning over wary employees.
Looking ahead, the HSE has found that firms are aiming to increase real-time incident monitoring, expand predictive maintenance, roll out fully autonomous machines and harness AI for live building occupancy tracking.
The HSE said its deepened understanding would help it regulate AI in industrial settings and shape the future of workplace safety.