Evolution of Safety

Yesterday I was pleased to speak at the Canadian Society of Safety Engineering (CSSE) Fraser Valley branch dinner. I chose to change the title from the Future of Safety to the Evolution of Safety. Slides are available in the Downloads or click here. The key messages in the four takeaways are listed below.

1. Treat workers not as problems to be managed but as solutions to be harnessed.

Many systems have been designed with the expectation that humans will perform perfectly, like machines. It’s a consequence of the Systems Thinking era based on an Engineering paradigm. Because humans are error-prone, we must be managed so that we don’t mess up the ideal flow of processes using technologies we are trained to operate.

Human & Organizational Performance (HOP) Principle #1 acknowledges people are fallible. Even the best will make mistakes. Despite the perception humans are the “weakest link in the chain”,  harnessing our human intelligence will be critical for system resilience, the capacity to either detect or quickly recover from negative surprises.

As noted in the MIT Technology Review, “we’re seeing the rise of machines with agency, machines that are actors making decisions and taking actions autonomously…” That means things are going to get a lot more complex with machines driven by artificial intelligence algorithms. Smart devices behaving in isolation will create conflicting conditions that enable danger to emerge. Failure will occur when a tipping point is passed.

MIT Professor Nancy Leveson believes technology has advanced to such a point that the routine problem-solving methods engineers had long relied upon no longer suffice. As complexity increases within a system, linear root cause analysis approaches lose their effectiveness. Things can go catastrophically wrong even when every individual component is working precisely as its designers imagined. “It’s a matter of unsafe interactions among components,” she says. “We need stronger tools to keep up with the amount of complexity we want to build into our systems.” Leveson developed her insights into an approach called System-Theoretic Process Analysis (STPA), which has rapidly spread through private industry and the military. It would be prudent for Boeing to apply STPA in its 737 Max 8 investigation.

So why is it imperative that workers be seen as resourceful solutions? Because complex systems will require controls that use the immense power of the human brain to quickly recognize hazard patterns, make sense of bad situations created by ill-behaving machines, and swiftly apply heuristics to prevent plunging into the Cynefin Chaotic domain.

2. When investigating, focus on the learning gap between normal deviation and the hazard, and avoid the blaming counterfactual.

If you read or hear someone say:
“they shouldn’t have…”
“they could have…”
“they failed to…”
“if only they had…”
it’s a counterfactual. In safety, counterfactuals are huge distractions because they focus on what didn’t happen. As Todd Conklin explains, it’s the gap between the black line (work-as-imagined) and the blue line (work-as-done). The wavy blue line indicates that a worker must adapt performance in response to varying conditions. The changes hopefully enable safety to emerge so that the job can be successfully completed. In the Safety-II view, this is deemed normal deviation. Our attention should not be on “what if” but on what did happen.

The counterfactual provides an easy path for assigning blame. “If only Jose had done it this way, then the accident wouldn’t have happened.”  Note to safety professionals engaged in accident investigations: Don’t give decision makers bullets to blame but information to learn. The learning from failure lessons are in the gap between the blue line and the hazard line.

3. Be a storylistener and ask storytellers:
How can we get more safety stories like these, fewer stories like those?

I described the ability to generate 2D contour maps from safety stories told by the workforce. The WOW factor is that we can now visually see safety culture as an attitudinal map. We can plot a direction towards a safety vision and monitor our progress. Click here for more details.
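
For readers curious what such a map might look like in practice, here is a minimal, hypothetical sketch in Python. It is not the actual method behind the maps described above: the two attitudinal axes, the scores, and the data are all invented for illustration. The sketch simply shows the plotting idea, where each dot is a story and the contours show where attitudes cluster.

```python
# Hypothetical sketch: plot safety stories as dots on two invented attitudinal
# axes and overlay a density contour map. Axes, scores, and data are illustrative.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)

# Invented example: 200 stories, each scored 0..1 on two hypothetical axes.
compliance_vs_adaptation = rng.beta(2, 2, 200)   # hypothetical x-axis
blame_vs_learning = rng.beta(2, 3, 200)          # hypothetical y-axis

# Bin the story scores into a 2D histogram to form the "terrain" of the map.
counts, xedges, yedges = np.histogram2d(
    compliance_vs_adaptation, blame_vs_learning, bins=12, range=[[0, 1], [0, 1]]
)
xcenters = (xedges[:-1] + xedges[1:]) / 2
ycenters = (yedges[:-1] + yedges[1:]) / 2

fig, ax = plt.subplots(figsize=(6, 5))
ax.contourf(xcenters, ycenters, counts.T, levels=8, cmap="Blues")  # attitude density
ax.scatter(compliance_vs_adaptation, blame_vs_learning, s=8, c="k", alpha=0.4)  # one dot per story
ax.set_xlabel("compliance <-> adaptation (hypothetical axis)")
ax.set_ylabel("blame <-> learning (hypothetical axis)")
ax.set_title("Safety stories as an attitudinal contour map (illustrative)")
plt.show()
```

The real maps come from workforce stories rather than random numbers; the point of the sketch is only that the clusters of dots form a picture of culture, and watching those clusters move over time is the progress signal.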

Stories are powerful. Giving the worker a voice to be heard is an effective form of employee engagement. How safety professionals use the map to solve safety issues is another matter. Will it be Ego or Eco? It depends. Ego says I must find the answer. Eco says we can find the answer.

Ego thrives in hierarchy, an organizational structure adopted from the Church and Military. It works in the Order system, the Obvious and Complicated domains of the Cynefin Framework. Just do it. Or get a bunch of experts together and direct them to come up with viable options. Then make your choice.

Safety culture resides in the Cynefin Complex domain. No one person is in charge. Culture emerges from the relationships and interactions between people, ideas, events, and as noted above, machines driven by AI algorithms. Eco thrives on diversity, collaboration, and co-evolution of the system.

An emerging role for safety professionals is helping Ego-driven decision makers understand they cannot control human behaviour in a complex adaptive system. What they control are the system constraints imposed as safety policies, standards, and rules. They also set direction when expressing they want to hear “more safety stories like these, fewer stories like those.”

And lest we forget, it’s not all about the workers at the front line. Decision makers and safety professionals are also storytellers. What safety stories are you willing to share? Where would your stories appear as dots on the safety culture map?

4. Better to be a chef and not a recipe follower.

If Safety had a cookbook, it would be full of Safety Science recipes and an accumulation of hints and tips gathered over a century of practice. It would be a mix of still-useful recipes, questionable ones (pseudoscience), emerging ones, and recipes given myth status by Carsten Busch.

In the Cynefin Complex and Chaotic domains, there are no recipes to follow, so we rely on heuristics to make decisions. Some are intuitive and based on past successes: “It’s worked before so I’ll do it again.” These work until they don’t, because conditions that existed in the past no longer hold true. More resilient heuristics are backed by natural science laws and principles, so they withstand the test of time.

By knowing the art and principles of cooking, a chef accepts the challenge of ambiguity and can adapt to unanticipated conditions such as missing ingredients, wrong equipment, last-minute diet restrictions, and so on.

It seems logical that safety professionals would want to be chefs. That’s why I’m curious that, in the study An ethnography of the safety professional’s dilemma: Safety work or the safety of work?, one highlight is “Safety professionals do not leverage safety science to inform their practice.”
Is it worth having a conversation about this, even collecting a few stories? Or are safety professionals too time-strapped doing safety work?