Category Archives: Sense-making

Why a Complexity-based Safety Audit makes sense

Imagine you work in a company with a good safety record. By “good”, I mean you are in the upper quartile as per your industry’s benchmarking stats. Things were rolling along nicely until this past year, when a steep increase in failures led to concerns over the safety culture.

Historically there have been about two safety-related events a year, but last year there were ten. Accident investigation reports show it’s not one category but several: Bodily reaction and exertion, Contact with equipment, Misuse of hazardous materials, Falls and falling objects. Fortunately there were no fatalities; most incidents were classified as medical aids, but one resulted in a serious injury. Three medical aid injuries came from contact with moving equipment and two were related to improper tool and glove use. The serious injury was due to a worker falling off a ladder.

What upsets you is that the pre-job briefing did not identify the correct glove or the proper use of hazardous materials. You also read the near-miss incidents and heard disturbing rumours through the grapevine that some recent close calls went unreported. Something needs to be done, but what should you do?

One option is to do a safety audit. It will be highly visible and show executives and workers you mean business. Phase 1 will consist of conducting an assessment and developing action plans to close any performance gaps. The gaps typically concentrate on strengthening safety robustness – how well practices follow safety policies, systems, standards, regulations, and rules to avoid known failures. Phase 2 will implement the action plans, ensuring that actions are completed with quality and in a timely manner. A survey will gauge worker response. The safety audit project will end with a report that details the completion of the actions and observations on how the organization has responded to the implementation of the plan.

For optics reasons, you are considering hiring an external consultant with safety expertise. Ideally this expert would know what to look for and, through interviews and field observations, pinpoint root causes. Action plans will be formulated to close the gaps. If done carefully, no blame will be attached to anybody. To ensure no one person or group is singled out, any subsequent compliance training and testing will be given to all employees. Assuming all goes well, you can turn the page, close the chapter, and march on assuming all is well. Or is it?

You hesitate because you’ve experienced safety audits before. Yes, there are short-term improvements (Hawthorne effect?) but eventually you noticed that people drifted back to old habits and patterns. Failure (personal injury or damage to equipment, tools, facilities) didn’t happen until years later, well after all the audit hubbub had dissipated. A bit of “what-iffing” is making you pause about going down the safety audit path again:

  1. What if the external safety consultants are trapped by their expertise because they already believe they have the solution and see  the job as implementing their solution and making it work? That is, what if they are great at using a hammer and therefore see everything as a nail, including a screw?
  2. What if the safety audit is built around a position that is the consultant’s ideal future state but not ours?
  3. What if the survey questionnaire is designed to validate what the safety consultants have seen in the past?
  4. What if front-line workers are reluctant to answer questions during interviews because they feel put on trial, fear being blamed, or, worse, see themselves as subjects in a perceived witch hunt?
  5. What if safety personnel, supervisors, managers, and executives are reluctant to answer interview questions or complete survey questionnaires for fear of being held accountable for failures that happened under their watch?
  6. What if employees find it very unsettling to have someone looking over their shoulders recording field observations? What if the union complains because it’s deemed a regression to the Scientific Management era (viz., Charlie Chaplin’s movie ‘Modern Times’)?
  7. What if the performance gaps identified are measured against Safety Management System (SMS) outcomes that are difficult to quantify (e.g., All personnel must report near-miss incidents at all times)?
  8. What if we develop an action plan and during implementation realize the assumptions made about the future are wrong?
  9. What if during implementation a better solution emerges than the one recommended?
  10. What if the expenditure on a safety audit just reinforces what we know and nothing new is learned?

Is there another option besides a traditional safety audit? Yes, there is. And it’s different.

A sense-making approach boosts the capacity of people and organizations to handle their activities successfully, under varying conditions. It recognizes the real world is replete with safety paradoxes and dilemmas that workers must struggle with on a daily basis. The proven methods are pragmatic and make sense of complexity in safety in order to act. The stories gathered from the workforce including contractors often go beyond safety robustness (preventing failure) and provide insights into the company’s level of safety resilience. Resilience is the ability to quickly recover after a failure, speedily implement an unanticipated opportunity arising from an event, and respond early to an alert that a major catastrophe might be looming over the horizon.

The paradigm is not that of an expert with deep knowledge of safety best practices but that of an anthropologist informed by the historical evolution of safety practices. As the Santa Fe Institute has noted, companies operate in industries that are complex adaptive systems (CAS). Safety is neither a product nor a service; it is an emergent property of a complex adaptive system. For instance, safety rules enable safety to emerge, but too many rules can overwhelm workers and create confusion. If a tipping point is reached, danger emerges in the form of workers doing workarounds or deliberately ignoring rules to get work done.

Anthropologists believe answers about culture are best found by engaging the total workforce. The sense-making consultant’s role is to understand the decisions people have made. Surfacing similarities and differences in behaviour can highlight what forces are at play that influence people to choose to stay within compliance boundaries or to take calculated risks.

Applying complexity-based thinking, here is how the what-if concerns listed above are addressed.

  1. Escape expertise entrapment.
    There are no preconceived notions or solutions. Working as ethnographers, the consultants record observations that describe the safety culture. Stories are easy to capture since people are born storytellers. Stories add context, can describe complex situations, and engage humans emotionally.
  2. Be mindful.
    You can only act to change the Present. Therefore, attention is placed on the current situation and not some ideal future state that may or may not materialize.
  3. Stay clear of cognitive dissonance.
    Cognitive dissonance leads to confirmation bias — the often unconscious act of referencing only those perspectives that fuel pre-existing views.
    There are no survey questionnaires. Questions asked are simple prompts to help workers get started in sharing their stories. Stories are very effective in capturing decisions people must make dealing with unexpected varying conditions such as conflicting safety rules, lack of proper equipment, tension amongst safety, productivity, and legal compliance.
  4. Avoid confrontation.
    Front-line workers are not required to answer audit questions. They have the trust and freedom to tell any story they wish. It’s what matters most to them, not what a safety expert thinks is important and needs to interrogate.
  5. Treat everyone the same.
    Safety personnel,  supervisors, managers, executives also get to tell their stories. Their behaviours and interactions play a huge role in shaping the safety culture.  There is no “Them versus Us”; it’s anyone and everyone in the CAS.
  6. Make it easy and comfortable.
    There is minimal uneasiness with recording field observations since workers choose the topics. A story with video might show what goes wrong or what goes right. If union agents are present, they are welcome to tell their safety stories and add diversity to the narrative mosaic.
  7. Be guided by the compass, not the clock.
    Performance improvement is achieved by focusing on direction, not targeted SMS outcomes. This avoids the dilemma that arises when workers’ stories identify the SMS itself as a problem. Direction comes from asking: “Where do we want fewer stories like these and more stories like that?” The effectiveness of a performance improvement intervention is measured by the shift in subsequent stories told (a small sketch of this measure follows this list).
  8. Choose safe-to-fail over fail-safe.
    Avoid spending the time and effort to develop a robust fail-safe action plan only to weaken it with CYA assumptions. When dealing with uncertainty and ambiguity, probe the CAS with safe-to-fail experiments. This is the essence of Nudge theory: introducing small interventions to influence behaviour change.
  9. Sail, not rail.
    Think of navigating a ship on an uncontrollable sea of complexity rather than driving a train on a controllable track of certainty. Deviation manoeuvres like tacking and jibing are expected. By designing actions to be small, the emergence of surprising consequences can be better handled. Positive serendipitous opportunities heading in the desired direction can be seized immediately; negative consequences are quickly dampened.
  10. Focus on what you don’t know.
    A sense-making approach opens the individual’s and thus the company’s mindset to Knowledge (known knowns) as well as Ignorance (unknowns, unknowables).  New learning comes from exploring Ignorance. By sensing different behaviour patterns that emerge from the nudges, it becomes clearer why people behave the way they do. This discovery may lead to new ways to strengthen safety robustness + build safety resilience. This is managing the evolutionary potential of the Present, one small step at a time.
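
To make the “shift in stories” measure concrete, here is a minimal sketch in Python. It assumes stories have been tagged by the storytellers as ones we want “more of” or “fewer of”; the tags, counts, and numbers are invented purely for illustration, not drawn from any particular sense-making toolkit.

```python
from collections import Counter

def story_shift(before, after, target="more-of"):
    """Change in the share of desirable ('more-of') stories after an intervention.

    `before` and `after` are lists with one tag per story, e.g. "more-of"
    (stories we want more of) or "fewer-of" (stories we want fewer of).
    A positive result means the narrative is moving in the desired direction.
    """
    def share(tags):
        counts = Counter(tags)
        total = sum(counts.values())
        return counts[target] / total if total else 0.0

    return share(after) - share(before)

# Hypothetical tag counts before and after a small nudge
before = ["fewer-of"] * 34 + ["more-of"] * 16   # 32% desirable stories
after  = ["fewer-of"] * 27 + ["more-of"] * 23   # 46% desirable stories
print(f"Shift toward desired stories: {story_shift(before, after):+.0%}")
```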

If you’re tired of doing same old, same old, then it’s time to conduct an “audit” on your safety audit approach and choose to do safety differently. Click here for more thoughts on safety audits.

A Complexity-based approach to Climate Change

I live in British Columbia, a province that began implementing a Climate Action Plan in 2008. Last year citizens were invited to submit their ideas and thoughts to help an appointed Climate Leadership team refresh the plan. The message I offered was that climate change is a complex problem, not a complicated one.

You can analyze a complicated problem by breaking it down into parts, examining each piece separately, fixing it, and then putting the pieces all back together. In other words, the whole is equal to the sum of its parts.

In contrast, a complex problem cannot be reduced into parts but must be analyzed holistically because of the relationships amongst the various known and unknown pieces. The whole is greater than the sum of its parts.

The main lessons taught at colleges and universities focus on Newtonian physics, reductionism, cause & effect, and linearity. Complexity science is only about 30 years old, so it’s not surprising that concepts such as emergence, diversity, feedback loops, strange attractors, pattern recognition, self-organization, and non-linear thinking remain on the sidelines. Yet these phenomena of complexity show up in everyday language: going viral, butterfly effect, wisdom of crowds, tipping point, serendipity, Black Swans.

We are taught how to think critically and we value being competent at arguing to defend our position. We apply deductive and inductive reasoning to win our case. Sadly, little time is invested in learning how to apply abductive reasoning and how to explore adaptation and exaptation to evolve a complex issue.

“I think the next century will be the century of complexity.”
Stephen Hawking January 2000

We are 15 years into the century of complexity. My submission applied complexity science to today’s climate change issues:

1. Climate change is a complex issue, not a complicated one. Many years of training have steered us to analyze parts as reductionists. Think of mayonnaise: you can’t break it down to analyze the ingredients, so spread holistically.

2. Stay open-minded. Delay the desire to converge and stop new information from entering. Don’t lock into some idealistic future state strategic plan. Remember, once you think you have the answer, you’re in trouble.

3. Don’t be outcome-based and establish destination targets. Be direction-oriented and deliberately ambiguous to enable new possibilities to emerge.

4. Adopt a sense-making approach – make sense of the present situation in order to act upon it.

Here is the link to the Climate Leadership team’s report and 32 recommendations. The word “complexity” appears twice. Hmm.

As an engaged BC citizen, I will carry on being a skeptic in a good sense and voice my opinions when I see a hammer nailing a screw.

Safety Culture, the Movie

It’s the holiday season. One terrific way to celebrate as a family is to see a movie together. Our pick? Star Wars: the Force Awakens.
Well, that was an easy decision. The next one is harder though…what sort of experience do we want? Will it be UltraAVX, D-Box, IMAX, 3D, VIP, Dolby ATMOS or Surround sound, or standard digital? While sorting through the movie options, for some reason I began thinking about safety culture and had an epiphany. Safety culture is a movie, not a photo.

A photo would be a Star Wars poster, a single image that a designer has artistically constructed. It’s not the full story, just a teaser aimed to influence people to buy a ticket. We understand this and don’t expect to comprehend the entire picture from one poster. A safety culture survey or audit should be treated in the same fashion. All we see is a photo, a snapshot capturing a moment in time. Similar to the poster artist, a survey designer also has a preconceived idea; influence is in the form of questions. The range of questions extends from researched “best practices” to personal whim.

I believe this is a major limitation of the survey/audit/poster. It could totally miss what people actually experience as a movie. A movie introduces visual motion and audible sound to excite our human senses and release emotions and feelings. We can watch behaviours as well as their positive and negative consequences being delivered. With flow, we are able to sense operating point movement and drift into the zone of complacency and eventual failure. A safety culture has sound that a photo cannot reveal. In a movie you can hear loud communication, quiet conversations, or the lack of them (e.g., a cone of silence).

If we were to create “Safety Culture, the Movie”, what would we need to consider? I’ve compiled a short list. What would you add?

  • Human actor engagement
    • Actors on screen – lead characters, supporting players, cameo appearances, cast in crowds; front-line workers, supervisors, safety professionals, public at large
    • Actors behind the screen – investors, producer, director, music arranger, theatre owners, craft guilds; Board, execs, project managers, suppliers, unions
    • Actors in front of the screen – paying audience, theatre staff, movie critics; customers, safety associations, regulatory inspectors
  • Story line
    • Safety culture is one big story
    • Safety culture movie is neverending
    • Within the one big story are several side stories, episodes, subplots
  • Relationships between characters and roles played
    • Heroes, villains, maidens in distress, comic relief, clones
    • Contact is continuous and relationships can shift over time (compare to a snapshot audit focusing on one scene at a particular time slot)
    • What seems in the beginning to be independent interactions are often interconnected (“Luke…I am your father”) and may lead to a dilemma or paradox later
  • Theme
    • Overt message of the safety culture movie – Good triumphs Evil? Might makes Right? Focus on what goes wrong? Honesty? Respect?
    • Hidden messages – Resistance is futile? Pay attention to outliers? Do what I say, not what I do? It’s all about the optics
    • Significance of myths, legends, rituals in the safety culture – the Dark side, Jedi order, zero harm workplace, behaviour-based safety
  • Critique
    • What can we learn if our movie is scored on a Flixster (Rotten Tomatoes) scale out of 100?
      • What does a score of 30, 65, 95 tell us about performance? Success?
      • We can learn from each critic and fan comment standing on its own rather than dealing with a mathematical average
    • Feedback will influence and shape the ongoing movie
      • Too dark, not enough SFX, too many safety rules, not enough communication
    • Artifacts
      • A poster provides a few while a movie contains numerous for the discerning eye
      • Besides the artifacts displayed on screen, many are revealed in the narratives actors share with each other off screen during lunch hours, breaks, commutes
      • What might we further learn by closely examining artifacts? For instance, what’s the meaning behind…
        • Leia’s new hairdo (a new safety compliance policy)?
        • The short, funny looking alien standing next to R2D2 and C3PO (a safety watcher?)
        • Why Han Solo can’t be separated from his blaster (just following a PPE rule)?
        • Rey using some kind of staff weapon, perhaps similar to the staffs used by General Grievous’ body guards in Episode III (is it SOP certified)?
        • The star destroyers eliminating the control towers which caused them so many problems in the Battle of Endor (implementation of an accident investigation recommendation)?
        • Improvement in the X-Wing fighters (a safety by design change after an action review with surviving pilots?)

If you’d like to share your thoughts and comments, I suggest entering them at the safetydifferently.com website.

Season’s greetings, everyone! And may the Force be with you.

Safety in the Age of Cognitive Complexity

During engineering school in the late 1960s I was taught to ignore friction as a force and to use first-order approximate linear models. When measuring distance, ignore the curvature of the earth and treat it as a straight line. Break things down into their parts, analyze each component, fix it, and then put it all back together. In the 1990s another paradigm, coined Systems Thinking, came about: we jumped from Taylorism to embrace the Fifth Discipline, socio-technical systems, and Business Process Reengineering. When human issues arose, we bolted on Change Management to support huge advances in information technology. All industries have benefited from and been disrupted by business and technological breakthroughs. Safety as an industry is no exception.

In January 2000, Stephen Hawking stated: “I think the next century will be the century of complexity.” In the spirit of safety differently, let’s explore safety from a non-linear, complexity science perspective. Safety is no longer viewed as a static product or a service but as an emergent property in a complex adaptive system (CAS). Emergence is a real-world phenomenon that Systems Thinking does not address or, perhaps, chooses to ignore to keep matters simple. Glenda Eoyang defines a CAS as “a collection of individual agents who have the freedom to act in unpredictable ways, and whose actions are interconnected such that one agent’s actions changes the context for other agents.”

Let’s be clear. I’m not advocating abandoning safety rules, regulations, the hierarchy of controls, checklists, etc. and letting workers go wild. We just need to treat them as boundaries and constraints that either enable or prevent safety, as a CAS property, from emerging. By repurposing them in our minds, we can better see why rules prevent danger from emerging and why too many constraining rules might create the conditions for danger — confusion, distraction, anger — to emerge. As overloading increases, a tipping point is reached: the apex of a non-linear parabolic curve. A phase transition occurs and a surprise emerges, typically the failure of a brittle agent.
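
As a purely illustrative toy model (the functional form and parameters are invented for this sketch, not drawn from the safety literature), the Python snippet below captures that rise-peak-fall shape: each added rule contributes diminishing protective benefit, while cognitive overload grows quadratically once a notional capacity is exceeded, producing a tipping point beyond which more rules erode emergent safety.

```python
import numpy as np

def emergent_safety(n_rules, capacity=20, benefit=1.0, overload_cost=0.03):
    """Toy model: rules add protective benefit with diminishing returns,
    but cognitive load grows non-linearly once the workforce's notional
    capacity is exceeded, so the curve rises, peaks, then falls."""
    protection = benefit * np.log1p(n_rules)                        # diminishing returns
    overload = overload_cost * np.maximum(0, n_rules - capacity) ** 2
    return protection - overload

rules = np.arange(0, 60)
safety = emergent_safety(rules)
tipping_point = rules[np.argmax(safety)]
print(f"Toy tipping point at ~{tipping_point} rules; past it, adding rules erodes emergent safety")
```
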
As the eras of business have evolved from scientific management to systems thinking, so has safety in parallel. The graphic below is a modification of an Erik Hollnagel slide presented at the 2012 Resilience Learning Lab in Vancouver and extends beyond to an Age of Cognitive Complexity.


In this age of cognitive complexity, the whole is greater than the sum of its parts (aka agents).

  1. “Different is more” which means the greater the diversity of agents, the greater the distributed cognition. Think wisdom of crowds, crowdfunding, crowdsourcing.
  2. “More is different” which means that when you put the pieces of a complex system together, you get behaviour that is only understandable and explainable by understanding how the pieces work in concert (see Ron Gantt’s enlightening posting). In a CAS, doing the same thing over and over again can lead to a different result.
  3. “Different is order within unorder” which means that in a complex environment full of confusion and unpredictability, order can be found in the form of hidden patterns. Think of a meeting agenda that shapes the orderly flow of discussion and contributions of individuals in a meeting. In nature, think of fractals that can be found everywhere. (A tiny sketch of hidden order follows this list.)
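
Here is a tiny, purely illustrative sketch of order hidden within unorder, using the logistic map — a classic toy example from complexity science. The sequence of values looks erratic, yet every step obeys the same simple deterministic rule; the pattern only appears when you look at the relationship between consecutive values rather than at the series itself.

```python
def logistic_series(r=3.9, x0=0.2, n=20):
    """Iterate the logistic map x_next = r * x * (1 - x)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

xs = logistic_series()
print("the series looks erratic:", [round(x, 3) for x in xs[:8]])
# Hidden order: every consecutive pair sits exactly on the same parabola.
assert all(abs(x1 - 3.9 * x0 * (1 - x0)) < 1e-12 for x0, x1 in zip(xs, xs[1:]))
print("...yet each step obeys x_next = 3.9 * x * (1 - x) exactly")
```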

When working in a Newtonian-Cartesian linear system, you can craft an idealistic Future state and develop a safety plan to get there. However, in a CAS, predicting the future is essentially a waste of time. The key is to make sense of the current conditions and focus on the evolutionary potential of the Present.

Is the shift to complexity-based safety thinking sufficient to warrant a new label? Dare we call this different paradigm Safety-III? It can be a container for the application of cognition and complexity concepts and language to safety: Adaptive safety, Abductive safety reasoning, Exaptive safety innovation, Viral safety communication to build trust, Autopoietic SMS, Dialogic safety investigation, Heuristics in safety assessment, Self-organizing role-based crew structures, Strange attractors as safety values, Cognitive activation using sensemaking safety rituals, Feedback loops in safety best practices, Brittleness in life saving rules, Swarm intelligent emergency response, Human sensor networks, Narrative safe-to-fail experiments, Attitude real-time monitoring as a safety lead indicator, Cynefin safety dynamics. Over time I’d like to open up an exploratory dialogue on some of these on the safetydifferently.com website.

From a review of past and current publications, I sense compelling support for a complexity-based safety approach. Here are a few on my list (my personal thoughts are the bulleted points not in quotes.)

System Safety Engineering: Back to the Future (Nancy Leveson 2002)

  • ‘Safety is clearly an emergent property of systems.’
  • ‘It is not possible to take a single system component, like a software module, in isolation and assess its safety. A component that is perfectly safe in one system may not be when used in another.’

The Complexity of Failure (Sidney Dekker, Paul Cilliers, Jan-Hendrik Hofmeyr 2011)

  • ‘When accidents are seen as complex phenomena, there is no longer an obvious relationship between the behavior of parts in the system (or their malfunctioning, e.g. ‘‘human errors’’) and system-level outcomes.’
  • ‘Investigations that embrace complexity, then, might stop looking for the ‘‘causes’’ of failure or success. Instead, they gather multiple narratives from different perspectives inside of the complex system, which give partially overlapping and partially contradictory accounts of how emergent outcomes come about.
  • ‘The complexity perspective dispenses with the notion that there are easy answers to a complex systems event—supposedly within reach of the one with the best method or most objective investigative viewpoint. It allows us to invite more voices into the conversation, and to celebrate their diversity and contributions.’

Drift into Failure (Sidney Dekker 2011)

  • By taking complexity theory ideas like the butterfly effect, unruly technology, tipping points, diversity, we can understand that failure emerges opportunistically, non-randomly, from the very webs of relationships that breed success and that are supposed to protect organizations from disaster.
  • ‘Safety is an emergent property, and its erosion is not about the breakage or lack of quality of single components.’
  • ‘Drifting into failure is not so much about breakdowns or malfunctioning of components, as it is about an organization not adapting effectively to cope with the complexity of its own structure and environment.’

Safety-I and Safety-II (Erik Hollnagel 2014)

  • ‘Karl Weick in a 1987 California Management Review article introduced the idea of reliability as a dynamic non-event. This has often been paraphrased to define safety as a ‘dynamic non-event’…even though it may be a slight misinterpretation.’
  • ‘Safety-I defines safety as a condition where the number of adverse outcomes (accidents/incidents/near misses) is as low as possible.’
  • When there is an absence of an adverse outcome, it becomes a non-event which people take for granted. When people see nothing, they presume that nothing is happening and that nothing will continue to happen if they continue to act as before.
  • ‘Safety-II is defined as a condition where as much as possible goes right.’
  • ‘In Safety-II the absence of failures is a result of active engagement. This is not safety as a non-event but safety as something that happens. Because it is something that happens, it can be observed, measured, and managed.’
  • Safety-III is observing, measuring, maybe managing, but definitely influencing changes in the conditions that enable safety to happen in a CAS. In addition, it’s active engagement in observing, measuring, maybe managing, but definitely influencing changes in the complex conditions that prevent danger from emerging.

Todd Conklin PAPod 28 with Ivan Pupulidy (July 2015)

  • 18:29 ‘If we could start an emerging dialogue amongst our workers around the topic of conditions, we accelerate the learning in a really unique way.’
  • ‘We started off with Safety-I, Safety-II. That was our original model. What we really recognized rather quickly was that there was a Safety-III.’
  • ‘Safety-III was developing this concept of expertise around recognition of changes in the environment or changes in the conditions.’

Peter Sandman (Dave Johnson article, ISHN Magazine Oct 2015)

  • ‘The fast pace of change in business today arguably requires a taste for chaos and an ability to cope well with high uncertainty.’
  • ‘What is called ‘innovation’ may actually be just coping with an ever-faster pace of change: anticipating the changes on the horizon and adapting promptly to the changes that are already occurring. This sort of flexible adaptability is genuinely antithetical to orderly, rule-governed, stable behavioral protocols.’
  • ‘Safety has traditionally been prone to orderly, rule-governed, stable behavioral protocols. For the sorts of organizational cultures that succeed in today’s environment, we may need to create a more flexible, adaptable, innovative, chaos-tolerating approach to safety.’
  • ‘With the advent of ‘big data,’ business today is ever-more analytic. I don’t know whether it’s true that safety people tend to be intuitive/empathic – but if that’s the case, then safety people may be increasingly out of step. And safety may need to evolve in a more analytic direction. That needn’t mean caring less about others, of course – just using a different skill set to understand why others are the way they are.’
  • I suggest that different skill set will be based on a complexity-based safety approach.

Resilience Engineering Concepts and Precepts (Hollnagel, Woods & Leveson, 2006)

  • Four essential abilities that a system or an organisation must have: Respond, Anticipate, Monitor, Learn. Below is how I see the fit with complexity principles.
  • We respond by focusing on the Present. Typically it’s an action to quickly recover from a failure. However, it can also be seizing opportunity that serendipitously emerged. Carpe Diem. Because you can’t predict the future in a CAS, having a ready-made set of emergency responses won’t help if unknowable and unimaginable Black Swans occur. Heuristics and complex swarming strategies are required to cope with the actual.
  • We anticipate by raising our awareness of potential tipping points. We don’t predict the future but practice spotting weak signals of emerging trends as early as we can. Our acute alertness may sense we’re in the zone of complacency and need to pull back the operating point.
  • We learn by making sense of the present and adapting to co-evolve the system. We call out Safety-I myths and fallacies proved to be in error by facts based on complexity science. We realize the importance of praxis (co-evolving theory with practice).
  • We monitor the emergence of safety or danger as adjustments are made to varying conditions. We monitor the margins of maneuver and whether “cast in stone” routines and habits are building resilience or increasing brittleness.

What publications am I missing that support or argue against a complexity-based safety approach? Will calling it Safety-III help to highlight the change or risk it being quickly dismissed as a safety fad?

If you’d like to share your thoughts and comments, I suggest entering them at the safetydifferently.com website.

Danger was the safest thing in the world if you went about it right

I am now contributing safety thoughts and ideas on safetydifferently.com. Here is a reprint of my initial posting. If you wish to add a comment, I suggest you first read the other comments at safetydifferently.com and then include yours at the end to join the conversation.

Danger was the safest thing in the world if you went about it right

This seemingly paradoxical statement was penned by Annie Dillard. She is neither a safety professional nor a line manager steeped in safety experience. Annie is a writer who, in her book The Writing Life, became fascinated by a stunt pilot, Dave Rahm.

“The air show announcer hushed. He had been squawking all day, and now he quit. The crowd stilled. Even the children watched dumbstruck as the slow, black biplane buzzed its way around the air. Rahm made beauty with his whole body; it was pure pattern, and you could watch it happen. The plane moved every way a line can move, and it controlled three dimensions, so the line carved massive and subtle slits in the air like sculptures. The plane looped the loop, seeming to arch its back like a gymnast; it stalled, dropped, and spun out of it climbing; it spiraled and knifed west on one side’s wings and back east on another; it turned cartwheels, which must be physically impossible; it played with its own line like a cat with yarn.”

When Rahm wasn’t entertaining the audience on the ground, he was entertaining students as a geology professor at Western Washington State College. His fame for “doing it right” in aerobatics led King Hussein to recruit him to teach the art and science to the Royal Jordanian stunt flying team. While performing a maneuver in Jordan, Rahm and his plane plummeted to the ground and burst into flames. The royal family and Rahm’s wife and son were watching. Dave Rahm was killed instantly.

After years and years of doing it right, something went wrong for Dave Rahm. How could this have happened? How can danger be the safest thing? Let’s turn our attention towards Resilience Engineering and the concept of Emergent Systems. By viewing safety as an emergent property of a complex adaptive system, Dillard’s statement begins to make sense.

Clearly a stunt pilot pushes the envelope by taking calculated risks. He gets the job done, which is to thrill the audience below. Rahm’s maneuver called the “headache” was startling as the plane stalled and spun towards earth, seemingly out of control. He then adjusted his performance to the varying conditions to bring the plane safely under control. He wasn’t preoccupied with what to avoid and what not to do. He knew in his mind what was the right thing to do.

We can apply Richard Cook’s modified Rasmussen diagram to characterize this deliberate moving of the operating point towards failure while taking action to pull back from the edge. As the operating point moves closer to failure, conditions change, enabling danger as a system property to emerge. To Annie Dillard, this aggressive heading-in and pulling-back action was how danger was the safest thing in the world if you went about it right.

“Rahm did everything his plane could do: tailspins, four-point rolls, flat spins, figure 8’s, snap rolls, and hammerheads. He did pirouettes on the plane’s tail. The other pilots could do these stunts, too, skillfully, one at a time. But Rahm used the plane inexhaustibly, like a brush marking thin air.”

The job was to thrill people with acts that appeared dangerous. And show after show Dave Rahm pleased the crowd and got the job done. However, on his fatal ride, Rahm and his plane somehow reached a non-linear complexity phenomenon called the tipping point, a point of no return, and sadly paid the final price.

Have you encountered workers who behave like stunt pilots? A stunt pilot will take risks and fly as close to the edge as possible. If you were responsible for their safety, or a consultant asked to make recommendations, what would you do? Would you issue a “cease and desist” safety bulletin? Add a new “safety first…” rule to remove any glimmers of workplace creativity? Order more compliance checking and inspections? Offer whistle-blowing protection? Punish the stunt pilots?

On the other hand, you could appreciate a worker’s willingness to take risks, to adjust performance when faced with unexpected variations in everyday work. You could treat a completed task as a learning experience and encourage the worker to share her story. By showing Richard Cook’s video you could make stunt pilots very aware of the complacency zone and over time, how one can drift into failure. This could lead to an engaging conversation about at-risk vs. reckless behaviour.

How would you deal with workers who act as stunt pilots? Command & control? Educate & empower? Would you do either/or? Or do both/and?

And now for something completely different


Fans of Monty Python will quickly recognize this blog’s title. Avid followers will remember the Fish Slapping Dance.

 

Well, the time has come when many of us feel the need to slap traditional safety practices on the side of the head with a fish. If you agree, check out safetydifferently.com. What it’s about:

Safety differently. The words seem contradictory next to each other. When facing difficulties with a potential disastrous loss, it makes more sense to wish for more of what is already known. Safety should come from well-established ways of doing things, removed far from where control may be lost.

But as workplaces and systems become increasingly interactive, more tightly coupled, subject to a fast pace of technological change and tremendous economic pressures to be innovative and cutting-edge, such “well-known” places are increasingly rare. If they exist at all.

There is a growing need to do safety differently.

People and organisations need to adapt. Traditional approaches to safety are not well suited for this. They are, to a large extent built on linear ideas – ideas about tighter control of work and processes, of removing creativity and autonomy, of telling people to comply.

The purpose of this website is to celebrate new and different approaches to safety. More specifically it is about exploring approaches that boost the capacity of people and organisations to handle their activities successfully, under varying conditions. Some of the analytical leverage will come from theoretical advancements in fields such as adaptive safety, resilience engineering, complexity theory, and human factors. But the ambition is to stay close to pragmatic advancements that actually deliver safety differently, in action.

Also, bashing safety initiatives that are based on constraining people’s capacities will be a frequent ingredient.

I had the opportunity to share my thoughts and ideas on what  “safety differently” is. I have expanded on these and added a few reference links.

Safety differently is not blindly following a stepping-stone path but taking the time to turn over each stone and ask why the stone is here in the first place, what the intent was, and whether it is still valid and useful.

During engineering school I was taught to ignore friction as a force and use first order approximate linear models. When measuring distance, ignore the curvature of earth and treat as a straight line.

Much of this thinking has translated into safety. Examples include Heinrich’s safety triangle, the Domino theory, and Reason’s Swiss Cheese model. Safety differently is acknowledging that the real world is non-linear and Pareto distributed. It’s understanding how complexity — diversity, coherence, tipping points, strange attractors, emergence, Black Swans — impacts safety practices. Safety differently is no longer seeing safety as a product or a service but as an emergent property in a complex adaptive system.

We can learn a lot from the amazing Big History Project. David Christian talks about the Goldilocks conditions that lead to increased complexity. Safety differently looks at what these “just right” conditions are that allow safety to emerge.

Because safety is an emergent property, we can’t create, control, or manage a safety culture. Safety differently researches modulators that might influence and shape the present culture. Modulators are different from Drivers. Drivers are cause & effect oriented (if we do this, we get that) whereas Modulators are catalysts we deliberately place into a system without being able to predict what will happen. It sounds a bit crazy but it’s done all the time. Think of a restaurant experimenting with a new menu item. The chef doesn’t actually know if it will be a success. Even if it fails, he will learn more about his clientele’s food preferences. We call these safe-to-fail experiments, where the knowledge gained is worth more than the cost of the probe.

Cynefin Framework: Follow the Green line to Best Practices

Safety differently views best practices as a space where ideas go to die. It’s the final end point of a linear line. Why die? Because there is no feedback loop that allows change to occur. Why no feedback loop? Because by definition these are best practices and no improvement is necessary! Unfortunately, a Goldilocks condition we call complacency arises, and the phenomenon is a drift into failure.

Safety differently means using different strategies when navigating complexity. A good example is the Saudi culture that Ron mentioned, re taking a roundabout route to get to a destination. This is a complexity strategy that John Kay wrote about in his book Obliquity [Click here to see his TEDx talk].


Safety differently means deploying abductive rather than traditional deductive and inductive reasoning.

Safety differently means adaptation + exaptation as well as robustness + resilience.


Safety differently is not dismissing the past. We honour it by keeping practices that continue to serve well. But we will abandon approaches, methods, and tools that have now been shown to be false — myths and fallacies — by complexity science, cognitive science, and other holistic research.

 


In the early 1900s, humans were considered accident prone; Management took actions to keep people from mishandling machines and upsetting efficiency. Today, we know differently: The purpose of machines is to augment human intelligence and release people from non-thinking labour roles.

Do you lead from the Past or lead to the Future?


Recently Loren Murray, Head of Safety for Pacific Brands in Australia, penned a thought-provoking blog on the default future, a concept from the book ‘The Three Laws of Performance’. I came across the book a few years ago and digested it from a leader-effectiveness standpoint. Loren does a nice job applying it to a safety perspective.

“During my career I noticed that safety professionals (and this included myself) have a familiar box of tricks. We complete risk assessments, enshrine what we learn into a procedure or SOP, train on it, set rules and consequences, ‘consult’ via toolboxes or committees and then observe or audit.

When something untoward happens we stop, reflect and somehow end up with our hands back in the same box of tricks writing more procedures, delivering more training (mostly on what people already know), complete more audits and ensure the rules are better enforced….harder, meaner, faster. The default future described in The Three Laws of Performance looked a lot like what I just described!

What is the default future? We like to think our future is untold, that whatever we envision for our future can happen….However for most of us and the organisations we work for, this isn’t the case. To illustrate. You get bitten by a dog when you are a child. You decide dogs are unsafe. You become an adult, have kids and they want a dog. Because of your experiences in the past it is unlikely you will get a dog for your kids. The future isn’t new or untold it’s more of the past. Or in a phrase, the past becomes our future. This is the ‘default future’.

Take a moment to consider this. It’s pretty powerful stuff with implications personally and organisationally. What you decide in the past will ultimately become your future.

How does this affect how we practice safety? Consider our trusty box of tricks. I spent years learning the irrefutable logic of things like the safety triangle and iceberg theory. How many times have I heard about DuPont’s safety journey? Or the powerful imagery of zero harm. The undeniable importance of ‘strong and visible’ leadership (whatever that means) which breeds catch phrases like safety is ‘priority number one’.

These views are the ‘agreement reality’ of my profession. These agreements have been in place for decades. I learnt them at school, they were confirmed by my mentors, and given credibility by our regulators and schooling system. Some of the most important companies in Australia espouse it, our academics teach it, students devote years to learning it, workers expect it…. Our collective safety PAST is really powerful.”

 
Loren’s blog caused me to reflect on the three laws and how they might be applied in a complexity-based safety approach. Let’s see how they can help us learn so that we don’t keep repeating the past.

First Law of Performance
“How people perform correlates to how situations occur to them.”

It’s pretty clear that the paradigms which dominate current safety thinking view people as error prone or as sources of problems within idealistic technological systems, structures, and processes. Perplexed managers get into a “fix-it” mode by recalling what worked in the past and assuming that is the solution going forward. It’s about being mindful of perception blindness and opening both eyes.

Second Law of Performance
“How a situation occurs arises in language.”

As evidence-based safety analysts, we need to hear the language and capture the conversations. One way is the Narrative approach where data is collected in the form of stories. We may even go beyond words and collect pictures, voice recordings, water cooler snippets, grapevine rumours, etc. When we see everything as a collective, we can discover themes and patterns emerging. These findings could be the keys that lead to an “invented” future.

Third Law of Performance
“Future-based language transforms how situations occur to people.”

Here are some possible yet practical shifts you can start with right now:

  • Let’s talk less about inspecting to catch people doing the wrong things and talk more about Safety-II; i.e., focusing on doing what’s right.
  • Let’s talk less about work-as-imagined deviations and more about work-as-done adjustments; i.e., less blaming and more appreciating and learning how people adjust performance when faced with varying, unexpected conditions.
  • Let’s talk less about past accident statistics and injury reporting systems and talk more about sensing networks that trigger anticipatory awareness of non-predictable negative events.
  • Let’s talk less about some idealistic Future state vision we hope to achieve linearly in a few years and talk more about staying in the Present, doing more proactive listening, and responding to the patterns that emerge in the Now.
  • And one more…let’s talk less about being reductionists (breaking down a social-technical system into its parts) and talk more about being holistic and understanding how parts (human, machines, ideas, etc.) relate, interact, and adapt together in a complex work environment.

The “invented” future conceivably may be one that is unknowable and unimaginable today but will emerge with future-based conversations.

What are you doing as a leader today? Leading workers to the default future or leading them to an invented Future?

Click here to read Loren’s entire blog posting.

When thinking of Safety, think of coffee aroma

Safety has always been a hard sell to management and to front-line workers because, as Karl Weick put forward, safety is a dynamic non-event. Non-events are taken for granted. When people see nothing, they presume that nothing is happening and that nothing will continue to happen if they continue to act as before.

I’m now looking at safety from a complexity science perspective as something that emerges when system agents interact. An example is aroma emerging when hot water interacts with dry coffee grounds. Emergence is a real-world phenomenon that Systems Thinking does not address.

Safety-I and Safety-II do not create safety but provide the conditions for safety to dynamically emerge. But as a non-event, it’s invisible and people see nothing. Just as safety can emerge, so can danger as an invisible non-event. What we see is failure (e.g., accident, injury, fatality) when the tipping point is reached. We can also reach a tipping point when we do too much of a good thing. Safety rules are valuable, but if a worker is overwhelmed by too many, danger in the form of confusion and distraction can emerge.

I see great promise in advancing the Safety-II paradigm to understand what are the right things people should be doing under varying conditions to enable safety to emerge.

For further insights into Safety-II, I suggest reading Steven Shorrock’s posting What Safety-II isn’t on Safetydifferently.com. Below are my additional comments under each point made by Steven with a tie to complexity science. Thanks, Steven.

Safety-II isn’t about looking only at success or the positive
Looking at the whole distribution and all possible outcomes means recognizing there is a linear Gaussian and a non-linear Pareto world. The latter is where Black Swans and natural disasters unexpectedly emerge.
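
A quick, purely illustrative way to feel the difference between these two worlds is to sample from each; the distributions and parameters below are arbitrary choices for the sketch. In the thin-tailed Gaussian world an outcome ten times the average is essentially impossible, while in the heavy-tailed Pareto world such extremes keep showing up.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

gaussian = rng.normal(loc=1.0, scale=0.25, size=n)   # thin-tailed, "linear" world
pareto = rng.pareto(a=1.5, size=n) + 1                # heavy-tailed, Pareto world

for name, sample in [("Gaussian", gaussian), ("Pareto", pareto)]:
    mean = sample.mean()
    extreme_share = (sample > 10 * mean).mean()        # how often outcomes exceed 10x the average
    print(f"{name:8s} mean={mean:5.2f}  share of outcomes > 10x mean: {extreme_share:.6f}")
```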

Safety-II isn’t a fad
Not all Safety-I foundations are based on science. As Fred Manuele has proven, Heinrich’s Law is a myth. John Burnham’s book Accident Prone offers a history of the rise and fall of the accident-proneness concept. We could call them fads, but that’s difficult since they have been blindly accepted for so long.

This year marks the 30th anniversary of the Santa Fe Institute, where complexity science was born. At the May 2012 Resilience Lab I attended, Erik Hollnagel and Richard Cook introduced the RMLA elements of Resilience engineering: Respond, Monitor, Learn, Anticipate. They fit with Cognitive-Edge’s complexity view of resilience: fast recovery (R), rapid exploitation (M, L), early detection (A). This alignment has led to one way to operationalize Safety-II.

Safety-II isn’t ‘just theory’
As a pragmatist, I tend not to use the word “theory” in my conversations. Praxis is more important to me than spewing theoretical ideas. When dealing with complexity, the traditional Scientific Method doesn’t work. It’s neither deductive nor inductive reasoning but abductive — the logic of hunches, based on past experiences and making sense of the real world.

Safety-II isn’t the end of Safety-I
The focus of Safety-I is on robust rules, processes, systems, equipment, materials, etc. to prevent a failure from occurring. Nothing wrong with that. Safety-II asks what we can do to recover when failure does occur, plus what we can do to anticipate when failure might happen.

Resilience can be more than just bouncing back. Why return to the same place only to be hit again? Early exploitation means finding a better place to bounce to. We call it “swarming” or Serendipity if an opportunity unexpectedly arises.

Safety-II isn’t about ‘best practice’
“Best” practice does exist, but only in the Obvious domain of the Cynefin Framework. It’s the domain of intuition and of thinking fast in Daniel Kahneman’s book Thinking, Fast and Slow. What’s the caveat with best practices? There’s no feedback loop, so people just carry on as they did before. Some best practices become good habits. On the other hand, danger can emerge from the bad ones and one will drift into failure.

Safety-II and Resilience are about catching yourself before drifting into failure: being alert to weak signals (e.g., surprising behaviours, strange noises, unsettling rumours) and having physical systems and people networks in place to trigger anticipatory awareness.

Safety-II isn’t what ‘we already do’
“Oh, yes, we already do that!” is typically expressed by an expert. It might be a company’s line manager or a safety professional. There’s minimal value challenging the response.  You could execute an “expert entrainment breaking” strategy. The preferred alternative? Follow what John Kay describes in his book Obliquity: Why Our Goals are Best Achieved Indirectly.

Don’t even start by saying “Safety-II”. Begin by gathering stories and making sense of how things get done and why things are done a particular way. Note the stories about doing things the right way. Chances are pretty high most stories will be around Safety-I. There’s your data, your evidence that either validates or disproves “we already do”. Tough for an expert to refute.

Safety-II isn’t ‘them and us’
It’s not them/us, nor either/or, but both/and.  Safety-I+Safety-II. It’s Robustness + Resilience together.  We want to analyze all of the data available, when things go wrong and when things go right.

The evolution of safety can be characterized by a series of overlapping life cycle paradigms. The first paradigm was Scientific Management followed by the rise of Systems Thinking in the 1980s. Today Cognition & Complexity are at the forefront. By honouring the Past, we learn in the Present. We keep the best things from the previous paradigms and let go of the proven myths and fallacies.

Safety-II isn’t just about safety
Drinking a cup of coffee should be a total experience, not just tasting of the liquid. It includes smelling the aroma, seeing the Barista’s carefully crafted cream design, hearing the first slurp (okay, I confess.) Safety should also be a total experience.

Safety can emerge from efficient as well as effective conditions.  Experienced workers know that a well-oiled, smoothly running machine is low risk and safe. However, they constantly monitor by watching gauges, listening for strange noises, and so on. These are efficient conditions – known minimums, maximums, and optimums that enable safety to emerge. We do things right.

When conditions involve unknowns, unknowables, and unimaginables, the shift is to effectiveness. We do the right things. But what are these right things?

It’s about being in the emerging Present and not worrying about some distant idealistic Future. It’s about engaging the entire workforce (i.e., wisdom of crowds) so no hard selling or buying-in is necessary.  It’s about introducing catalysts to reveal new work patterns.  It’s about conducting small “safe-to-fail” experiments to  shift the safety culture. It’s about the quick implementation of safety solutions that people want now.

Signing off and heading to Starbucks.

My story: A day with Sidney Dekker

A story is an accounting of an event as experienced through the eyes, ears, cognitive biases, and paradigms of one person. This is my story about attending ‘A Day with Sidney Dekker’ at the Vancouver Convention Centre on Friday, September 19, 2014. The seminar was sponsored by the Lower Mainland chapter of CSSE (Canadian Society of Safety Engineering). I initially heard about the seminar through my associations with RHLN and the HFCoP.

I was aware that Sidney Dekker (SD) uses very few visual slides and provides no handouts, so I came fully prepared to take copious notes with my trusty iPad Air. This is not a play-by-play (or a blow-by-blow, if you take in SD’s strong opinions on HR, smart managers bent on controlling dumb workers, etc.). I’ve shifted content around to align with my thinking and with work I’ve done co-developing our Resilient Safety Culture course with Cognitive-Edge. My comments are in square brackets and italics.

SD: Goal today is to teach you to think about intractable issues in safety
SD: Don’t believe a word I say; indulge me today then go find out for yourself
SD: We care when bad people make mistakes but we should care more when good people make mistakes and why they do

Where are we today in safety thinking?


Here is a recent article  announcing a new safety breakthrough [http://bit.ly/1mVg19a]
LIJ Medical Center has implemented a safety solution that will be the be-all and end-all
A remote video auditing (RVA) in a surgical room developed by Arrowsight [http://bit.ly/1mVh2yf]

RVA monitors status every 2 minutes for tools left in patients, OR team mistakes
Patient Safety improved to a near perfect score
Culture of safety and trust is palpable among the surgical team
Real-time feedback on a  smartphone
RVA is based on the “bad apple” theory and model, and an underlying assumption that there is a general lack of vigilance
Question: Who looks at the video?
Ans: An independent auditor, who will cost money. Trade-off tension created between improving safety and keeping costs down
Assumption: He who watches knows best so OR team members are the losers
Audience question: What if the RVA devices weren’t physically installed but just announced; strategy is to put in people’s minds that someone is watching to avoid complacency
SD: have not found any empirical evidence that being watched improves safety. But it does change behaviour to look good for the camera
Audience question: Could the real purpose of the RVA be to protect the hospital’s ass during litigation cases?
SD: Very good point! [safety, cost, litigation form a SenseMaker™ triad to attach meaning to a story]
One possible RVA benefit: Coaching & Learning
If the video watchers are the performers, then feedback is useful for learning purposes
Airline pilots can ask to replay the data of a landing but only do so on the understanding there are serious protections in place – no punitive action can be a consequence of reviewing data
Conclusion: Solutions like RVA give the illusion of perfect resolution
 
How did we historically arrive at the way we look at safety and risk today?
[Reference SD’s latest book released June 2014:  “Safety Differently” which is an update of “Ten Questions About Human Error: A New View of Human Factors and System Safety”]
[SD’s safety timeline aligns the S-curve diagram developed by Dave Snowden http://gswong.com/?page_id=11]

Late Victorian Era

Beginning of measurement (Germany, UK) to make things visible
Discover industrial revolution kills a lot of people, including children
Growing concern with enormous injury and fatality problem
Scholars begin to look at models
1905 Rockwell: pure accidents (events that cannot be anticipated) seldom happen; someone has blundered or reversed a law of nature
Eric Farmer: carelessness or lack of attention of the worker
Oxford Human Factor definition: physical, mental, or moral shortcoming of the individual that predisposes the person

We still promote this archaic view today in programs like Hearts & Minds [how Shell and the Energy Institute promote world-class HSE]
campaigns with posters, banners, slogans
FAITH-BASED safety approach vs. science-based

In 2014, we can’t talk about physical handicaps but are still allowed to talk about mental and moral (Hearts and Minds) human deficiencies
SD: I find it offensive to be treated as infantile

1911 Frederick Taylor introduced Scientific Management to balance the production of pigs, cattle
Frank Gilbreth conducted time and motion studies
Problem isn’t the individual but planning, organizing, and managing
Scientific method is to decompose into parts and find 1 best solution [also known as Linear Reductionism]
Need to stay with 1 best method (LIJ’s RVA follows this 1911 edict)
Focus on the non-compliant individual using line supervision to manage dumb workers
Do not let people work heuristically [rule of thumb] but adamantly adhere to the 1 best method
We are still following the Tayloristic approach
Example: Safety culture quote in 2000: “It is generally acknowledged that human frailty lies behind the majority of our accidents. Although many of these have been anticipated by rules, procedures, some people don’t do what they are supposed to do. They are circumventing the multiple defences that management has created.”

It’s no longer just a Newton-Cartesian world

        Closed system, no external forces that impinge on the unit
        Linear cause & effect relationships exist
        Predictable, stable, repeatable work environment
        Checklists, procedures are okay
        Compliance with 1 best method is acceptable

Now we know the world is complex, full of perturbations, and not a closed system 

[Science-based thinking has led to complex adaptive systems (CAS) http://gswong.com/?wpfb_dl=20]

SD’s story as an airline pilot
Place a paper cup on the flaps (resilience vs. non-compliance) because resilience is needed by operators to finish the design of the aircraft
Always a gap between Work-as-imagined vs. Work-as-done [connects with Erik Hollnagel’s Safety-II]
James Reason calls the gap a non-compliance violation; we can also call that gap Resilience – people have to adapt to local conditions using their experience, knowledge, and judgement

SD: We pay people more money who have experience. Why?  Because the 1 best method may not work
There is no checklist to follow
Taylorism is limited and can’t go beyond standardization

Audience question: Bathtub curve model for accidents – more accidents involving younger and older workers. Why does this occur?
SD: Younger workers are beaten into complying but often are not told why, so they lack understanding
Gen Y doesn’t believe in authority and traditional sources of knowledge (prefers to ask a crowd, not an individual)
SD: Older worker research suggests expertise doesn’t create safety awareness. They know how close they can come to the margin but if they go over the line, slower to act. [links with Richard Cook’s Going Solid / Margin of Manoeuvre concept http://gswong.com/?wpfb_dl=18]

This is not complacency (a motivational issue) but an attenuation towards risk. They also may not be aware the margins have moved (example: in electric utility work, wood cross-arm materials have changed). Unlearning, teaching the old dog new tricks, is difficult. [Master builder/Apprenticeship model: while effective for passing on tacit knowledge, the danger lies in old guys becoming stale and passing on myths and old paradigms]

1920s & 1930s – advent of Technology & animation of Taylorism

World is fixed, technology will solve the problems of the world
Focus on the person using rewards and punishment, little understanding of deep ethical implications
People just need to conform to technology, machines, devices [think of Charlie Chaplin’s Modern Times movie]

Today: Behaviour-based Safety (BBS) programs still follow this paradigm re controlling human behaviour
Example: mandatory drug testing policy. What does this do to an organization?
In a warehouse, worker is made to wear a different coloured vest (a dunce cap)
“You are the sucker who lost this month’s safety bonus!” What happens to trust, bonding?

Accident Proneness theory (UK, Germany 1925)

Thesis is based on data and similar to the Bad Apple theory
[read John Burnham’s book http://amzn.to/1mV63Vn ]
Data showed some people are more involved in accidents than others (e.g., 25% cause 55%)
Idea was to target these individuals
Aligned with the eugenic thinking in the 1920s (Ghost of the time/spirit/zeitgeist)
        Identify who is fit and weed out (exterminate) the unfit [think Nazism]
Theory development carried on up to WWII
Question: What is the fundamental statistical flaw with this theory?
Answer: It assumes we all do the same kind of work and therefore all carry the same probability of incurring an accident
Essentially comparing apples with oranges [see the simulation sketched at the end of this section]
We know better – individual differences exist in risk tolerance
SD: A current debate in a medical journal: data show 3% of surgeons cause the majority of deaths
A similar UK article: 20% causing 80%
So, should we get rid of these accident-prone surgeons?
No, because the 3% may include the docs who are willing to take the risk to try something new to save a life
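
[My own illustration of a related statistical point, not from the seminar: even if every worker carried an identical underlying accident risk, chance alone would produce a skewed distribution of accident counts, so “25% cause 55%” is weak evidence of accident-prone individuals. A minimal Python sketch with hypothetical numbers:]

# Hypothetical simulation: 1000 workers, identical risk on every exposure.
import random

random.seed(1)
workers, exposures, p = 1000, 50, 0.02   # assumed values, for illustration only

# Each worker's accident count is the sum of independent low-probability events.
counts = sorted(
    (sum(random.random() < p for _ in range(exposures)) for _ in range(workers)),
    reverse=True,
)
top_quarter = counts[: workers // 4]
share = sum(top_quarter) / max(sum(counts), 1)
print(f"Top 25% of workers account for {share:.0%} of all accidents, "
      "despite identical underlying risk.")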

WWII Technologies

Nuclear, radar, rocketry, computers
Created  a host of new complexities, new usability issues

Example: Improvements to the B17 bomber
Hydraulic gear and flap technology introduced
However, belly-flop landings happened
Presumed cause was dumb pilots who required more training, checklists, and punishment
The desire was to remove these reckless, accident-prone pilots who were damaging the planes
However, pilots were in short supply; plus, give them a break – they had been shot at by an enemy trying to kill them
Focus shifted from human failure to design flaws. Why do 2 switches in the dashboard look the same?
In 1943 the switch was redesigned to prevent belly-flopping
Message: Human error is systematically and predictably connected to the features of the tools and products that people use. Bad design induces errors. It is better to intervene in the context of people’s work.

Safety thinking begins to change: What happens in the head is acutely important.
Now interested in cognitive psychology [intuition, reasoning, decision-making] not just behavioural psychology [what can be observed]
Today: Just Culture policy (human error, at-risk behaviour, reckless behaviour)

After-lunch exercise: a Greek airport 1770 m long

Perceived problem: breaking EU rules by taxiing too close to the road
White line – displaced threshold – don’t land before this line
Need to rapidly taxi back to the terminal to unload for productivity reasons (plane on-the-ground costs money)
Vehicular traffic light is not synced with plane landing (i.e., random event)

Question: How do you stop non-compliant behaviour if you are the regulator? How might you mitigate the risk?
SD: Select a solution approach with choices including Taylorism, Just Culture, Safety by Design
Several solutions were heard from the audience but there was no one best

SD: Conformity and compliance rules are not the answer, human judgment required
Situation is constantly changing – Tarmac gets hot in afternoon; air rises so may need to come in at a lower angle. At evening when cooler, approach angle will change
[Reinforces the nature of a CAS where agents like weather can impact  solutions and create emergent, unexpected consequences]
SD concern: Normalization of deviance – continual squeezing of the boundaries and gradual erosion of safety margins
They’re getting away with it, but eventually there will be a fatal crash
[reminds me of the frog that’s content to sit in the pot of water as the temperature is slowly increased. The frog doesn’t realize it’s slowly cooking to death until it’s too late]
[Discussed in SD’s Drift into Failure book and http://gswong.com/?p=754]
Back to the historical timeline…

1980s Systems Thinking

James Reason’s Swiss Cheese Model undermines our safety efforts
        Put in layers of defence which reinforces the 1940s thinking
        Smarter managers to protect the dumb workers
        Cause and effect linear model of safety
Example: 2003 Columbia space shuttle re-entry
        Normal work was done, not people screwing up (foam maintenance)
        There were no holes according to the Swiss Cheese Model
        Emergence: Piece of insulation foam broke off damaging the wing
Example: 1988 Piper Alpha oil rig
        Prior to the accident, it was recognized as an outstandingly safe and productive oil rig
        Explosion due to leaking gas killing 167
        “I knew everything was right because I never got a report anything was wrong”
       Looking for the holes in the Swiss Cheese Model again
       Delusion of being safe due to accident-free record

Many people carry an idealistic image of safety: a world without harm, pain, suffering
Setting a Zero Harm goal is counter-productive as it suppresses reporting and incentivizes manipulation of the numbers to look good

Abraham Wald example
Question: Where should we put the armour on a WWII bomber?
Wrong analysis: Let’s focus on the holes in the returning planes and put armour there to cover them up
Right analysis: Since the plane made it back, there’s no need for armour over those holes; the armour belongs where the returning planes show no holes, because planes hit there never made it back
Safety implication: Holes represent near-miss incidents (bullets that fortunately didn’t down the plane). We shouldn’t be covering the holes but learning from them

Safety management system (SMS)
Don’t rest on your laurels thinking you finally figured it out with a comprehensive SMS
Australian tunnelling example:
Young guy dies working near an airport
There were previous incidents with the contractor but no connection was made
He was doing normal work but was decapitated while finishing the design
An SMS will never pick this up

Don’t be led astray by the Decoy phenomenon
Only look at what we can count in normal work and ignore other signals
Example: Heinrich triangle – if we place our attention on the little incidents, then we will avoid the big ones (LTA, fatality) [now viewed as a myth like Accident Prone theory]
Some accidents are unavoidable  – Barry Turner 1998 [Man-made Disasters]
Example: Lexington accident [2006 Comair Flight 5191] when both technology and organization failed

Complexity has created huge, intractable problems
In a world of complexity, we can kill people without precursory events
[If we stay with the Swiss Cheese Model idea, then Complexity would see the holes on a layer dynamically moving, appearing, disappearing and layers spinning randomly and melting together to form new holes that were unknowable and unimaginable]

2014

Safety has become a bureaucratic accountability rather than an ethical responsibility
Amount of fraud is mounting as we continue measuring and rewarding the absence of negative incidents
Example: workers killed onsite are flown back home in a private jet to cover up and hide accidents
If we can be innovative and creative to hide injuries and fatalities, why can’t we use novel ways to think about safety differently?
Sense of injustice on the head of the little guy

Advances in Safety by Design
“You’re not lifting properly” compared with “the job isn’t designed properly”
An accident is a free lesson, a learning opportunity, not an HR performance problem
Singapore example: a green city which, to grow, must go vertically up. Plants grow on all floors of a tall building. How do you maintain them?
One approach is to punish the worker if accident occurs
Safety by Design solution is to design wall panels that rotate to maintain plants; no fall equipment needed
You can swat the mosquito but better to drain the swamp

Why can’t we solve today’s problems the same way we solved them back in the early 1900s?

What was valued in the Victorian Era

  1. People are a problem to control
  2. We control through intervention at the level of their behaviour
  3. We define safety as an absence of the Negative

Complexity requires a shift in  what we value today

  1. People are a solution, a resource
  2. Intervene in the context and condition of their work
  3. Instead of measuring and counting negative events, think in terms of the presence of positive things – opportunities, new discoveries, challenges of old ideas

What are the deliverables we should aim for today?

Stop doing work inspections that treat workers like children
It’s arrogant believing that an inspector knows better
Better onsite visit: Tell me about your work. What’s dodgy about your job?
Intervene in the job, not the individual’s behaviour.
Collect authentic stories.
[reinforces the practice of Narrative research http://gswong.com/?page_id=319]
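
[A minimal sketch of how an authentic story, once signified against a triad like the safety–cost–litigation one mentioned earlier, becomes a plottable data point. This is my own illustration with hypothetical labels and values, not a real SenseMaker™ API:]

from math import sqrt

# Corners of the triad triangle (hypothetical anchor labels).
CORNERS = {"Safety": (0.0, 0.0), "Cost": (1.0, 0.0), "Litigation": (0.5, sqrt(3) / 2)}

def story_to_point(weights):
    """Map triad weights (summing to roughly 1) onto x, y coordinates inside the triangle."""
    total = sum(weights.values())
    x = sum(w / total * CORNERS[name][0] for name, w in weights.items())
    y = sum(w / total * CORNERS[name][1] for name, w in weights.items())
    return x, y

# Hypothetical front-line story and where its teller placed it on the triad.
story = {
    "text": "Handrail missing on the north scaffold; crew improvised a tie-off.",
    "signifier": {"Safety": 0.7, "Cost": 0.2, "Litigation": 0.1},
}
print(story_to_point(story["signifier"]))   # one dot on the triad plot, among many stories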

Regulators need to shift their deliverables away from engaging reactively (getting involved after the accident has occurred), looking for root causes, and formulating policy constraints
Causes are not things found objectively; causes are constructed by the human mind [and therefore subject to cognitive bias]
Regulators should be proactively co-evolving the system [CAS]
Stop producing accident investigation reports closing with useless recommendations to coach and gain commitment
Reference SD’s book: Field Guide to Investigating Accidents – what you look for is what you find

Question: Where do we place armour on a WWII bomber if we don’t patch the holes?
Answer: where we can build resilience by enabling the plane to take a few hits and still make it back home
[relates to the perspective of resilience in terms of the Cynefin Framework http://gswong.com/?page_id=21]

Resilience Engineering deliverables

  1. Do we keep risk awareness alive? Debrief and more debrief on the mental model? Even if things seem to be under control? Who leads the debriefing? Did the supervisor or foreman do a recon before the job starts to lessen surprise? [assessing the situation in the Cynefin Framework Disorder domain]
  2. Count the amount of rework done – it can be perceived as a leading indicator although it really lags, since the initial work has already been performed
  3. Create ways for bad news to be communicated without penalty. Stat: 83% of plane accidents occur when pilots are flying and 17% when co-pilots are. Institute the courage to speak up and say no. Stop bullying people into silence. It is a measure of trust and empowers our people. Develop other ways such as role-playing simulations and rotation of managers, which identify normalization of deviance (“We may do that here but we don’t do that over there”)
  4. Count the number of fresh perspectives and opinions that are allowed to be aired. Count the number of so-called best practice rules that are intelligently challenged. [purpose of gathering stories in a Human Sensor Network http://gswong.com/?page_id=19]
  5. Count the number or % of time spent on human-to-human relationships – not formal inspections but honest, open conversations that are free of organizational hierarchy. [a toy tally of such indicators is sketched after this list]
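
[A toy tally, my own sketch with made-up fields, of how the “count the positives” indicators above might be rolled up from debrief records:]

from collections import Counter

# Hypothetical debrief records a crew might log after each job.
debriefs = [
    {"rework_items": 0, "bad_news_raised": True,  "best_practice_challenged": False, "hierarchy_free_talk": True},
    {"rework_items": 2, "bad_news_raised": False, "best_practice_challenged": True,  "hierarchy_free_talk": True},
    {"rework_items": 1, "bad_news_raised": True,  "best_practice_challenged": False, "hierarchy_free_talk": False},
]

totals = Counter()
for d in debriefs:
    totals["rework items"] += d["rework_items"]
    totals["bad news raised without penalty"] += d["bad_news_raised"]
    totals["best practices challenged"] += d["best_practice_challenged"]
    totals["hierarchy-free conversations"] += d["hierarchy_free_talk"]

print(dict(totals))
print(f"Hierarchy-free conversations: {totals['hierarchy-free conversations'] / len(debriefs):.0%} of debriefs")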

Paradigm Shift:

Spend less time and effort on things that go wrong [Safety-I]
Invest more effort on things that go right which is most of the time [Safety-II]

Final message:

Don’t do safety to satisfy Bureaucratic accountability
Do safety for Ethical responsibility reasons

There were over 100 in attendance, so theoretically there are over 100 stories that could be told about the day. Some will be similar to mine, and my mind is open to accepting that some will be quite different (what the heck was Gary smoking?). But as we know, the key to understanding complexity is Diversity – the more stories we seek and allow to be heard, the better representation of the real world we have.

A pathetic safety ritual endlessly recycled

Dave Johnson is Associate Publisher and Chief Editor of ISHN, a monthly trade publication targeting key safety, health and industrial hygiene buying influencers at manufacturing facilities of all sizes. In his July 9 blog (reprinted below), he laments how the C-suite continues to take a reactive rather than proactive approach to safety. Here’s a reposting of my comments.

Let’s help the CEOs change the pathetic ritual

Dave: Your last paragraph says it all. We need to change the ritual. The question is not why or what, but how. One way is to threaten CEOs with huge personal fines or jail time. For instance, in New Zealand a new Health and Safety at Work Act is anticipated to be passed in 2014. The new law will frame duties around a “person conducting a business or undertaking”, or “PCBU”. The Bill as currently drafted does not neatly define “PCBU”, but the concept would appear to cover employers, principals, directors, even suppliers; that is, people at the top. A tiered penalty regime under the new Act could see a maximum penalty of $3 million for a body corporate and $600,000 and/or 5 years’ imprisonment for an individual. Being thrown into jail because of unsafe behaviour by a contractor’s new employee whom you’ve never met would certainly get your attention.

But we know the pattern: Initially CEOs will order more compliance training, inspections, more safety rules. Checkers will be checking checkers. After a few months of no injuries, everyone will relax and as Sidney Dekker cautioned, complacency will set in and the organization will drift to failure. Another way is to provide CEOs with early detection tools with real-time capability. Too often we read comments in an accident report like “I felt something ominous was about to happen” or “I told them but nobody seemed to listen.”

CEOs need to be one of the first, not the last, to hear about a potential hazard that has been identified but is not being addressed. We now have the technology to allow an organization to collect stories from the front line and immediately convert them to data points which can be visually displayed. Let’s give CEOs and higher-ups the ability to walk the talk. In addition, we can apply a complexity-based approach where traditional RCA investigative methods are limited. Specifically, we need to go “below the water line” when dealing with safety culture issues to understand why rituals persist.

Gary Wong
July 16, 2014

G.M.’s CEO is the latest executive to see the light

By Dave Johnson July 9, 2014

Wednesday, June 11, 2014, at the bottom right-hand corner of the section “Business Day” in The New York Times, is a boxed photograph of General Motors’ chief executive Mary T. Barra. The headline: “G.M. Chief Pledges A Commitment to Safety.”

Nothing against Ms. Barra. I’m sure she is sincere and determined in making her pledge. But I just shook my head when I saw this little “sidebar” box and the headline. Once again, we are treated to a CEO committing to safety after disaster strikes, innocent people are killed (so far G.M. has tied 13 deaths and 54 accidents to the defective ignition switch), and a corporation’s reputation is dragged through the media mud. The caption of Ms. Barra’s pic says it all: “…Mary T. Barra told shareholders that the company was making major changes after an investigation of its recall of defective small cars.”

Why do the commitments, the pledges and the changes come down from on high almost invariably after the fact?

You can talk all you want about the need to be proactive about safety, and safety experts have done just that for 20 or 30 or more years. Where has it gotten us, or more precisely, what impact has it had on the corporate world?

Talk all you want
Talk all you want about senior leaders of corporations needing to take an active leadership role in safety. Again, safety experts have lectured and written articles and books about safety leadership for decades. Sorry, but I can’t conjure the picture of most execs reading safety periodical articles and books. I know top organization leaders have stressful jobs with all sorts of pressures and competing demands. But I have a hard time picturing a CEO carving out reading time for a safety book in the evening. Indeed a few exist; former Alcoa CEO Paul O’Neill is the shining example. But they are the exceptions that prove the rule. The National Safety Council’s Campbell Institute of world class safety organizations and CEOs who “get it” are the exceptions, too, I’d assert.

And what is the rule? As a rule, proven again and again ad nauseam, top leaders of large corporations only really get into safety when they’re forced into a reactive mode. For the sake of share price and investor confidence, they speak out to clean up a reputational mess brought about by a widely publicized safety tragedy. Two space shuttles explode. Refineries blow up. Mines cave in. The incident doesn’t have to involve multiple fatalities and damning press coverage. I’ve talked with and listened to more than one plant manager or senior organization leader forced to make that terrible phone call to the family of a worker killed on the job, and who attended the funeral. The same declaration is stressed time and again: “Never again. Never again am I going to be put in the position of going through that emotional trauma. Business school never prepared me for that.”

“In her speech to shareholders, Ms. Barra apologized again to accident victims and their families, and vowed to improve the company’s commitment to safety,” reported The New York Times. “Nothing is more important than the safety of our customers,” she said. “Absolutely nothing.”

Oh really? What about the safety of G.M.’s workers? Oh yes, it’s customers who drive sales and profits, not line workers. This is cold business reality. Who did G.M.’s CEO want to get her safety message across to? She spoke at G.M.’s annual shareholder meeting in Detroit. Shareholders’ confidence needed shoring up. So you have the tough talk, the very infrequent public talk, about safety.

Preaching to the choir
I’ve just returned from the American Society of Safety Engineers annual professional development conference in Orlando. There was a raft of talks on safety leadership, what senior leaders can and should do to get actively involved in safety. There were presentations on the competitive edge safety can give companies. If an operation is run safely, there are fewer absences, better morale, good teamwork, workers watching out for each other, cohesiveness, strong productivity and quality and brand reputations. The classic counter-argument to the business case was also made: safety is an ethical and moral imperative, pure and simple.

But who’s listening to this sound advice and so-called thought leadership? As NIOSH Director Dr. John Howard pointed out in his talk, the ASSE audience, as with any safety conference audience, consists of the true believers who need no convincing. How many MBAs are in the audience?

Too often the moral high ground is swamped by the short-term, quarter-by-quarter financials that CEOs live or die by. Chalk it up to human nature, perhaps. Superior safety performance, as BST’s CEO Colin Duncan said at ASSE, results in nil outcomes. Nothing happens. CEOs are not educated to give thought and energy to outcomes that amount to nothing. So safety is invisible on corner office radar screens until a shock outcome does surface. Then come the regrets, the “if only I had known,” the internal investigation, the blunt, critical findings, the mea culpas, the “never again,” the pledge, the commitment, the vow, the tough talk.

There’s that saying, “Those who do not learn from history are bound to repeat it.” Sadly, and to me infuriatingly, a long history of safety tragedies has not proven to be much of a learning experience for many corporate leaders. “Ah, that won’t happen to us. Our (injury) numbers are far above average.” Still, you won’t have to wait long for the next safety apology to come out of mahogany row. It’s a pathetic ritual endlessly recycled.