Tag Archives: Narrative

Black Elephants in our Safety Systems

COVID-19 is a black elephant. A black elephant is a cross between a black swan and the proverbial elephant in the room. The black elephant is a problem that is actually visible to everyone, but no one wants to deal with it, and so they pretend it is not there. When it blows up as a problem, we all feign surprise and shock, behaving as if it were a black swan [1].

Nassim Nicholas Taleb popularized the black swan metaphor to describe an event that is rare, unexpected, and has a large negative, game-changing impact. COVID-19 is an infectious disease that waited for the right conditions to emerge, like an accident just waiting to happen. It reminds me of Todd Conklin’s statement: “Workers don’t cause failures. Workers trigger latent conditions that lie dormant in organizations waiting for this specific moment in time.”

Taleb also noted that a black swan event is often inappropriately rationalized after the fact with the benefit of hindsight. This should ring a bell for those with accident investigation experience. It’s the counterfactual argument when work-as-imagined is compared to work-as-done. What’s ignored are the normal variability adjustments the victim had to make due to unexpected changes in the environment.

The emergence of COVID-19 is not a black swan. We shouldn’t have been surprised but we were. There have been 11 pandemics from the Plague of Justinian (541 – 750 AD) to Ebola (2014-2016).[2] In 2015, Bill Gates warned all nations that we have invested very little in a system to stop an epidemic.[3] In the October 2019 Event 201 global pandemic exercise, a call to action was outlined in seven recommendations.[4] Some countries have acted while others have chosen to ignore the peril for political-economic reasons deemed higher priority.

Those that acted installed seemingly robust fail-safe disaster response systems. Scarred by the SARS epidemic that erupted in 2002, China thought it had an airtight response process in place, free from political meddling. However, Wuhan’s local health bureaucrats, not wanting to raise an alarm and cause embarrassment, suppressed automatic reporting. This fear kept Beijing in the dark and delayed the response. In other words, their fail-safe system failed. [5]

The real surprise is finding out in a painful way how intricately connected we are globally. Our ground, air, and water transportation systems made it easy for COVID-19 to spread exponentially. “Going viral” has sickened and killed thousands, crumpled economies, and plunged societal life into a fearful limbo with no easily discernible end in sight. Tightly coupled supply chains we take for granted are disrupted. Westerners are feeling what “made in China” means when local store shelves remain empty. Everyone is having a firsthand, excruciating experience of surviving in a complex adaptive system (CAS).

Every organization is a CAS. Every industry is a CAS. So is every state, country, and nation. Civilization is a CAS. We are many complex adaptive systems entangled to form one mega CAS called planet Earth. That idea was reinforced by Blue Marble, the image of the Earth taken from Apollo 17. Boomers felt it when Disney showcased It’s a Small World at the 1964 World’s Fair. (Now that we’ve mentioned it, is the tune streaming in your head too? Sorry about that.)

In the spirit of Safety Differently, let’s ask our 3 fundamental questions: Why? What? How? and pose different, non-traditional responses.

WHY… will we face more Black Elephants?

The emphasis in a CAS is on relationships between agents. Besides humans, other agents are machines, events, and ideas. Relationships are typically non-linear, not if-then causal, and are strengthened or weakened by fast feedback interactions. The Butterfly effect is a small initial change yielding a huge impact once a tipping point is reached. Non-linear relationships can be exponential, like the spread of COVID-19. Many relationships in the real world follow a Pareto distribution, a power law that appears as a straight line on logarithmic scales. Catastrophes like Black Elephants are rare in frequency but huge in severity. These are also called outliers, as they lie outside the realm of regular expectations of the Gaussian world. So it’s not a matter of if they will happen but when.
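
To make the Gaussian/Pareto contrast concrete, here is a minimal Python sketch. It is my own toy illustration, not something from the article or its references, and the distribution parameters are arbitrary assumptions:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Thin-tailed Gaussian world: extreme events are vanishingly rare.
    gaussian = rng.normal(loc=0.0, scale=1.0, size=n)

    # Heavy-tailed Pareto world: rare-but-huge outliers are expected
    # (shape a < 2 means the variance is infinite).
    pareto = rng.pareto(a=1.5, size=n)

    threshold = 10 * gaussian.std()
    print("Gaussian samples beyond 10 sigma:", int((np.abs(gaussian) > threshold).sum()))
    print("Pareto samples beyond the same cutoff:", int((pareto > threshold).sum()))

The Gaussian count is essentially zero while the Pareto count runs into the thousands; the Black Elephant lives in that second column.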

The level of complexity increases exponentially every time a new relationship is formed. For humans it could be the simple act of friending on Facebook or accepting a LinkedIn invitation. Or a person you don’t know choosing to follow you on Twitter. Annoying outcomes from new connections are more spam emails and unsolicited ads. More disconcerting are computer-programmed machines interacting with other smart parts, sometimes in hidden ways. When algorithms collide, no one really knows what will happen. Flash market crashes. Non-ethical AI. Boeing 737 Max 8. Or in one hypothesis, the cutting down of rain forests which allowed once-contained diseases like COVID-19 to infect wild animals. Hmm, workers trigger latent conditions that lie dormant…
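
As a back-of-the-envelope illustration (mine, not from the article) of how quickly potential interactions pile up as agents join a network, consider counting the possible links and groupings in Python:

    # Toy numbers: potential interactions as agents join a network.
    for n in (5, 10, 20, 40):
        pairwise = n * (n - 1) // 2    # possible one-to-one links
        groupings = 2 ** n - n - 1     # possible subgroups of 2 or more agents
        print(f"{n:>3} agents: {pairwise:>4} pairwise links, {groupings:,} possible groupings")

Forty agents already allow over a trillion possible groupings, so it is no wonder that no one really knows what will happen when algorithms collide.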

Realistically organizations are unable to develop emergency plans for every disaster identified. Even if they had unlimited time and money, there’s no guarantee that the recovery solutions will be successful. And by definition, we can’t plan for unknowables and unimaginables.

WHAT…can we do Today?

The starting point is understanding the present situation of the CAS. The Cynefin Framework has been widely used around the world in contexts as diverse as the boardrooms of international fashion houses, militaries, NGOs, and SWAT teams on city streets. For a brief explanation of the sense-making framework, check out Jennifer Garvey Berger’s YouTube video.

The above graphic maps the type of safety decisions made and actions executed in each Cynefin domain. No domain is better than any other. The body of knowledge that Safety-I has provided sits clearly in the Obvious and Complicated domains. Many of the Safety-II advancements reside in the Complicated domain as experts wrestle with new processes and tools. Whether these can be made easy for anyone to use, and thus moved into the Obvious domain, remains to be seen. A major accomplishment would be shifting the front-line worker mindset to include what goes right when planning a job.

Now let’s apply sense-making in our battle with COVID-19. Be aware this is a dynamic exercise with relationships, events, and ideas constantly changing or emerging. Also note that the Cynefin Framework is a work-in-progress. The Obvious domain is being renamed as the Clear domain.

Self-isolation is in the Clear domain; you haven’t been tested, so avoid others as a precautionary measure. Self-quarantine means you have tested positive; your act is to monitor your condition and respond as you get better or worse.

Conspiracy theorists are in the far corner of the Clear domain. They believe their ordered life mindset has been deliberately disrupted. Strongly held beliefs range from political party subterfuge to willingness to risk death to save the economy to blaming 5G.

At the time of this writing, experts have not agreed whether the mandatory wearing of masks will help or hinder the COVID-19 battle. Two resistance movements are shown. Both reside in the Cynefin Complex domain near the Chaotic border. Not all Coronavirus challenges hoping to go viral have been positive. Licking toilet seats may have garnered lots of social media attention for the challenge creator, but it plunged one follower into the Chaotic domain when he tested positive.[6] Some who attended parties with a feeling of invincibility have also fallen into the Chaotic domain.[7]

The Disorder domain is associated with confusion and not knowing what to do. Many myths, hoaxes, and fake news items include expert quotes and references. When eventually exposed as fiction rather than fact, they can raise the level of personal frustration and anxiety.

One fact is that your overarching safety strategy hasn’t changed: strengthen Robustness + build Resilience. As this article is about surviving change, let’s focus our attention on 3 capabilities of a resilient organization.

With the COVID-19 battle still raging, the chances of doing a fast recovery and returning to the original operating point [A] are practically slim to none.

If we know that we have black elephants, then we have an early detection system in place [C]. Being caught with our pants down simply means the messenger doesn’t have enough trust and respect, or the organization’s bureaucracy is too dominant and overbearing. Path [B] is shaping the “new normalcy” for the organization. This entails asking key questions and exploring options. Change Management specialist Peter Hadas has posed a set of questions:

  • If we suddenly had to double our capacity, could we do it?
  • Are our systems well connected enough to suddenly handle a spike in capacity?
  • If I had to scale back to just the 20% of my employees that I would absolutely need to rebuild my company from scratch, who would that be?
  • Of the remaining 80% of your staff, who has mission-critical information in their heads that is not written down anywhere?

In his blog Peter cites case studies where a downturn was perceived not as a calamity but an opportunity. In terms of safety, let’s ask:

  • What black elephants will be unleashed when our organization changes to the new normalcy?
  • What existing latent conditions will enable safety or danger to emerge due to a new tipping point in the new normalcy?
  • When we return to work or if we need new recruits, what will be different in our safety induction and orientation programs in the new normalcy?
  • What human-imposed constraints such as safety policies, standards, rules, procedures need to adapt in the new normalcy?

HOW…can we operationalize the What?

Top Management is under fire to demonstrate leadership and take charge in a time of crisis. First step: Stop the bleeding to get out of the Cynefin Chaotic domain. Now what? Craft an idealistic vision of the new normalcy? Direct subordinates to “make it so”? Well, this would be a Complicated domain approach. Since the future is uncertain and unpredictable, developing the new normalcy happens in the Cynefin Complex domain. Instead we manage the evolutionary potential of the Present and shape the new normalcy on a different scaffold. Classical economics? Evonomics? Doughnut Economics? How about Narrative economics?

One of the principles of Safety Differently is: Safety is not defined by the absence of accidents, but by the presence of capacity. Adaptive Safety means building adaptive capacity. When working in the Complex domain, one capacity is possessing useful heuristics to cope with uncertainty. Heuristics are simple, efficient rules we use to form judgments and make decisions. These are mental shortcuts developed from past successful experiences. The upside is they can be hard measures of performance if used correctly (i.e., not abstract and no gamification). The downside is that heuristics can blind us or may not work in novel situations. So don’t get locked into a trusty but rusty heuristic, and be willing to adapt.

To operationalize the What, let’s apply 3 simple rules drilled into US soldiers to improve the chances of survival in war: Keep moving, stay in communication, and head to higher ground.

Keep moving

Okay, we’ve stopped the bleeding. We can’t, however, sit still hoping to wait until things settle out. Nor can we go into analysis paralysis looking at a multitude of options. We don’t want to be an easy target or prey for a competitive predator. Let’s speedily move into the Complex domain and head towards this new normalcy.

Before we move, we need a couple of tools – a compass and a map to plot our path [B]. As shown by Palladio’s Amber Compass, a complexity thinking compass is different. In our backpack will be tools to probe the system (mine detector?), launch experiments (flares?), and that map not only to plot our direction but monitor our progress.

In the military world, each soldier on the battlefield is equipped with a GPS device. This capacity enables the Command centre to monitor the movement of troops in real-time on a visual screen. How might we build a similar map of movement towards a new normalcy?

Stay in Communication

In the battle against COVID-19 to stop the bleeding, numerous organizations immediately enacted an age-old heuristic: terminate, lay off, stand down. This action is akin to removing soldiers from the battlefield for financial reasons. The policy is based on the traditional reductionistic paradigm of treating people as replaceable parts of a mechanistic system. It reinforces a Classical Management theory of treating humans as expenses, not assets. (Suggestion: Organizations that state “Our people are our greatest resource” should update it to “our greatest expendable resource.”)

In novel times like today, perhaps it calls for a different heuristic. Denmark, as a nation-level CAS leader, decided to “Freeze the Economy.”[8] The Danish government told private companies hit by the effects of the pandemic that it would pay 75% of their employees’ salaries to avoid mass layoffs. The idea is to enable companies to preserve their relationships with their workers. It’s going to be harder to have a strong recovery if companies have to spend time hiring back workers that have been fired. Other countries like Canada [9] and the US [10] are following Denmark’s lead and launching modified experiments. From a complexity perspective, this is a holistic paradigm where the whole is greater than the sum of its parts.

No matter what employee policy has been invoked, cutting payroll costs does not necessarily mean disengaging. Along the same lines, practicing social distancing does not mean social disconnecting. One can stay in communication. But let’s communicate differently. Ron Gantt wrote that it’s time for an anthropological approach. We agree. Let’s become physically-distanced storytellers and ethnographers.

Cognitive Edge is offering the adaptive capacity of SenseMaker® for COVID-19 to collect real-life stories from employees working remotely or temporarily removed from the battlefield. It’s an opportunity to show empathy, sense wellbeing, and build early detection capability [C]. Most of all, we can use stories to generate our map of movement towards a new normalcy.

Head to higher ground

This is a story-generated 2D contour map from Safety Pulse. Each dot is a story that can be read by clicking on it. The red X marks a large cluster of undesirable stories: rules are being bent and little work is getting done. In our military analogy, we have soldiers on a hill, but it’s the wrong high ground.

For illustrative purposes, the higher ground or new normalcy is the green checkmark where quality work is being completed on-time, on-budget, and within the safety rules. Thanks to the map, we now have our compass heading. The question is how do we make it easy to head to the higher ground? In other words, how might we get fewer stories at red X and more stories like green checkmark?

The challenge is the ground to be traversed is an entanglement of work-as-imagined safety policies, standards, rules, procedures and work-as-done practices. W-A-I is created by the Blunt end and W-A-D by the Sharp end of the safety spear. Another twisted layer is the formal vertical hierarchy and informal horizontal networks. Entanglement implies that any new normalcy thinking ought to include everyone and shines a different light on Diversity.

Heading down a Top-down-only route to a new normalcy could cause inadvertent harm, since the top is often clueless about how work really gets done in the organization. Read Peter’s firsthand experience of a company letting go of people with mission-critical tacit knowledge. Similarly, a Bottom-up-only route may fail to consider PESTLE tensions that impact the entire safety spear.

Shaping the new normalcy is not a change initiative with an authoritative few making a huge Ego bet. Think carefully about consultants recommending a huge disruptive transformation change. People are in a fragile state trying to personally survive COVID-19. Think the opposite. It’s Eco and empowering all in the CAS. Think not about doing it to people but doing it with people.

The Executive’s role is to mobilize diverse learning teams and act as central coordinator. These would not be large teams subjected to the Order system’s linear forming -> storming -> norming -> performing crapola. Teams would be 3 people (a trio) with the capacity of diverse knowledge, empowered to make decisions within their coherent unit. A cardinal rule would be that a trio must inform Central what decisions they have made, as sketched below. Diversity here is not necessarily along the typical demographic lines of gender, age, and geographic location, but in bodies of knowledge and informal network relationships. For example, a trio could be comprised of a senior executive, a new employee fresh out of college, and a front-line tradesperson. Another could be an IT specialist, a mechanic, and a public relations coordinator.
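
If you wanted to prototype that cardinal rule, even a minimal decision log would do. The sketch below is purely illustrative; the class and field names are my own assumptions, not part of any method described here:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Decision:
        trio: str      # e.g. "senior exec + new grad + tradesperson"
        summary: str   # what the trio decided
        made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    class Central:
        """Central coordinator: every trio's decisions land here."""
        def __init__(self) -> None:
            self.log: list[Decision] = []

        def inform(self, decision: Decision) -> None:
            self.log.append(decision)  # the cardinal rule: decisions are reported, not approved

    central = Central()
    central.inform(Decision("trio-A", "launch safe-to-fail experiment on permit wording"))
    print(len(central.log), "decision(s) logged")

The point of the rule is visibility, not approval: Central sees every decision but does not gate it.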

Each trio would have the skillset to design safe-to-fail experiments. But before launching, they would engage the Unintended Consequences trio. Good candidates for the UC trio would be the CFO, a risk analyst, and the person with the reputation of being the noisiest complainer in the organization. Using a process like Ritual Dissent or Red Teaming, the UC trio has the task of pointing out the risks and any harm that a safe-to-fail experiment might unleash. Their role isn’t to kill the proposal but to improve it.

Central would have the capacity to navigate using a near real-time dashboard with maps generated by stories. All trios would have access to the maps to learn how their experiments are enabling desirable aspects of a new normalcy to emerge.

Informed by authentic voices (i.e., click on a map dot and read its story), Central would make the strategic choice of what will be in or out of the new normalcy. Trios, or more probably combinations of trios, would form innovation teams to implement.

Innovation teams would carry on along resilient path [B], move into the Complicated domain, and apply project management practices. Key activities would be documenting, adapting constraints (i.e., policies, standards, rules, procedures), and training the workforce in the patterns of the new normalcy.

The continuous flow of stories is required for the dashboard and maps to maintain their near real-time value to manage the PM portfolio. Establishing a human sensor network would also fuel the Early Detection capability [C]. Imagine being able to respond to “I’ve got a bad feeling about this” attitudinal stories well before they turn into “We wouldn’t be in this mess if someone had listened to me.”

In the new normalcy, everything would be up for scrutiny by trios, including venerable best practices, sacred cows and, of course, black elephants. Small, simple changes, such as replacing weekly reports with stories entered whenever (24/7/365) and wherever (office/field/home), can go a long way.

Act now. Act quickly. Act differently.

References:

  1. The Black Elephant Challenge for Governments. Peter Ho, former head of civil service for the city of Singapore. 2017.
  2. Pandemics that Changed the Course of Human History. Business Insider. 2020-03-20.
  3. The next outbreak? We’re not ready. Bill Gates. TED Talk. 2015-04-03.
  4. Public-private cooperation for pandemic preparedness and response. Event 201 recommendations. 2019-10.
  5. China Created a Fail-Safe System to Track Contagions. It Failed. The New York Times. 2020-03-29.
  6. ‘Corona Challenge’: Man Tests Positive For COVID-19 Days After Licking Toilet Bowl. 2020-03-26.
  7. Florida college students test positive for coronavirus after going on spring break. CBS News. 2020-03-23.
  8. Denmark’s Idea Could Help the World Avoid a Great Depression. The Atlantic. 2020-03-21.
  9. Trudeau promises 75% wage subsidy for businesses hit by coronavirus pandemic. Global News. 2020-03-27.
  10. The government will now pay businesses to keep workers on payrolls (and hire back ones they laid off). Fast Company. 2020-04-02.

Evolution of Safety

Yesterday I was pleased to speak at the Canadian Society of Safety Engineering (CSSE) Fraser Valley branch dinner. I chose to change the title from “the Future of Safety” to “the Evolution of Safety.” Slides are available in the Downloads or click here. The key messages in the four takeaways are listed below.

1. Treat workers not as problems to be managed but solutions to be harnessed.

Many systems have been designed with the expectation that humans will perform perfectly, like machines. It’s a consequence of the Systems Thinking era based on an Engineering paradigm. Because humans are error prone, we must be managed so that we don’t mess up the ideal flow of processes using technologies we are trained to operate.

Human & Organizational Performance (HOP) Principle #1 acknowledges people are fallible. Even the best will make mistakes. Despite the perception that humans are the “weakest link in the chain”, harnessing our human intelligence will be critical for system resilience: the capacity to either detect or quickly recover from negative surprises.

As noted in the MIT Technology Review, “we’re seeing the rise of machines with agency, machines that are actors making decisions and taking actions autonomously…” That means things are going to get a lot more complex with machines driven by artificial intelligence algorithms. Smart devices behaving in isolation will create conflicting conditions that enable danger to emerge. Failure will occur when a tipping point is passed.

MIT Professor Nancy Leveson believes technology has advanced to such a point that the routine problem-solving methods engineers had long relied upon no longer suffice. As complexity increases within a system, linear root cause analysis approaches lose their effectiveness. Things can go catastrophically wrong even when every individual component is working precisely as its designers imagined. “It’s a matter of unsafe interactions among components,” she says. “We need stronger tools to keep up with the amount of complexity we want to build into our systems.” Leveson developed her insights into an approach called System-Theoretic Process Analysis (STPA), which has rapidly spread through private industries and the military. It would be prudent for Boeing to apply STPA in its 737 Max 8 investigation.

So why is it imperative that workers be seen as resourceful solutions? Because complex systems will require controls that use the immense power of the human brain to quickly recognize hazard patterns, make sense of bad situations created by ill-behaving machines, and swiftly apply heuristics to prevent plunging into the Cynefin Chaotic domain.

2. When investigating, focus on learning from the gap between normal deviation and the hazard, and avoid the blaming counterfactual.

If you read or hear someone say:
“they shouldn’t have…”
“they could have…”
“they failed to…”
“if only they had…”
it’s a counterfactual. In safety, counterfactuals are huge distractions because they focus on what didn’t happen. As Todd Conklin explains, it’s the gap between the black line (work-as-imagined) and the blue line (work-as-done). The wavy blue line indicates that a worker must adapt performance in response to varying conditions. The changes hopefully enable safety to emerge so that the job can be successfully completed. In the Safety-II view, this is deemed normal deviation. Our attention should not be on “what if” but on “what did”.

The counterfactual provides an easy path for assigning blame. “If only Jose had done it this way, then the accident wouldn’t have happened.”  Note to safety professionals engaged in accident investigations: Don’t give decision makers bullets to blame but information to learn. The learning from failure lessons are in the gap between the blue line and the hazard line.

3. Be a storylistener and ask storytellers:
How can we get more safety stories like these, fewer stories like those?

I described the ability to generate 2D contour maps from safety stories told by the workforce. The WOW factor is that we can now visually see safety culture as an attitudinal map. We can plot a direction towards a safety vision and monitor our progress. Click here for more details.

Stories are powerful. Giving the worker a voice to be heard is an effective form of employee engagement. How safety professionals use the map to solve safety issues is another matter. Will it be Ego or Eco? It depends. Ego says I must find the answer. Eco says we can find the answer.

Ego thrives in hierarchy, an organizational structure adopted from the Church and Military. It works in the Order system, the Obvious and Complicated domains of the Cynefin Framework. Just do it. Or get a bunch of experts together and direct them to come up with viable options. Then make your choice.

Safety culture resides in the Cynefin Complex domain. No one person is in charge. Culture emerges from the relationships and interactions between people, ideas, events, and as noted above, machines driven by AI algorithms. Eco thrives on diversity, collaboration, and co-evolution of the system.

An emerging role for safety professionals is helping Ego-driven decision makers understand they cannot control human behaviour in a complex adaptive system. What they control are the system constraints imposed as safety policies, standards, rules. They also set direction when expressing they want to hear “more safety stories like these, fewer stories like those.”

And lest we forget, it’s not all about the workers at the front line. Decision makers and safety professionals are also storytellers. What safety stories are you willing to share? Where would your stories appear as dots on the safety culture map?

Better to be a chef and not a recipe follower.

If Safety had a cookbook, it would be full of Safety Science recipes and an accumulation of hints and tips gathered over a century of practice. It would be a mix of still useful, questionable (pseudoscience), emerging, and recipes given myth status by Carsten Busch.

In the Cynefin Complex and Chaotic domains, there are no recipes to follow. So we rely on heuristics to make decisions. Some are intuitive and based on past successes – “It’s worked before so I’ll do it again.” Until they don’t, because conditions that existed in the past no longer hold true. More resilient heuristics are backed by natural science laws and principles, so they withstand the test of time.

By knowing the art and principles of cooking, a chef accepts the challenge of ambiguity and can adapt to unanticipated conditions such as missing ingredients, wrong equipment, last-minute diet restrictions, and so on.

It seems logical that safety professionals would want to be chefs. That’s why I’m curious that a highlight of the study An ethnography of the safety professional’s dilemma: Safety work or the safety of work? is: “Safety professionals do not leverage safety science to inform their practice.”
Is it worth having a conversation about, even collecting a few stories? Or are safety professionals too time-strapped doing safety work?

The Future of Safety

Today I had the privilege and pleasure of speaking at the BCCGA AGM. A copy of the slides presented can be downloaded here. In my conclusion I posed 4 questions for the BCCGA and its member organizations to consider.

1. What paradigm(s) should our safety vision be based upon?

The evolution of safety thinking can be viewed through 4 Ages.

The recurring theme is about how Humans were treated as new technologies were implemented into business practices. It’s logical that the changes in safety thinking mirror the evolution of Business Practices. The Ages of Technology, Human Factors, and Safety Management are rooted in an Engineering paradigm.

It’s Systems Thinking with distinct parts: People, Process, Technology. Treat them separately and then put them together to deliver a Strategy. Reductionism works well when the system is stable, consistent, and relatively fixed by constraints imposed by humans (e.g., regulations, policies, standards, rules). However, in addition to ORDER, the real world contains 2 other systems: COMPLEX and CHAOTIC. These two are constantly changing, so a reductionistic approach is not appropriate. One must work holistically with an Ecological paradigm.

This diagram from the Cynefin Centre shows the relative sizing of the 3 systems. Complexity is by far the largest and continues to grow. All organizations are complex adaptive systems. A worthy safety vision must include the Age of Cognitive Complexity and view Safety as an emergent property of a complex adaptive system. The different thinking means rules don’t create safety but create the conditions that enable safety to emerge. Now we can understand why piling on more and more rules can lead to cognitive overload in workers and enable danger, not safety, to emerge.

2. How should we treat workers – as problems to be managed or solutions to be harnessed?

The Age of Technology and Age of Human Factors treated workers as problems: as cogs in a machine and as hazards to be controlled. The Age of Safety Management view recognizes that rules cannot cover every situation. Variability isn’t a threat but a necessity. We need to trust that humans always try to do what they think is right in the situation. The Age of Cognitive Complexity appreciates that humans think differently than the logical information-processing machines of an Engineering paradigm. Humans are not rational thinkers; decisions are based on emotional reactions and heuristic shortcuts. As storytellers, people can articulate thick data that a typical report is unable to provide. As solution providers, workers can call upon tacit knowledge, which is difficult to transfer to another person by writing it down or verbalizing it. Workers who feel like cogs or hazards tend to stay within themselves for fear of punishment. Knowledge is volunteered, never conscripted.

3. What safety heuristics can we share?

While Best Practices manuals are beneficial, heuristics play on a bigger stage when dealing with decisions. Humans make 95% of their decisions using heuristics. Heuristics are mental shortcuts that help people make quick, satisfactory, but not perfect decisions.

They are the rules of thumb that Masters pass on to their Apprentices. Organizations ought to have a means to collect Safety-II success stories and use pattern recognition tools. Heuristics that emerge can be distributed to Masters for accuracy scrutiny.

4. How can we get more safety stories like these, fewer stories like those?

This question pertains to a new way of shaping a safety vision through the use of narratives (stories, pictures, voice recordings, drawings, sketches, etc.)

Narratives are converted into data points to generate a 2D contour map or fitness landscape.
Each dot is a story, and seen together they form patterns. The map shows the general direction we want to head: the top right corner (High compliance with rules & High level of getting the job done). Clearly we want more safety stories in the Green area. We also want fewer in the Red and Brown areas. Here’s the rub: if we try to go directly for the top right corner, we won’t get there. This is ATTITUDE mapping at a level way deeper than observable BEHAVIOUR. Instead we head for an Adjacent Possible.
We get people to tell more stories here, fewer there, by changing a human constraint. It might be loosening a controlling constraint like a rule or practice. It could also be introducing an enabling constraint like a new tool or process.
We gather more stories and monitor how the clusters are changing in real-time. The evolving landscape maps a new Present state, a new starting point. We then change another constraint. Since we can’t predict outcomes, both positive and unintended negative consequences might emerge. We accelerate the positives and dampen the negatives. In essence we co-evolve our way to the top right corner of the map. This is how we shape our Safety Culture.
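
The posting doesn’t spell out the mechanics behind these maps, but the general idea can be sketched in a few lines of Python: treat each story as a point on two signifier scales and draw the density contours. The data below is synthetic, and the axis names are my assumptions, loosely echoing the description above:

    import numpy as np
    import matplotlib.pyplot as plt

    # Synthetic story data: each story is self-signified on two 0-1 scales,
    # e.g. x = compliance with rules, y = level of getting the job done.
    rng = np.random.default_rng(7)
    x = np.clip(rng.normal(0.4, 0.2, 500), 0, 1)
    y = np.clip(rng.normal(0.5, 0.2, 500), 0, 1)

    # Bin the stories into a 2D histogram; dense clusters become the
    # "hills" of the contour map / fitness landscape.
    H, xe, ye = np.histogram2d(x, y, bins=20, range=[[0, 1], [0, 1]])
    plt.contourf((xe[:-1] + xe[1:]) / 2, (ye[:-1] + ye[1:]) / 2, H.T)
    plt.scatter(x, y, s=4, c="white", alpha=0.5)  # each dot is a story
    plt.xlabel("compliance with rules")
    plt.ylabel("getting the job done")
    plt.title("Story landscape (synthetic data)")
    plt.show()

Re-collecting stories after changing a constraint would shift where the new dots land; watching the clusters move is the real-time monitoring described above.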

7 Implications of Complexity for Safety

One of my favourite articles is The Complexity of Failure, written by Sidney Dekker, Paul Cilliers, and Jan-Hendrik Hofmeyr. In this posting I’d like to shed more light on the contributions of Paul Cilliers.

Professor Cilliers was a pioneering thinker on complexity working across both the humanities and the sciences. In 1998 he published Complexity and Postmodernism: Understanding Complex Systems which offered implications of complexity theory for our understanding of biological and social systems. Sadly he suddenly passed away in 2011 at the much too early age of 55 due to a massive brain hemorrhage.

My spark for writing comes from a blog recently penned by a complexity colleague, Sonja Blignaut. I am following her spade work by exploring the implications of complexity for safety. Cilliers’ original text is in italics.

  1. Since the nature of a complex organization is determined by the interaction between its members, relationships are fundamental. This does not mean that everybody must be nice to each other; on the contrary. For example, for self-organization to take place, some form of competition is a requirement (Cilliers, 1998: 94-5). The point is merely that things happen during interaction, not in isolation.
  • Because humans are natural storytellers, stories are a widely used form of interaction between fellow workers, supervisors, management, and executives. We need to pay attention to stories told about daily experiences since they provide a strong signal of the present safety culture.
  • We should devote less time trying to change people and their behaviour and more time building relationships. Despite what psychometric profiling offers, humans are too emotional and unpredictable to accurately figure out. In my case, I am not a trained psychologist, so my dabbling in trying to change how people tick might be dangerous, on the edge of practising pseudoscience. I prefer to stay with the natural sciences (viz., physics, biology), the understanding of phenomena in Nature which have evolved over thousands of years.
  • If two workers are in conflict, don’t demand that they both smarten up. Instead, change the nature of the relationship so that their interactions are different or even extinguished. Simple examples are changing the task or moving one to another crew.
  • Interactions go beyond people. Non-human agents include machines, ideas (rules, policies, regs) and events (a meeting, an incident). A worker following a safety rule can create a condition that enables safety to emerge. Too many safety rules can overwhelm and frustrate a worker, enabling danger to emerge.

2. Complex organizations are open systems. This means that a great deal of energy and information flows through them, and that a stable state is not desirable.

  • A company’s safety management system (SMS) is a closed system. In the idealistic SMS world, stability, certainty, and predictability are the norms. If a deviation occurs, it needs to be controlled and managed. Within the fixed boundaries, we apply reductionistic thinking and place information into a number of safety categories, typically ranging from 4 to 10. An organizational metaphor is sorting solid LEGO bricks under different labels.
    In an open system, it’s different. Think of boundary-less fog and irreducible mayonnaise. If you outsource to a contractor or partner with an external supplier, how open is your SMS? Will you insist on their compliance or draw borders between firms? Do their SMS safety categories blend with yours?
  • All organisations are complex adaptive systems. Adaptation means not lagging behind and plunging into chaotic fire-fighting. It means looking ahead, not only trying to avoid things going wrong but also trying to ensure that they go right. In the field, workers confronted by unexpected, varying conditions will adjust/adapt their performance to enable success (and safety) to emerge.
  • When field adjustments occasionally fail, it results in a new learning to be shared as a story. This is also why a stable state is not desirable. In a stable state, very little learning is necessary. You just repeat doing what you know.

3. Being open more importantly also means that the boundaries of the organization are not clearly defined. Statements of “mission” and “vision” are often attempts to define the borders, and may work to the detriment of the organization if taken too literally. A vital organization interacts with the environment and other organizations. This may (or may not) lead to big changes in the way the organization understands itself. In short, no organization can be understood independently of its context.

  • Mission and Vision statements are helpful in setting direction. A vector or North Arrow, if you like. They become detrimental if communicated as some idealistic future end state the organization must achieve.
  • Being open is different than “thinking out of the box” because there really is no box to start with. It’s a contextual connection of relationships with other organizations. It’s also foggy, because some organizations are hidden. You can impact organizations that you don’t even know about and, conversely, their unbeknownst actions can constrain you.
    The smart play is to be mindful by staying focused on the Present and monitor desirable and undesirable outcomes as they emerge.

4. Along with the context, the history of an organization co-determines its nature. Two similar-looking organizations with different histories are not the same. Such histories do not consist of the recounting of a number of specific, significant events. The history of an organization is contained in all the individual little interactions that take place all the time, distributed throughout the system.

  • Don’t think about creating a new safety mission or vision by starting with a blank page, a clean sheet, a greenfield.  The organization has history that cannot be erased. The Past should be honoured, not forgotten.
  • Conduct an ongoing challenge of best practices and Life-saving rules. Remember the historical reasons why these were first installed. Then question if these reasons remain valid.
  • Be aware of the part History plays when rolling out a safety initiative across an organization.
    • If it’s something that everyone genuinely agrees to and wants, then just clone & replicate. Aggregation is the corollary of reductionism and it is the common approach to both scaling and integration. Liken it to putting things into org boxes and then fitting them together like a jigsaw. The whole is equal to the sum of its parts.
    • But what if the initiative is controversial? Concerns are voiced, pushback is felt, resistance is real. Then we’re facing complexity where the properties of the safety system as a whole is not the sum of the parts but are unique to the system as a whole.
      If we want to scale capabilities we can’t just add them together. We need to pay attention to history and understand reactions like “It won’t work here”, “We tried that before”, “Oh no! Not again!”
      The change method is not to clone & replicate.  Start by honouring local context. Then decompose into stories to make sense of the culture. Discover what attracts people to do what they do. Recombine to create a mutually coherent solution.

5. Unpredictable and novel characteristics may emerge from an organization. These may or may not be desirable, but they are not by definition an indication of malfunctioning. For example, a totally unexpected loss of interest in a well-established product may emerge. Management may not understand what caused it, but it should not be surprising that such things are possible. Novel features can, on the other hand, be extremely beneficial. They should not be suppressed because they were not anticipated.

  • In the world of safety, failures are unpredictable and undesirable. They emerge when a hidden tipping point is reached.
    As part of an Emergency Preparedness plan, recovery crews with well-defined roles are designated. Their job is to fix the system as quickly as possible and safely restore it to its previous stable state.
  • Serendipity is an unintended but highly desirable consequence. This implies an organization should have an Opportunity crew ready to activate. Their job is to explore the safety opportunity, discover new patterns which may lead to a new solution, and exploit their benefits.
    At a tactical level, the new solution may be a better way of achieving the Mission and Vision. In the same direction but a different path or route.
    At a strategic level, the huge implication is that new opportunity may lead to a better future state than the existing carefully crafted, well-intentioned one. Decision-makers are faced with a dilemma: do we stay the course or will we adapt and change our vector?
  • Avoid introducing novel safety initiatives as big events kicked off with a major announcement. These tend to breed cynicism especially if the company history includes past blemished efforts. Novelty means you honestly don’t know what the outcomes will be since it will be a new experience to those you know (identified stakeholders) and those you don’t know in the foggy network.
    Launch as a small experiment.
    If desirable consequences are observed, accelerate the impact by widening the scope.
    If unintended negative consequences emerge, quickly dampen the impact or even shut it down.
    As noted in (2), constructively de-stabilize the system in order to learn.

6. Because of the nonlinearity of the interactions, small causes can have large effects. The reverse is, of course, also true. The point is that the magnitude of the outcome is not only determined by the size of the cause, but also by the context and by the history of the system. This is another way of saying that we should be prepared for the unexpected. It also implies that we have to be very careful. Something we may think to be insignificant (a casual remark, a joke, a tone of voice) may change everything. Conversely, the grand five-year plan, the result of huge effort, may retrospectively turn out to be meaningless. This is not an argument against proper planning; we have to plan. The point is just that we cannot predict the outcome of a certain cause with absolute clarity.

  • The Butterfly effect is a phenomenon of a complex adaptive system. I’m sure many blog writers like myself are hoping that our safetydifferently cause will go viral, “cross the chasm”, and be adopted by the majority. Sonja in her blog refers to a small rudder that determines the direction of even the largest ship. Perhaps that’s what we are: trimtabs!
  • On the negative side, think of a time when an elected official or CEO made a casual remark about a safety disaster only to have it go viral and backfire. In the 2010 Deepwater Horizon disaster, then-CEO Tony Hayward called the amount of oil and dispersant “relatively tiny” in comparison with the “very big ocean”. Hayward’s involvement has left him a highly controversial public figure.
  • Question: Could a long-term safety plan to progress through the linear stages of a Safety Culture Maturity model be a candidate as a meaningless five-year plan?
    If a company conducts an employee early retirement or buy-out program, does it regress and fall down a stage or two?
    If a company deploys external contractors with high turnover, does it ever get off the bottom rung?
    Instead of a linear progression model, stay in the Present and listen to the stories internal and external workers are telling. With the safety Vision in mind, ask what can we do to hear more stories like these, fewer stories like those.
    As the stories change, so will the safety culture.  Proper planning is launching small experiments to shape the culture.

7. Complex organizations cannot thrive when there is too much central control. This certainly does not imply that there should be no control, but rather that control should be distributed throughout the system. One should not go overboard with the notions of self-organization and distributed control. This can be an excuse not to accept the responsibility for decisions when firm decisions are demanded by the context. A good example here is the fact that managers are often keen to “distribute” the responsibility when there are unpopular decisions to be made—like retrenchments—but keen to centralize decisions when they are popular.

  • I’ve noticed safety professionals are frequent candidates for organization pendulum swings. One day you’re in Corporate Safety. Then an accident occurs and in the ensuing investigation a recommendation is made to move you into the field to be closer to the action. Later a new Director of Safety is appointed and she chooses to centralize Safety.
    Pendulum swings are what Robert Fritz calls Corporate Tides, the natural ebb and flow of org structure evolution.
  • Central vs. distributed control changes are more about governance/audit than workflow purposes. No matter what control mechanism is in vogue, it should enable stigmergic behaviour, the natural forming of network clusters to share knowledge, processes, and practices.
  • In a complex adaptive system, each worker is an autonomous decision-maker, a solution not a problem. Decisions made are based on information at hand (aka tacit knowledge) and if not available, knowing who, where, how to access it. Every worker has a knowledge cluster in the network. A safety professional positioned in the field can mean quicker access but more importantly, stronger in-person interactions. This doesn’t discount a person in Head Office who has a trusting relationship from being a “go to” guy. Today’s video conferencing tools can place the Corp Safety person virtually on site in a matter of minutes.
Thanks, Sonja. Thanks, Paul.
Note: If you have any comments, I would appreciate if you would post them at safetydifferently.com.


Safety Culture, the Movie

It’s the holiday season. One terrific way to celebrate as a family is to see a movie together. Our pick? Star Wars: the Force Awakens.
Well, that was an easy decision. The next one is harder though…what sort of experience do we want? Will it be UltraAVX, D-Box, IMAX, 3D, VIP, Dolby ATMOS or Surround sound, or standard digital? While sorting through the movie options, for some reason I began thinking about safety culture and had an epiphany. Safety culture is a movie, not a photo.

A photo would be a Star Wars poster, a single image that a designer has artistically constructed. It’s not the full story, just a teaser aimed to influence people to buy a ticket. We understand this and don’t expect to comprehend the entire picture from one poster. A safety culture survey or audit should be treated in the same fashion. All we see is a photo, a snapshot capturing a moment in time. Similar to the poster artist, a survey designer also has a preconceived idea; influence is in the form of questions. The range of questions extends from researched “best practices” to personal whim.

I believe this is a major limitation of the survey/audit/poster: it can totally miss what people actually experience as a movie. A movie introduces visual motion and audible sound to excite our human senses and release emotions and feelings. We can watch behaviours as well as the positive and negative consequences being delivered. With flow, we are able to sense operating point movement and drift into the zone of complacency and eventual failure. A safety culture has sound that a photo cannot reveal. In a movie you can hear loud communication, quiet conversations, or the lack thereof (e.g., a cone of silence).

If we were to create “Safety Culture, the Movie”, what would we need to consider? I’ve compiled a short list. What would you add?

  • Human actor engagement
    • Actors on screen – lead characters, supporting players, cameo appearances, cast in crowds; front-line workers, supervisors, safety professionals, public at large
    • Actors behind the screen – investors, producer, director, music arranger, theatre owners, craft guilds; Board, execs, project managers, suppliers, unions
    • Actors in front of the screen – paying audience, theatre staff, movie critics; customers, safety associations, regulatory inspectors
  • Story line
    • Safety culture is one big story
    • Safety culture movie is neverending
    • Within the one big story are several side stories, episodes, subplots
  • Relationships between characters and roles played
    • Heroes, villains, maidens in distress, comic relief, clones
    • Contact is continuous and relationships can shift over time (compare to a snapshot audit focusing on one scene at a particular time slot)
    • What seems in the beginning to be independent interactions are often interconnected (“Luke…I am your father”) and may lead to a dilemma or paradox later
  • Theme
    • Overt message of the safety culture movie – Good triumphs Evil? Might makes Right? Focus on what goes wrong? Honesty? Respect?
    • Hidden messages – Resistance is futile? Pay attention to outliers? Do what I say, not what I do? It’s all about the optics
    • Significance of myths, legends, rituals in the safety culture – the Dark side, Jedi order, zero harm workplace, behaviour-based safety
  • Critique
    • What can we learn if our movie is scored on a Flixter (Rotten tomatoes) scale out of 100?
      • What does a score of 30, 65, 95 tell us about performance? Success?
      • We can learn from each critic and fan comment standing on its own rather than dealing with a mathematical average
    • Feedback will influence and shape the ongoing movie
      • Too dark, not enough SFX, too many safety rules, not enough communication
    • Artifacts
      • A poster provides a few while a movie contains numerous for the discerning eye
      • Besides the artifacts displayed on screen, many are revealed in the narratives actors share with each other off screen during lunch hours, breaks, commutes
      • What might we further learn by closely examining artifacts? For instance, what’s the meaning behind…
        • Leia’s new hairdo (a new safety compliance policy)?
        • The short, funny looking alien standing next to R2D2 and C3PO (a safety watcher?)
        • Why Han Solo can’t be separated from his blaster (just following a PPE rule)?
        • Rey using some kind of staff weapon, perhaps similar to the staffs used by General Grievous’ body guards in Episode III (is it SOP certified)?
        • The star destroyers eliminating the control towers which caused them so many problems in the Battle of Endor (implementation of an accident investigation recommendation)?
        • Improvement in the X-Wing fighters (a safety by design change after an action review with surviving pilots?)

If you’d like to share your thoughts and comments, I suggest entering them at the safetydifferently.com website.

Season’s greetings, everyone! And may the Force be with you.

Do you lead from the Past or lead to the Future?


Recently Loren Murray, Head of Safety for Pacific Brands in Australia, penned a thought-provoking blog on the default future, a concept from the book ‘The Three Laws of Performance’. I came across the book a few years ago and digested it from a leader effectiveness standpoint. Loren does a nice job applying it to a safety perspective.

“During my career I noticed that safety professionals (and this included myself) have a familiar box of tricks. We complete risk assessments, enshrine what we learn into a procedure or SOP, train on it, set rules and consequences, ‘consult’ via toolboxes or committees and then observe or audit.

When something untoward happens we stop, reflect and somehow end up with our hands back in the same box of tricks writing more procedures, delivering more training (mostly on what people already know), complete more audits and ensure the rules are better enforced….harder, meaner, faster. The default future described in The Three Laws of Performance looked a lot like what I just described!

What is the default future? We like to think our future is untold, that whatever we envision for our future can happen….However for most of us and the organisations we work for, this isn’t the case. To illustrate. You get bitten by a dog when you are a child. You decide dogs are unsafe. You become an adult, have kids and they want a dog. Because of your experiences in the past it is unlikely you will get a dog for your kids. The future isn’t new or untold it’s more of the past. Or in a phrase, the past becomes our future. This is the ‘default future’.

Take a moment to consider this. It’s pretty powerful stuff with implications personally and organisationally. What you decide in the past will ultimately become your future.

How does this affect how we practice safety? Consider our trusty box of tricks. I spent years learning the irrefutable logic of things like the safety triangle and iceberg theory. How many times have I heard about DuPont’s safety journey? Or the powerful imagery of zero harm. The undeniable importance of ‘strong and visible’ leadership (whatever that means) which breeds catch phrases like safety is ‘priority number one’.

These views are the ‘agreement reality’ of my profession. These agreements have been in place for decades. I learnt them at school, they were confirmed by my mentors, and given credibility by our regulators and schooling system. Some of the most important companies in Australia espouse it, our academics teach it, students devote years to learning it, workers expect it…. Our collective safety PAST is really powerful.”

 
Loren’s blog caused me to reflect on the 3 laws and how they might be applied in a complexity-based safety approach. Let’s see how they can help us learn so that we don’t keep on repeating the past.

First Law of Performance
“How people perform correlates to how situations occur to them.”

It’s pretty clear that the paradigms which dominate current safety thinking view people as error prone or as problem creators working in idealistic technological systems, structures, and processes. Perplexed managers get into a “fix-it” mode by recalling what worked in the past and assuming that is the solution going forward. The remedy is being mindful of perception blindness and opening both eyes.
Second Law of Performance
“How a situation occurs arises in language.”

As evidence-based safety analysts, we need to hear the language and capture the conversations. One way is the Narrative approach where data is collected in the form of stories. We may even go beyond words and collect pictures, voice recordings, water cooler snippets, grapevine rumours, etc. When we see everything as a collective, we can discover themes and patterns emerging. These findings could be the keys that lead to an “invented” future.

Third Law of Performance
“Future-based language transforms how situations occur to people.”

Here are some possible yet practical shifts you can start with right now:

  • Let’s talk less about inspecting to catch people doing the wrong things and talk more about Safety-II; i.e., focusing on doing what’s right.
  • Let’s talk less about work-as-imagined deviations and more about work-as-done adjustments; i.e., less blaming and more appreciating and learning how people adjust performance when faced with varying, unexpected conditions.
  • Let’s talk less about past accident statistics and injury reporting systems and talk more about sensing networks that trigger anticipatory awareness of non-predictable negative events.
  • Let’s talk less about some idealistic Future state vision we hope to achieve linearly in a few years and talk more about staying in the Present, doing more proactive listening, and responding to the patterns that emerge in the Now.
  • And one more…let’s talk less about being reductionists (breaking down a social-technical system into its parts) and talk more about being holistic and understanding how parts (human, machines, ideas, etc.) relate, interact, and adapt together in a complex work environment.

The “invented” future conceivably may be one that is unknowable and unimaginable today but will emerge with future-based conversations.

What are you doing as a leader today? Leading workers to the default future or leading them to an invented Future?

Click here to read Loren’s entire blog posting.

When thinking of Safety, think of coffee aroma

Safety has always been a hard sell to management and to front-line workers because, as Karl Weick put forward, Safety is a dynamic non-event. Non-events are taken for granted. When people see nothing, they presume that nothing is happening and that nothing will continue to happen if they continue to act as before.

I'm now looking at Safety from a complexity science perspective as something that emerges when system agents interact. An example is aroma emerging when hot water interacts with dry coffee grounds. Emergence is a real-world phenomenon that Systems Thinking does not address.

Safety-I and Safety-II do not create safety but provide the conditions for Safety to dynamically emerge. But as a non-event, it's invisible and people see nothing. Just as safety can emerge, so can danger as an invisible non-event. What we see is failure (e.g., accident, injury, fatality) when the tipping point is reached. We can also reach a tipping point when we do too much of a good thing. Safety rules are valuable, but if a worker is overwhelmed by too many, danger in the form of confusion and distraction can emerge.

I see great promise in advancing the Safety-II paradigm to understand the right things people should be doing under varying conditions to enable safety to emerge.

For further insights into Safety-II, I suggest reading Steven Shorrock’s posting What Safety-II isn’t on Safetydifferently.com. Below are my additional comments under each point made by Steven with a tie to complexity science. Thanks, Steven.

Safety-II isn’t about looking only at success or the positive
Looking at the whole distribution and all possible outcomes means recognizing that there is both a linear Gaussian world and a non-linear Pareto world. The latter is where Black Swans and natural disasters unexpectedly emerge. The sketch below shows the difference.
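To illustrate the Gaussian/Pareto contrast, here is a minimal Python sketch (my own toy example, not from Steven's post): draw the same number of samples from each world and compare the extremes. In the Gaussian world the largest value stays close to the bulk; in the Pareto world a single draw can dwarf everything else, which is exactly why tail events surprise us.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Thin-tailed (Gaussian) world: extremes stay near the mean.
gaussian = rng.normal(loc=0.0, scale=1.0, size=n)

# Heavy-tailed (Pareto) world: one draw can dominate the whole sample.
pareto = rng.pareto(a=1.5, size=n)

for name, sample in [("gaussian", gaussian), ("pareto", pareto)]:
    print(f"{name:8s} mean={sample.mean():8.2f} "
          f"99.9th pct={np.quantile(sample, 0.999):9.2f} "
          f"max={sample.max():12.2f}")
```

With a tail index of 1.5 the Pareto sample has a finite mean but infinite variance, so the observed maximum keeps growing as you draw more samples; no Gaussian intuition prepares you for that.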

Safety-II isn’t a fad
Not all Safety-I foundations are based on science. As Fred Manuele has shown, Heinrich's Law is a myth. John Burnham's book Accident Prone offers a history of the rise and fall of the accident-proneness concept. We could call them fads, but that's difficult to do since they have been blindly accepted for so long.

This year marks the 30th anniversary of the Santa Fe Institute, where complexity science was born. At the May 2012 Resilience Lab I attended, Erik Hollnagel and Richard Cook introduced the RMLA elements of Resilience engineering: Respond, Monitor, Learn, Anticipate. They fit with Cognitive-Edge's complexity view of Resilience: fast recovery (R), rapid exploitation (M, L), early detection (A). This alignment has led to one way to operationalize Safety-II.

Safety-II isn’t ‘just theory’
As a pragmatist, I tend not to use the word "theory" in my conversations. Praxis matters more to me than spewing theoretical ideas. When dealing with complexity, the traditional Scientific Method doesn't work. The reasoning is neither deductive nor inductive but abductive: the logic of hunches, drawing on past experience to make sense of the real world.

Safety-II isn’t the end of Safety-I
The focus of Safety-I is on robust rules, processes, systems, equipment, materials, etc. to prevent a failure from occurring. Nothing wrong with that. Safety-II asks what we can do to recover when failure does occur, plus what we can do to anticipate when failure might happen.

Resilience can be more than just bouncing back. Why return to the same place only to be hit again? Rapid exploitation means finding a better place to bounce to. We call it "swarming", or Serendipity if an opportunity unexpectedly arises.

Safety-II isn’t about ‘best practice’
"Best" practice does exist, but only in the Obvious domain of the Cynefin Framework. It's the domain of intuition, the fast thinking of Daniel Kahneman's book Thinking, Fast and Slow. What's the caveat with best practices? There's no feedback loop, so people just carry on as they did before. Some best practices become good habits. On the other hand, danger can emerge from the bad ones, and one will drift into failure.

Safety-II and Resilience are about catching yourself before drifting into failure: being alert to weak signals (e.g., surprising behaviours, strange noises, unsettling rumours) and having physical systems and people networks in place to trigger anticipatory awareness. A toy sketch of such a sensing mechanism follows.
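As an illustration only, assuming free-text field reports are already being collected somewhere, here is a minimal Python sketch of one sensing mechanism: a rolling count of weak-signal phrases that raises an alert when mentions start to cluster. The phrases, window, and threshold are all invented for the example; a real deployment would derive them from the organization's own narrative data.

```python
from collections import Counter, deque
from dataclasses import dataclass

# Hypothetical weak-signal phrases (invented for this sketch).
WEAK_SIGNALS = ("strange noise", "near miss", "workaround", "rumour")

@dataclass
class Report:
    day: int   # day number of the report
    text: str  # free-text field report

def scan_reports(reports, window=7, threshold=3):
    """Flag any weak-signal phrase mentioned `threshold` or more times
    within a rolling `window` of days."""
    recent = deque()   # (day, phrase) mentions still inside the window
    counts = Counter()
    alerts = []
    for r in sorted(reports, key=lambda r: r.day):
        # Drop mentions that have aged out of the window.
        while recent and recent[0][0] <= r.day - window:
            _, old = recent.popleft()
            counts[old] -= 1
        for phrase in WEAK_SIGNALS:
            if phrase in r.text.lower():
                recent.append((r.day, phrase))
                counts[phrase] += 1
                if counts[phrase] >= threshold:
                    alerts.append((r.day, phrase, counts[phrase]))
    return alerts

reports = [
    Report(1, "Crew heard a strange noise near pump 4"),
    Report(3, "Strange noise again; logged a workaround"),
    Report(5, "Another strange noise reported on night shift"),
]
print(scan_reports(reports))  # [(5, 'strange noise', 3)]
```

The point is not the code but the posture: each individual mention looks ignorable, which is exactly what makes the signal weak; only the accumulation across people and days carries the warning.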

Safety-II isn’t what ‘we already do’
"Oh, yes, we already do that!" is typically expressed by an expert. It might be a company's line manager or a safety professional. There's minimal value in challenging the response. You could execute an "expert entrainment breaking" strategy, but the preferred alternative is to follow what John Kay describes in his book Obliquity: Why Our Goals Are Best Achieved Indirectly.

Don't even start by saying "Safety-II". Begin by gathering stories and making sense of how things get done and why things are done a particular way. Note the stories about doing things the right way. Chances are pretty high that most stories will be about Safety-I. There's your data: evidence that either validates or disproves "we already do". Tough for an expert to refute.

Safety-II isn’t ‘them and us’
It's not them/us, nor either/or, but both/and. Safety-I + Safety-II. It's Robustness + Resilience together. We want to analyze all of the available data, both when things go wrong and when things go right.

The evolution of safety can be characterized as a series of overlapping life-cycle paradigms. The first paradigm was Scientific Management, followed by the rise of Systems Thinking in the 1980s. Today Cognition & Complexity are at the forefront. By honouring the Past, we learn in the Present. We keep the best of the previous paradigms and let go of what has been shown to be myth and fallacy.

Safety-II isn’t just about safety
Drinking a cup of coffee should be a total experience, not just tasting the liquid. It includes smelling the aroma, seeing the barista's carefully crafted cream design, hearing the first slurp (okay, I confess). Safety should also be a total experience.

Safety can emerge from efficient as well as effective conditions. Experienced workers know that a well-oiled, smoothly running machine is low-risk and safe. However, they constantly monitor it by watching gauges, listening for strange noises, and so on. These are efficient conditions: known minimums, maximums, and optimums that enable safety to emerge. We do things right.

When conditions involve unknowns, unknowables, and unimaginables, the shift is to effectiveness. We do the right things. But what are these right things?

It's about being in the emerging Present and not worrying about some distant, idealistic Future. It's about engaging the entire workforce (i.e., the wisdom of crowds) so that no hard selling or buy-in is necessary. It's about introducing catalysts to reveal new work patterns. It's about conducting small "safe-to-fail" experiments to shift the safety culture. It's about the quick implementation of safety solutions that people want now.

Signing off and heading to Starbucks.

Safety-I + Safety-II

At a conference hosted on July 03, Dave Snowden and Erik Hollnagel shared their thoughts about safety. Dave's retrospective of their meeting is captured in his blog posting. Over the next few blogs I'll add my reflections as a co-developer of Cognitive-Edge's Creating and Leading a Resilient Safety Culture course.

Erik introduced Safety-II to the audience, a concept based on an understanding of what work actually is, rather than what it is imagined to be. It places more focus on the everyday events when things go right rather than on the errors, incidents, and accidents when things go wrong. Today's dominant safety paradigm is based on the "Theory of Error". While Safety-I thinking has advanced safety tremendously, its effectiveness is waning and is now on the downside of the S-curve. Erik's message is that we need to escape and move to a different view based on the "Theory of Action".

Erik isn't alone. Sidney Dekker's latest presentation on the history of safety reinforces how little safety thinking has changed and how we are plateauing. Current programs such as Hearts & Minds continue to assume people have physical, mental, and moral shortcomings, just as was done in the early 1900s.

Dave spoke about Resilience and why it is critical: it's in the outliers where you find threat and opportunity. In our CE safety course, we refer to the Safety-I work that helps prevent things from going wrong as Robustness. This isn't an Either/Or situation but a Both/And. You need both Robustness + Resilience.

As a young electrical utility engineer, a creator of work-as-imagined, I really wanted feedback but struggled to obtain it. It wasn't until I developed a rapport with the workers that I was able to close the feedback loop and become a better designer. Looking back, I realize how fortunate I was, since the crews were in close proximity and exchanges were eye-to-eye.

During these debriefs I probably learned the most from the "work-as-done" stories. Sometimes changes were necessary because of something I had initially missed or overlooked. More often, though, they were due to an unforeseen situation in the field, such as a sudden shift in weather or unexpected interference from other workers at the job site. Crews would make multiple small adjustments to accommodate varying conditions without fuss or bother (okay, with the occasional swear word).

I didn't know it then, but I know now: these were the adjustments one learns to anticipate in a complex adaptive system. It was also Safety-II and Resilience in action, in the form of narratives (aka stories).

A pathetic safety ritual endlessly recycled

Dave Johnson is Associate Publisher and Chief Editor of ISHN, a monthly trade publication targeting key safety, health and industrial hygiene buying influencers at manufacturing facilities of all sizes.  In his July 09 blog (reprinted below), he laments how the C-suite continues to take a reactive rather than proactive approach to safety. Here’s a reposting of my comments.

Let’s help the CEOs change the pathetic ritual

Dave: Your last paragraph says it all. We need to change the ritual. The question is not why or what, but how. One way is to threaten CEOs with huge personal fines or jail time. For instance, in New Zealand a new Health and Safety at Work Act is anticipated to be passed in 2014. The new law will frame duties around a "person conducting a business or undertaking", or "PCBU". The Bill as currently drafted does not neatly define "PCBU", but the concept would appear to cover employers, principals, directors, even suppliers; that is, people at the top. A tiered penalty regime under the new Act could see a maximum penalty of $3 million for a body corporate and $600,000 and/or 5 years' imprisonment for an individual. Being thrown into jail because of unsafe behaviour by a contractor's new employee whom you've never met would certainly get your attention.

But we know the pattern: initially CEOs will order more compliance training, more inspections, more safety rules. Checkers will be checking checkers. After a few months of no injuries, everyone will relax and, as Sidney Dekker cautioned, complacency will set in and the organization will drift into failure. Another way is to provide CEOs with early-detection tools that have real-time capability. Too often we read comments in an accident report like "I felt something ominous was about to happen" or "I told them but nobody seemed to listen."

CEOs need to be among the first, not the last, to hear about a potential hazard that has been identified but is not being addressed. We now have the technology to let an organization collect stories from the front line and immediately convert them into data points that can be visually displayed. Let's give CEOs and higher-ups the ability to walk the talk. In addition, we can apply a complexity-based approach where traditional RCA investigative methods are limited. Specifically, we need to go "below the water line" when dealing with safety culture issues to understand why the rituals persist. A minimal sketch of the story-to-data-point idea follows.
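To show what "stories as data points" might look like in code, here is a minimal Python sketch. It is my own illustration of the general idea behind narrative-capture tools (Cognitive-Edge's SenseMaker works along these lines); every field name and score below is invented. The key design choice is self-signification: the storyteller scores their own story at capture time, so each story arrives as a plottable point with the raw narrative still attached.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical self-signified story record (all fields invented).
@dataclass
class Story:
    told_on: str
    site: str
    text: str
    perceived_risk: float  # 0.0 = "felt routine" ... 1.0 = "felt dangerous"
    response_felt: float   # 0.0 = "nobody listened" ... 1.0 = "acted on at once"

stories = [
    Story("2014-07-02", "Plant A", "Guard rail bolt missing on level 3", 0.8, 0.1),
    Story("2014-07-05", "Plant A", "New crew improvised a lift, went fine", 0.6, 0.3),
    Story("2014-07-08", "Plant B", "Told the super about the noise, no reply", 0.7, 0.2),
]

# A dashboard could scatter-plot risk vs response and let an executive
# click any dot to read the story behind it. Here we just surface the
# hot spots: high perceived risk, low felt response.
hot_spots = [s for s in stories if s.perceived_risk > 0.5 and s.response_felt < 0.25]
print(json.dumps([asdict(s) for s in hot_spots], indent=2))
```

Run daily, a view like this gives the corner office the early, unfiltered signal that, as the post below argues, CEOs rarely get.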

Gary Wong
July 16, 2014

G.M.’s CEO is the latest executive to see the light

By Dave Johnson July 9, 2014

Wednesday, June 11, 2014, at the bottom right-hand corner of the section “Business Day” in The New York Times, is a boxed photograph of General Motors’ chief executive Mary T. Barra. The headline: “G.M. Chief Pledges A Commitment to Safety.”

Nothing against Ms. Barra. I’m sure she is sincere and determined in making her pledge. But I just shook my head when I saw this little “sidebar” box and the headline. Once again, we are treated to a CEO committing to safety after disaster strikes, innocent people are killed (so far G.M. has tied 13 deaths and 54 accidents to the defective ignition switch), and a corporation’s reputation is dragged through the media mud. The caption of Ms. Barra’s pic says it all: “…Mary T. Barra told shareholders that the company was making major changes after an investigation of its recall of defective small cars.”

Why do the commitments, the pledges and the changes come down from on high almost invariably after the fact?

You can talk all you want about the need to be proactive about safety, and safety experts have done just that for 20 or 30 or more years. Where has it gotten us, or more precisely, what impact has it had on the corporate world?

Talk all you want
Talk all you want about senior leaders of corporations needing to take an active leadership role in safety. Again, safety experts have lectured and written articles and books about safety leadership for decades. Sorry, but I can't conjure the picture of most execs reading safety periodical articles and books. I know top organization leaders have stressful jobs with all sorts of pressures and competing demands. But I have a hard time picturing a CEO carving out reading time for a safety book in the evening. Indeed, a few exist; former Alcoa CEO Paul O'Neill is the shining example. But they are the exceptions that prove the rule. The National Safety Council's Campbell Institute of world-class safety organizations and CEOs who "get it" are the exceptions, too, I'd assert.

And what is the rule? As a rule, proven again and again ad nauseam, top leaders of large corporations only really get into safety when they're forced into a reactive mode. For the sake of share price and investor confidence, they speak out to clean up a reputational mess brought about by a widely publicized safety tragedy. Two space shuttles explode. Refineries blow up. Mines cave in. The incident doesn't have to involve multiple fatalities and damning press coverage. I've talked with and listened to more than one plant manager or senior organization leader forced to make that terrible phone call to the family of a worker killed on the job, and who attended the funeral. The same declaration is stressed time and again: "Never again. Never again am I going to be put in the position of going through that emotional trauma. Business school never prepared me for that."

“In her speech to shareholders, Ms. Barra apologized again to accident victims and their families, and vowed to improve the company’s commitment to safety,” reported The New York Times. “Nothing is more important than the safety of our customers,” she said. “Absolutely nothing.”

Oh really? What about the safety of G.M.’s workers? Oh yes, it’s customers who drive sales and profits, not line workers. This is cold business reality. Who did G.M.’s CEO want to get her safety message across to? She spoke at G.M.’s annual shareholder meeting in Detroit. Shareholders’ confidence needed shoring up. So you have the tough talk, the very infrequent public talk, about safety.

Preaching to the choir
I've just returned from the American Society of Safety Engineers annual professional development conference in Orlando. There was a raft of talks on safety leadership and what senior leaders can and should do to get actively involved in safety. There were presentations on the competitive edge safety can give companies. If an operation is run safely, there are fewer absences, better morale, good teamwork, workers watching out for each other, cohesiveness, and strong productivity, quality, and brand reputations. The classic counter-argument to the business case was also made: safety is an ethical and moral imperative, pure and simple.

But who’s listening to this sound advice and so-called thought leadership? As NIOSH Director Dr. John Howard pointed out in his talk, the ASSE audience, as with any safety conference audience, consists of the true believers who need no convincing. How many MBAs are in the audience?

Too often the moral high ground is swamped by the short-term, quarter-by-quarter financials that CEOs live or die by. Chalk it up to human nature, perhaps. Superior safety performance, as BST’s CEO Colin Duncan said at ASSE, results in nil outcomes. Nothing happens. CEOs are not educated to give thought and energy to outcomes that amount to nothing. So safety is invisible on corner office radar screens until a shock outcome does surface. Then come the regrets, the “if only I had known,” the internal investigation, the blunt, critical findings, the mea culpas, the “never again,” the pledge, the commitment, the vow, the tough talk.

There’s that saying, “Those who do not learn from history are bound to repeat it.” Sadly, and to me infuriatingly, a long history of safety tragedies has not proven to be much of a learning experience for many corporate leaders. “Ah, that won’t happen to us. Our (injury) numbers are far above average.” Still, you won’t have to wait long for the next safety apology to come out of mahogany row. It’s a pathetic ritual endlessly recycled.