All posts by Gary Wong

UBC BASc 1971

Designing for Emergence

Last year Naomi Stanford posted a LinkedIn article questioning whether we can design for emergence or just set up the conditions to enable emergence. To answer her question, I replied that the business change approach I like to use involves Cynefin Dynamics.

You start in the Disorder domain and collect stories to understand the present situation. The stories may lead you to the Cynefin Complicated domain, where traditional Change Management practices and tools are useful. We can call on experts to analyze and develop idealized future-state solutions. Targets and milestones as governing constraints work well because behaviour is consistent, repeatable, and predictable.

However, if the stories are full of uncertainty, confusion, and ambiguity in the form of dilemmas and paradoxes, then we move into the Cynefin Complex domain. Here, we design for emergence by probing the system with safe-to-fail experiments and monitoring behaviour. Experiments are designed with the conditions of emergence in mind – diversity, feedback loops, self-organization. Coherence and Obliquity are two enabling constraints (think of the container in Glenda Eoyang's CDE model) that allow patterns of different behaviour to emerge.

Continual dynamic flow around the Cynefin Framework essentially means staying in beta. Our propensity is to begin with reductionism (Complicated and Obvious domains) due to the many years of formal schooling and training drilled into us. Thankfully, complexity science helps us to think holistically and signals us to change our methods and tools to engage people differently.

Cynefin Complicated domain work is diagnostic. Complex domain work is dialogic. This is the new Dialogic OD perspective that folks like Peggy Holman are exploring and why stories are preferred over surveys and interviews.

Design for Emergence was given a deeper focus at a Cynefin Retreat held in Whistler, BC in June 2018. Ann Pendleton-Jullian introduced "scaffold, not structure" to enable emergent thinking. She has co-authored a book, Pragmatic Imagination, which is the last chapter of a larger work in a soon-to-be-published five-book system called Design Unbound: Designing for Emergence in a White Water World.

Her concept and framework are based on six principles:

  1. The imagination serves diverse cognitive processes as an entire spectrum of activity.
  2. The imagination both resolves and widens the gap between what is unfamiliar – new/novel/strange – and what is known. This gap increases along the spectrum from left to right. Within the range of abductive reasoning, there is a shift from using the imagination for sense-making to sense-breaking, where one first widens the gap and then resolves it with the imagination (see diagram below).
  3. The Pragmatic Imagination pro-actively imagines the actual in light of meaningful purposeful possibilities. It sees opportunity in everything.
  4. The Pragmatic Imagination sees thought and action as indivisible and reciprocal. Therefore it is part of all cognitive activity that serves thought and action for anticipating, and thought and action for follow-through.
  5. The imagination must be instrumentalized to turn ideas into action – the entire spectrum of the imagination. And the generative/poïetic/sometimes-disruptive side of the spectrum is especially critical in a world that requires radically new visions and actions.
  6. Because the imagination is not under conscious control, we need to understand, find, and design ways to set it in motion and scaffold it for play and purpose.

The last principle on scaffolding really resonates with me, especially as a professional engineer. It was cool to make the connection between using scaffolding to build skyscrapers and using scaffolding to mentally enable imaginative ideas to play with each other and build something entirely new. I’ll be adding scaffolding into my 21st century toolkit.

Safety Differently: Recipe Follower and/or Chef?

Over at SafetyDifferently.com, Sidney Dekker provides some enlightening background on how he coined the term "Safety Differently" in 2012. While many pertinent thoughts are expressed, the key message for me is below:

So people ask about Safety Differently ‘How do I do this.’ But what they really want an answer to is the question ‘What do I do now?’ What they really want is someone to tell them, because they haven’t taken the time to think it through, to study the ideas further, to show curiosity and discover the difficulties and adaptive triumphs of frontline work for themselves. They just want other people to tell them what to do. That is literally taking a Safety I mindset to a Safety II world. Of course, the ‘how’-to-get-to-Safety-Differently question is increasingly getting answered in the expanding menu of method options—from embedded discovery to micro-experiments, collective improvements, appreciative inquiry and more. But not the ‘what’ question.

Giving you, or anyone, the 'what' of the procedural steps, milestones and content for the implementation of anything (including Safety II or Safety Differently) would fundamentally negate what Safety Differently is. There is no intellectual shortcut into a simple procedure for the application of Safety Differently. If there was, it wouldn't be Safety Differently. It would be Safety I. In Safety I, after all, you have to be willing to hand over your brain, your expertise, your experience, to someone else who has already written the solution for you. You don't have to think, you just receive and apply. Follow the procedure, stick with the rule, do the checklist that someone else has filled with things they believe are important, so that you can see whether you're on track according to their definition of that 'track.'

It's an important distinction but not an easy one to grasp, especially when dominant paradigms are deeply entrenched or, as we say, "Fish discover water last."

To help people flip the switch, I sometimes will use cooking as an analogy. I’ll ask: Do you want to be a Recipe Follower or a Chef? Recipes are written to be easily repeated. Anyone can follow. Practice and expertise increase success. And you get a standardized result. There are lots of recipes in Safety-I.
What happens though if you don’t have all the recipe ingredients at hand? Or someone above demands you must cut the baking time in half? As a Recipe Follower you would be confused, stymied, even paralyzed. A Chef, however, would accept the challenge and adapt to the unexpected conditions. A Chef doesn’t follow a cookbook but knows the art and principles of cooking.

Samin Nosrat in "Salt, Fat, Acid, Heat" explains how salt enhances flavour, fat amplifies flavour and makes appealing textures possible, acid brightens and balances, and heat determines the texture of food.

Safety Differently isn't a cookbook but a new view: it perceives people as resources, with the capacity to change and the margin of manoeuvre to make adjustments.
I believe we want workers to be both Recipe Followers and Chefs. The key is understanding when the current situation calls for abandoning the recipe and putting on the chef’s hat.

The Future of Change Management

At the Organizational Change Network on LinkedIn, Ron Leeman posted an article on the continuing argument that traditional Change Management is "old skool" and needs a re-think, an overhaul, some fresh ideas, etc. He researched CM methods currently being offered by a handful of leading consulting organizations. His conclusion was that, apart from how new digital tools can help with some aspects of Change Management, there isn't a lot of new thinking out there. Rather, it looked like a regurgitation and/or renaming of previous approaches.
I replied: what if there were an emerging change practice that wasn't a regurgitation but something quite different, as per the following:
  • What if a change practice emerged that treated all organizations as complex adaptive systems? It would mean escaping the dominant human-imposed Engineering paradigm (faster, better, cheaper)  and setting aside age-old tools such as reductionism, benchmarking, future state visioning, cause & effect analysis, linear road mapping, surveys, and yes, even metrics to a certain degree.
  • The change practice would be built on an Ecological paradigm applying ideas and words such as Anthro-complexity, Cynefin, Liminality, Morphogenesis, enabling constraints, managing the evolutionary potential of the Present.
  • The change practice would be informed by Natural science – what we have learned from observing Nature in action: Messy coherence, Homeostasis, Natural Resilience, Mutating containers, Exaptation, Biomimicry.
  • The change practice would leverage real world Complexity phenomena: Emergence, Diversity, Viral Butterfly Effect, Non-linear Tipping Point, Self-organization, Stigmergy, Pareto Power Law Risk (fat tail).
  • The change practice would recognize people are Homo Narrans: Dialogic sense-making, Distributed ethnography, narrative fragments, Thick Data, Disintermediation.
  • The change practice would understand the concept of Homo Faber – use of tools to shape a complex environment: Distributed cognition, Chaordic teaming, Safe-to-fail experiments, Weak signal detection, Obliquity, Asymmetric co-evolution, Scaffolding, Nudging, Fractal management.
  • The change practice would recognize humans like to play creative games (Homo Ludens): Pattern recognition, Strange attractors, Non-hypothesis abduction, Wicked problems, Serendipity.
  • The change practice would be pragmatic: Conceptual blending, Adjacent Possible, Satisficing, Heuristics, Phronesis, Praxis.
As many of you know, what I outlined was the complexity-based approach to implement change during unpredictable, constantly changing times.
As Dave Snowden explained, you can view the real world in terms of 3 basic systems: Order, Complex, Chaotic. The 20th century was dominated by Order system thinking. Many change practices are  designed for a work environment that is stable, consistent, and where cause & effect relationships exist. The future is deemed predictable and possibly extends from past history. The popular image of jigsaw puzzle parts being put together is apropos. If a change project fits in this environment, one can confidently carry on using a linear step-by-step command and control mindset.
In a complex system the puzzle parts are constantly moving or even missing. Furthermore, a complex adaptive system will see humans adapting by evolving relationships and adjusting emotional interactions. If your change project faces uncertainty, unpredictability, ambiguity, think twice about using Order system CM tools. They really aren’t built for uncontrollable turbulence and volatility.
In 2000, Stephen Hawking stated that the new century would be the Age of Complexity. It's now getting close to two decades on. The time is ripe, perhaps overdue, to update the Future of Change Management.

The Future of Safety

Today I had the privilege and pleasure of speaking at the BCCGA AGM.  A copy of the slides presented can be downloaded here. In my conclusion I posed 4 questions for the BCCGA and its member organizations to consider.

1. What paradigm(s) should our safety vision be based upon?

The evolution of safety thinking can be viewed through 4 Ages.

The recurring theme is how humans were treated as new technologies were implemented into business practices. It's logical that the changes in safety thinking mirror the evolution of Business Practices. The Ages of Technology, Human Factors, and Safety Management are rooted in an Engineering paradigm.

It's Systems Thinking with distinct parts: People, Process, Technology. Treat them separately and then put them together to deliver a Strategy. Reductionism works well when the system is stable, consistent, and relatively fixed by constraints imposed by humans (e.g., regulations, policies, standards, rules). However, in addition to ORDER, there are two other systems in the real world: COMPLEX and CHAOTIC. These two are constantly changing, so a reductionistic approach is not appropriate. One must work holistically with an Ecological paradigm.

This diagram from the Cynefin Centre shows the relative sizing of the 3 systems. Complexity is by far the largest and continues to grow. All organizations are complex adaptive systems. A worthy safety vision must include the Age of Cognitive Complexity and view Safety as an emergent property of a complex adaptive system. The different thinking means rules don't create safety but create the conditions that enable safety to emerge. Now we can understand why piling on more and more rules can lead to cognitive overload in workers and enable danger, not safety, to emerge.

2. How should we treat workers – as problems to be managed or solutions to be harnessed?

The Age of Technology and Age of Human Factors treated workers as problems – as cogs in a machine and as hazards to be controlled. The Age of Safety Management recognizes that rules cannot cover every situation. Variability isn't a threat but a necessity. We need to trust that humans always try to do what they think is right in the situation. The Age of Cognitive Complexity appreciates that humans think differently than the logical information-processing machines of an Engineering paradigm. Humans are not rational thinkers; decisions are based on emotional reactions and heuristic shortcuts. As storytellers, people can articulate thick data that a typical report is unable to provide. As solution providers, workers can call upon tacit knowledge – knowledge that is difficult to transfer to another person by writing it down or verbalizing it. Workers who feel like cogs or hazards tend to keep to themselves for fear of punishment. Knowledge is volunteered, never conscripted.

3. What safety heuristics can we share?

While Best Practices manuals are beneficial, heuristics play on a bigger stage when it comes to decisions. Humans make 95% of their decisions using heuristics. Heuristics are mental shortcuts that help people make quick, satisfactory but not perfect decisions.

They are the rules of thumb that Masters pass on to their Apprentices. Organizations ought to have a means to collect Safety-II success stories and use pattern recognition tools. Heuristics that emerge can be distributed to Masters to scrutinize for accuracy.

4. How can we get more safety stories like these, fewer stories like those?

This question pertains to a new way of shaping a safety vision through the use of narratives (stories, pictures, voice recordings, drawings, sketches, etc.)

Narratives are converted into data points to generate a 2D contour map or fitness landscape. Each dot is a story, and seen together they form patterns. The map shows the general direction we want to head: the top right corner (high compliance with rules and a high level of getting the job done). Clearly we want more safety stories in the Green area. We also want fewer in the Red and Brown areas. Here's the rub: if we try to go directly for the top right corner, we won't get there. This is ATTITUDE mapping at a level far deeper than observable BEHAVIOUR. Instead, we head for an Adjacent Possible.
We get people to tell more stories here, fewer there  by changing a human constraint. It might be loosening a controlling constraint like a rule or practice. It could also be introducing an enabling constraint like a new tool or process.
We gather more stories and monitor how the clusters are changing in real time. The evolving landscape maps a new Present state – a new starting point. We then change another constraint. Since we can't predict outcomes, both positive and unintended negative consequences might emerge. We accelerate the positives and dampen the negatives. In essence we co-evolve our way to the top right corner of the map. This is how we shape our Safety Culture.

7 Implications of Complexity for Safety

One of my favourite articles is The Complexity of Failure, written by Sidney Dekker, Paul Cilliers, and Jan-Hendrik Hofmeyr. In this posting I'd like to shed more light on the contributions of Paul Cilliers.

Professor Cilliers was a pioneering thinker on complexity, working across both the humanities and the sciences. In 1998 he published Complexity and Postmodernism: Understanding Complex Systems, which offered implications of complexity theory for our understanding of biological and social systems. Sadly, he passed away suddenly in 2011, at the much too early age of 55, from a massive brain hemorrhage.

My spark for writing comes from a blog recently penned by complexity colleague Sonja Blignaut. I am following her spade work by exploring the implications of complexity for safety. Cilliers' original text is in italics.

  1. Since the nature of a complex organization is determined by the interaction between its members, relationships are fundamental. This does not mean that everybody must be nice to each other; on the contrary. For example, for self-organization to take place, some form of competition is a requirement (Cilliers, 1998: 94-5). The point is merely that things happen during interaction, not in isolation.
  • Because humans are natural storytellers, stories are a widely used form of interaction among fellow workers, supervisors, management, and executives. We need to pay attention to the stories told about daily experiences since they provide a strong signal of the present safety culture.
  • We should devote less time trying to change people and their behaviour and more time building relationships. Despite what psychometric profiling offers, humans are too emotional and unpredictable to accurately figure out. In my case, I am not a trained psychologist, so my dabbling in trying to change how people tick might be dangerous, on the edge of practising pseudoscience. I prefer to stay with the natural sciences (viz., physics, biology), the understanding of phenomena in Nature which have evolved over thousands of years.
  • If two workers are in conflict, don't demand that they both smarten up. Instead, change the nature of the relationship so that their interactions are different or even extinguished. Simple examples are changing the task or moving one to another crew.
  • Interactions go beyond people. Non-human agents include machines, ideas (rules, policies, regulations) and events (meetings, incidents). A worker following a safety rule can create a condition that enables safety to emerge. Too many safety rules can overwhelm and frustrate a worker, enabling danger to emerge.

2. Complex organizations are open systems. This means that a great deal of energy and information flows through them, and that a stable state is not desirable.

  • A company’s safety management system (SMS) is a closed system.  In the idealistic SMS world,  stability, certainty, and predictability are the norms. If a deviation occurs, it needs to be controlled and managed. Within the fixed boundaries, we apply reductionistic thinking and place information into a number of safety categories, typically ranging from 4 to 10. An organizational metaphor is sorting solid LEGO bricks under different labels.
    In an open system, it’s different. Think of boundary-less fog and irreducible mayonnaise. If you outsource to a contractor or partner with an external supplier, how open is your SMS? Will you insist on their compliance or draw borders between firms? Do their SMS safety categories blend with yours?
  • All organisations are complex adaptive systems. Adaptation means not lagging behind and plunging into chaotic fire-fighting. It means looking ahead and not only trying to avoid things going wrong, but also trying to ensure that they go right. In the field, workers when confronted by unexpected varying conditions will adjust/adapt their performance to enable success (and safety) to emerge.
  • When field adjustments occasionally fail, the result is new learning to be shared as a story. This is also why a stable state is not desirable. In a stable state, very little learning is necessary. You just repeat doing what you know.

3. Being open more importantly also means that the boundaries of the organization are not clearly defined. Statements of “mission” and “vision” are often attempts to define the borders, and may work to the detriment of the organization if taken too literally. A vital organization interacts with the environment and other organizations. This may (or may not) lead to big changes in the way the organization understands itself. In short, no organization can be understood independently of its context.

  • Mission and Vision statements are helpful in setting direction. A vector, North Arrow, if you like. They become detrimental if communicated as some idealistic future end state the organization must achieve.
  • Being open is different than "thinking out of the box" because there really is no box to start with. It's a contextual connection of relationships with other organizations. It's also foggy because some organizations are hidden. You can impact organizations that you don't even know about and, conversely, their unbeknownst actions can constrain you.
    The smart play is to be mindful by staying focused on the Present and monitor desirable and undesirable outcomes as they emerge.

4. Along with the context, the history of an organization co-determines its nature. Two similar-looking organizations with different histories are not the same. Such histories do not consist of the recounting of a number of specific, significant events. The history of an organization is contained in all the individual little interactions that take place all the time, distributed throughout the system.

  • Don’t think about creating a new safety mission or vision by starting with a blank page, a clean sheet, a greenfield.  The organization has history that cannot be erased. The Past should be honoured, not forgotten.
  • Conduct an ongoing challenge of best practices and Life-saving rules. Remember the historical reasons why these were first installed. Then question if these reasons remain valid.
  • Be aware of the part History plays when rolling out a safety initiative across an organization.
    • If it’s something that everyone genuinely agrees to and wants, then just clone & replicate. Aggregation is the corollary of reductionism and it is the common approach to both scaling and integration. Liken it to putting things into org boxes and then fitting them together like a jigsaw. The whole is equal to the sum of its parts.
    • But what if the initiative is controversial? Concerns are voiced, pushback is felt, resistance is real. Then we're facing complexity, where the properties of the safety system as a whole are not the sum of the parts but are unique to the system as a whole.
      If we want to scale capabilities we can’t just add them together. We need to pay attention to history and understand reactions like “It won’t work here”, “We tried that before”, “Oh no! Not again!”
      The change method is not to clone & replicate.  Start by honouring local context. Then decompose into stories to make sense of the culture. Discover what attracts people to do what they do. Recombine to create a mutually coherent solution.

5. Unpredictable and novel characteristics may emerge from an organization. These may or may not be desirable, but they are not by definition an indication of malfunctioning. For example, a totally unexpected loss of interest in a well-established product may emerge. Management may not understand what caused it, but it should not be surprising that such things are possible. Novel features can, on the other hand, be extremely beneficial. They should not be suppressed because they were not anticipated.

  • In the world of safety, failures are unpredictable and undesirable. They emerge when a hidden tipping point is reached.
    As part of an Emergency Preparedness plan, recovery crews with well-defined roles are designated. Their job is to fix the system as quickly as possible and safely restore it to its previous stable state.
  • Serendipity is an unintended but highly desirable consequence. This implies an organization should have an Opportunity crew ready to activate. Their job is to explore the safety opportunity, discover new patterns which may lead to a new solution, and exploit their benefits.
    At a tactical level, the new solution may be a better way of achieving the Mission and Vision. In the same direction but a different path or route.
    At a strategic level, the huge implication is that new opportunity may lead to a better future state than the existing carefully crafted, well-intentioned one. Decision-makers are faced with a dilemma: do we stay the course or will we adapt and change our vector?
  • Avoid introducing novel safety initiatives as big events kicked off with a major announcement. These tend to breed cynicism especially if the company history includes past blemished efforts. Novelty means you honestly don’t know what the outcomes will be since it will be a new experience to those you know (identified stakeholders) and those you don’t know in the foggy network.
    Launch as a small experiment.
    If desirable consequences are observed, accelerate the impact by widening the scope.
    If unintended negative consequences emerge, quickly dampen the impact or even shut it down.
    As noted in (2), constructively de-stabilize the system in order to learn.

6. Because of the nonlinearity of the interactions, small causes can have large effects. The reverse is, of course, also true. The point is that the magnitude of the outcome is not only determined by the size of the cause, but also by the context and by the history of the system. This is another way of saying that we should be prepared for the unexpected. It also implies that we have to be very careful. Something we may think to be insignificant (a casual remark, a joke, a tone of voice) may change everything. Conversely, the grand five-year plan, the result of huge effort, may retrospectively turn out to be meaningless. This is not an argument against proper planning; we have to plan. The point is just that we cannot predict the outcome of a certain cause with absolute clarity.

  • The Butterfly effect is a phenomenon of a complex adaptive system. I’m sure many blog writers like myself are hoping that our safetydifferently cause will go viral, “cross the chasm”, and be adopted by the majority. Sonja in her blog refers to a small rudder that determines the direction of even the largest ship. Perhaps that’s what we are: trimtabs!
  • On the negative side, think of a time when an elected official or CEO made a casual remark about a safety disaster only to have it go viral and backfire. In the 2010 Deepwater Horizon disaster, then-CEO Tony Hayward called the amount of oil and dispersant "relatively tiny" in comparison with the "very big ocean". Hayward's involvement has left him a highly controversial public figure.
  • Question: Could a long-term safety plan to progress through the linear stages of a Safety Culture Maturity model be a candidate as a meaningless five-year plan?
    If a company conducts an employee early retirement or buy-out program, does it regress and fall down a stage or two?
    If a company deploys external contractors with high turnover, does it ever get off the bottom rung?
    Instead of a linear progression model, stay in the Present and listen to the stories internal and external workers are telling. With the safety Vision in mind, ask what can we do to hear more stories like these, fewer stories like those.
    As the stories change, so will the safety culture.  Proper planning is launching small experiments to shape the culture.

7. Complex organizations cannot thrive when there is too much central control. This certainly does not imply that there should be no control, but rather that control should be distributed throughout the system. One should not go overboard with the notions of self-organization and distributed control. This can be an excuse not to accept the responsibility for decisions when firm decisions are demanded by the context. A good example here is the fact that managers are often keen to “distribute” the responsibility when there are unpopular decisions to be made—like retrenchments—but keen to centralize decisions when they are popular.

  • I’ve noticed safety professionals are frequent candidates for organization pendulum swings. One day you’re in Corporate Safety. Then an accident occurs and in the ensuing investigation a recommendation is made to move you into the field to be closer to the action. Later a new Director of Safety is appointed and she chooses to centralize Safety.
    Pendulum swings are what Robert Fritz calls Corporate Tides, the natural ebb and flow of org structure evolution.
  • Central versus distributed control changes are more about governance and audit than about workflow. No matter what control mechanism is in vogue, it should enable stigmergic behaviour, the natural forming of network clusters to share knowledge, processes, and practices.
  • In a complex adaptive system, each worker is an autonomous decision-maker, a solution not a problem. Decisions made are based on information at hand (aka tacit knowledge) and if not available, knowing who, where, how to access it. Every worker has a knowledge cluster in the network. A safety professional positioned in the field can mean quicker access but more importantly, stronger in-person interactions. This doesn’t discount a person in Head Office who has a trusting relationship from being a “go to” guy. Today’s video conferencing tools can place the Corp Safety person virtually on site in a matter of minutes.
Thanks, Sonja. Thanks, Paul.
Note: If you have any comments, I would appreciate if you would post them at safetydifferently.com.

Safety Differently

My thanks to Peter Caulfield for interviewing me and writing an article in the Journal of Commerce on a different view of safety.


Veteran Vancouver engineer and consultant Gary Wong says the safety industry needs to reexamine its goals and how to accomplish them if it wants to keep workers safe and at the same time make them productive.

Wong’s approach, called Safety Differently, is based on what he says is a more realistic take on what goes on in the workplace.

“Industry standards and practices typically evolve based on what we learn from failures,” says Wong. “But evolution in the safety industry has been slow and continues to follow the old idea that safety is only the absence of people getting hurt.”

That approach, says Wong, is based on the belief that humans must be controlled with compliance rules and procedures.

“If an accident occurs, we automatically look for the people to blame and then punish them through discipline or termination,” Wong says. “Experts today promote the idealistic goal of zero harm, so it isn’t surprising workers are confused if a safety dilemma arises.”

Safety Differently on the other hand credits workers for getting things right, which he says they do most of the time.

“Safety Differently sees people as the solution and safety as an ethical responsibility,” says Wong. “It recognizes that safety is not something that is created, but emerges out of a complex adaptive system.”

When facing an unexpected change, people will adjust their actions accordingly, he says. In most cases, their adjustment will keep them safe.

But an unexpected change can also be dangerous, and, if a tipping point is reached, an incident can happen.

“Safety Differently focuses on hidden non-linear tipping point signals and how humans sense impending danger,” Wong says.

“It boosts the capacity of people to handle their activities safely and successfully under different conditions.”

Ron Gantt, vice-president of SCM Safety Inc. in San Ramon, Calif., says there is a big difference between Safety Differently and the old way of doing safety.

“The old safety model focuses on regulations and takes an adversarial approach,” Gantt says. “Safety Differently, on the other hand, is more collaborative, with more worker participation in finding solutions that prevent accidents.”

Safety Differently is based on three principles, Gantt says.

“First, it is a forward-looking, predictive tool,” he says.

“It looks ahead to prevent accidents in the future, not backward at accidents that happened in the past. Its purpose is to build the capacity to be successful from now on and as conditions change.”

Safety Differently’s second operating principle is that people are the solution, not the problem.

“People are instinctive risk managers and they have an innate ability for creative problem-solving,” Gantt says. “Let’s trust them to do the right thing. Unfortunately, there’s not a lot of trust in the old safety model.”

Third, the people at the top of an organization should view safety as an ethical responsibility.

“They need to be curious about what their employees want and make an effort to satisfy them,” Gantt says.
Safety Differently is needed, he says, because the world is becoming more interdependent and complex and small changes can have huge effects.

Support for Safety Differently is growing, he adds.

“Many safety professionals are frustrated with the old way of doing things,” Gantt says.

At the same time, there is resistance from people and groups with a vested interest in maintaining the status quo.

“They are likely to say that the way to reduce the number of workplace injuries and deaths is to keep things the old way but to try harder,” he says.

Erik Hollnagel, a Danish academic and expert in system safety and human reliability analysis, advocates the application of "synesis" to safety. The term means the same thing as synthesis, or bringing together.

“The effort to ensure that work goes well and that the number of acceptable outcomes is as high as possible requires a unification of priorities, perspectives and practices,” says Hollnagel.

“Synesis brings together all these practices to produce outcomes that satisfy more than one priority and even reconciles multiple priorities.”

Many sectors of the economy conflate safety and quality or safety and productivity, Hollnagel says.

“We can look at a process or work situation from a safety point of view, from a quality point of view or from a productivity point of view,” Hollnagel says.

“But we should keep in mind that any individual point of view reveals only part of what is going on and that it is necessary to understand what is going on as a whole.”

Using Cynefin to publish a book

It's been some time since I last blogged on my website. It's not because I've grown tired of complexity and safety; it's mainly due to my involvement with friends to publish a book about an amazing man who dedicated over 50 years to the University of British Columbia campus. The target was achieved: The Age of Walter Gage: How One Canadian Shaped the Lives of Thousands. This particular blog is not about the book but about how Cynefin dynamics and cadence were put to good use.

When the book idea took hold in early 2016, it wasn't a surprise that we started in the Cynefin Complicated domain. We certainly did not qualify as experts in producing a book, but as "expert" engineers schooled in systems thinking, we all had a propensity to set a desired future-state target and build a project plan by working linearly backwards. We were at least cognizant that we needed the right set of talent and skills – writing, photo compilation, book editing, publication, distribution. The first milestone on the roadmap was finding a book publishing firm that would assume these activities in their entirety. Then we could manage the project in the Complicated domain using a "waterfall" approach.

I volunteered to build a companion website (open network platform) to collect stories (narrative research). My blogging efforts would focus on engaging storytellers and spreading the news about our Walter Gage book project. We literally had no clue who had stories and how many there were. All that we knew was that time was not on our side so there was an urgency to contact storytellers before life took its natural toll.

The prompt question for stories was simple: “a personal or professional experience that sheds light on how Walter Gage impacted you.” While written stories were requested, we did receive other narrative fragments – a voice recording, photos and letters.

Could I have signified the stories with triads and dyads to later search for patterns? Yes, but it would have required team education and, of course, more work (probably unappreciated) by storytellers. Instead, we chose to rely on the hired author's vast experience to read the stories and extract themes worth highlighting in the book.

While I was busy gathering stories and narrative fragments, other team members were approaching several publishers with our book idea. Although we were told our pitch was for a noble cause and commendable, nobody signed on. We learned that our "business case" did not provide sufficient ROI as a money-making opportunity.

Drat. Our path was broken. The roadmap led us to a dead end. Being resilient, we shifted into the above diagram's "Yellow loop" to reset our thinking. We decided to deploy a self-publishing strategy and search for resources who could help us make our book a reality. It also meant more work on our part. It was intriguing to observe the team's need to "self-organize." We divided into two sub-teams: Book Creation and Marketing. Was there a concern about the silo effect? Yes, but like physical silos on a farm, which are ventilated, we continued to meet often as an overall team to enable venting to take place.

Due to our lack of knowledge and practical experience, I knew our cadence would be between Cynefin Complicated and Complex domains (the “Blue loop”). Whenever a totally unexpected unintended consequence emerged, we would move into the Complex domain. With the Engineer’s disposition to immediately “fix” a problem, patience was necessary to make sense of outcomes and explore options. BTW, not all consequences were negative. One UBC grad came forth and surprised us with a major donation. Serendipity at its finest!

I introduced different software tools to the team. Some worked, some didn't. I opened a Trello board to track our progress under the two sub-teams. It was great for storing documents and having them available at a meeting with a couple of clicks. However, I ran into objections about the number of email notifications being received. I also learned that not all team members wanted the full picture; some were just happy to do their tasks. I eventually removed half the team from the board, with the balance remaining on the app to stay abreast. Chalk it up as a safe-to-fail experiment.

Our primary online communication mode was email, with all its pros and cons. "Reply to All" messages became problematic. One time we had a thread with over 72 responses. Talk about being on the Obvious/Chaotic boundary with a failure looking for a place to happen! Attachments were easily lost in the long threads. Fortunately, with Trello I was able to access them quickly and send them to members, as a separate new email of course.

“Email tag” had me thinking of introducing Slack to simplify communication but my Trello discovery led me to a “Don’t even think about it” conclusion. When navigating complexity, we can’t control human behaviour but can only influence the relationships and interactions amongst team members. In this case, I chose not to drop in Slack as a catalyst which would have certainly disrupted communication patterns but, who knows, maybe enable worse patterns to emerge.

We held two “by invitation only” project celebration events.  Planning was autonomic: Let’s issue invitations via email. After all, if you’re good with a hammer, everything looks like a nail. Hmm, if there’s a “best practice” in the Obvious domain, email tops the list.

Thankfully I was able to influence the team to go with Evite.com. Its messaging features enabled us to leverage feedback loops, a key phenomenon of complex systems. One attendee even went a step further by posting photos of the event on evite.com for everyone to enjoy. (Note to self: Use evite.com to manage the next class reunion instead of personal email account.)

We have our official book launch tomorrow, Feb 15th.  The beginning of the end. Or perhaps the end of the beginning since book promotion and marketing now ramps up. Either way, I plan to invest more time pushing the boundaries on complexity and safety, from a natural sciences perspective.

Why a Complexity-based Safety Audit makes sense

Imagine you work in a company with a good safety record. By "good", you are in the upper quartile as per the benchmarking stats in your industry. Things were rolling along nicely until this past year. There was a steep increase in failures, which has led to concerns over the safety culture.

Historically there have been 2 safety-related events, but last year there were 10. Accident investigation reports show it's not one category but several: Bodily reaction and exertion, Contact with equipment, Misuse of hazardous materials, Falls and falling objects. Fortunately there were no fatalities; most were classified as medical aids, but one resulted in a serious injury. Three medical aid injuries were from contacting moving equipment, two were related to improper tool and glove use. The serious injury was due to a worker falling off a ladder.

What upsets you is that the pre-job briefing did not identify the correct glove or the proper use of hazardous materials. You also read the near-miss incidents and heard disturbing rumours from the grapevine that some recent close calls went unreported. Something needs to be done, but what should you do?

One option is to do a safety audit. It will be highly visible and show executives and workers you mean business.  Phase 1 will consist of conducting an assessment and developing action plans to close any performance gaps. The gaps typically concentrate on strengthening safety robustness – how well practices follow safety policies, systems, standards, regulations, rules to avoid known failures. Phase 2 will implement the action plans to ensure that actions are being completed with quality and in a timely manner. A survey will gauge worker response. The safety audit project will end with a report that details the completion of the actions and observations on how the organization has responded to the implementation of the plan.

For optics reasons, you are considering hiring an external consultant with safety expertise. This expert ideally would know what to look for and, through interviews and field observations, pinpoint root causes. Action plans will be formulated to close the gaps. If done carefully, no blame will be attached to anybody. To ensure no one person or group is singled out, any subsequent compliance training and testing will be given to all employees. Assuming all goes well, you can turn the page, close the chapter, and march on assuming all is well. Or is it?

You hesitate because you’ve experienced safety audits before. Yes, there are short-term improvements (Hawthorne effect?) but eventually you noticed that people drifted back to old habits and patterns. Failure (personal injury or damage to equipment, tools, facilities) didn’t happen until years later, well after all the audit hubbub had dissipated. A bit of “what-iffing” is making you pause about going down the safety audit path again:

  1. What if the external safety consultants are trapped by their expertise because they already believe they have the solution and see  the job as implementing their solution and making it work? That is, what if they are great at using a hammer and therefore see everything as a nail, including a screw?
  2. What if the safety audit is built around a position that is the consultant’s ideal future state but not ours?
  3. What if the survey questionnaire is designed to validate what the safety consultants have seen in the past?
  4. What if front-line workers are reluctant to answer questions during interviews out of a feeling of being put on trial, a fear of being blamed, or worse, of being subjects in a perceived witch hunt?
  5. What if safety personnel, supervisors, managers, and executives are reluctant to answer questions during interviews or complete survey questionnaires for fear of being held accountable for failures under their watch?
  6. What if employees feel it's very unsettling to have someone looking over their shoulders recording field observations? What if the union complains because it's deemed a regression to the Scientific Management era (viz., Charlie Chaplin's movie 'Modern Times')?
  7. What if the performance gaps identified are measured against Safety Management System (SMS) outcomes that are difficult to quantify (e.g., All personnel must report near-miss incidents at all times)?
  8. What if we develop an action plan and during implementation realize the assumptions made about the future are wrong?
  9. What if during implementation a better solution emerges than the one recommended?
  10. What if the expenditure on a safety audit just reinforces what we know and nothing new is learned?

Is there another option besides a traditional safety audit? Yes, there is. And it's different.

A sense-making approach boosts the capacity of people and organizations to handle their activities successfully, under varying conditions. It recognizes the real world is replete with safety paradoxes and dilemmas that workers must struggle with on a daily basis. The proven methods are pragmatic and make sense of complexity in safety in order to act. The stories gathered from the workforce including contractors often go beyond safety robustness (preventing failure) and provide insights into the company’s level of safety resilience. Resilience is the ability to quickly recover after a failure, speedily implement an unanticipated opportunity arising from an event, and respond early to an alert that a major catastrophe might be looming over the horizon.

The paradigm is not that of an expert with deep knowledge of best practices in safety but that of an anthropologist informed by the historical evolution of safety practices. The Santa Fe Institute has noted that companies operate in industries which are complex adaptive systems (CAS). Safety is not a product nor a service; it is an emergent property of a complex adaptive system. For instance, safety rules enable safety to emerge, but too many rules can overwhelm workers and create confusion. If a tipping point is reached, danger emerges in the form of workers doing workarounds or deliberately ignoring rules to get work done.

Anthropologists believe answers about culture can best be found by engaging the total workforce. The sense-making consultant's role is to understand the decisions people have made. Elevating similarities and differences in behaviour can highlight what forces are at play that influence people to choose to stay within compliance boundaries or take calculated risks.

By applying complexity-based thinking, here’s how  the what-if concerns listed above are addressed.

  1. Escape expertise entrapment.
    There are no preconceived notions or solutions. As ethnographers, we record observations that describe the safety culture. Stories are easy to capture since people are born storytellers. Stories add context, can describe complex situations, and emotionally engage humans.
  2. Be mindful.
    You can only act to change the Present. Therefore, attention is placed on the current situation and not some ideal future state that may or may not materialize.
  3. Stay clear of cognitive dissonance.
    This leads to the confirmation bias — the often unconscious act of referencing only those perspectives that fuel pre-existing views.
    There are no survey questionnaires. Questions asked are simple prompts to help workers get started in sharing their stories. Stories are very effective in capturing decisions people must make dealing with unexpected varying conditions such as conflicting safety rules, lack of proper equipment, tension amongst safety, productivity, and legal compliance.
  4. Avoid confrontation.
    Front-line workers are not required to answer audit questions. They have the trust and freedom to tell any story they wish. It’s what matters most to them, not what a safety expert thinks is important and needs to interrogate.
  5. Treat everyone the same.
    Safety personnel,  supervisors, managers, executives also get to tell their stories. Their behaviours and interactions play a huge role in shaping the safety culture.  There is no “Them versus Us”; it’s anyone and everyone in the CAS.
  6. Make it easy and comfortable.
    There is minimal uneasiness with recording field observations since workers choose the topics. A story with video might show what goes wrong or what goes right. If union agents are present, they are welcome to tell their safety stories and add diversity to the narrative mosaic.
  7. Be guided by the compass, not the clock.
    Performance improvement is achieved by focusing on direction, not targeted SMS outcomes. This avoids the dilemma of workers identifying the SMS itself as a problem through their stories. Direction comes from asking: "Where do we want fewer stories like these and more stories like that?" The effectiveness of a performance improvement intervention is measured by the shift in subsequent stories told (see the sketch after this list).
  8. Choose safe-to-fail over fail-safe.
    Avoid the time and effort developing a robust fail-safe action plan and then weakening it with CYA assumptions. When dealing with uncertainty and ambiguity, probe the CAS with  safe-to-fail experiments. This is the essence behind Nudge theory, introducing small interventions to influence behaviour changes.
  9. Sail, not rail.
    Think of navigating a ship on an uncontrollable sea of complexity rather than driving a train on a controllable track of certainty. Deviation manoeuvres like tacking and jibing are expected. By designing actions to be small, the emergence of surprising consequences can be better handled. Positive serendipitous opportunities heading in the desired direction can be immediately seized. On the other hand, negative consequences are quickly dampened.
  10. Focus on what you don’t know.
    A sense-making approach opens the individual’s and thus the company’s mindset to Knowledge (known knowns) as well as Ignorance (unknowns, unknowables).  New learning comes from exploring Ignorance. By sensing different behaviour patterns that emerge from the nudges, it becomes clearer why people behave the way they do. This discovery may lead to new ways to strengthen safety robustness + build safety resilience. This is managing the evolutionary potential of the Present, one small step at a time.

If you’re tired of doing same old, same old, then it’s time to conduct an “audit” on your safety audit approach and choose to do safety differently. Click here for more thoughts on safety audits.

A Complexity-based approach to Climate Change

I live in British Columbia, a province that began implementing a Climate Action Plan in 2008. Last year citizens were invited to submit their ideas and thoughts to help an appointed Climate Leadership team refresh the plan. The message I offered was that climate change is a complex problem, not a complicated one.

You can analyze a complicated problem by breaking it down into parts, examining each piece separately, fixing it, and then putting the pieces all back together. In other words, the whole is equal to the sum of its parts.

In contrast, a complex problem cannot be reduced into parts but must be analyzed holistically because of the relationships amongst the various known and unknown pieces. The whole is greater than the sum of its parts.

The main lessons taught at colleges and universities focus on Newtonian physics, reductionism, cause & effect, and linearity. Complexity science is only 30 years old, so it's not surprising that concepts such as emergence, diversity, feedback loops, strange attractors, pattern recognition, self-organization, and non-linear thinking remain on the sidelines. Yet these phenomena of complexity are spoken of in everyday language: going viral, butterfly effect, wisdom of crowds, tipping point, serendipity, Black Swans.

We are taught how to think critically and value being competent at arguing to defend our position. We apply deductive and inductive reasoning to win our case. Sadly, little time is invested in learning how to apply abductive reasoning and explore adaptation and exaptation to evolve a complex issue.

“I think the next century will be the century of complexity.”
Stephen Hawking January 2000

We are 15 years into the century of complexity. My submission applied complexity science to today’s climate change issues:

1. Climate change is a complex issue, not a complicated one. Many years of training have steered us to analyze parts as reductionists. Think of mayonnaise. You can't break it down to analyze the ingredients. So spread holistically.

2. Stay open-minded. Delay the desire to converge and stop new information from entering. Don’t lock into some idealistic future state strategic plan. Remember, once you think you have the answer, you’re in trouble.

3. Don't be outcome-based and establish destination targets. Be direction-oriented and deliberately ambiguous to enable new possibilities to emerge.

4. Adopt a sense-making approach – make sense of the present situation in order to act upon it.

Here is the link to the Climate Leadership team’s report and 32 recommendations. The word “complexity” appears twice. Hmm.

As an engaged BC citizen, I will carry on being a skeptic in a good sense and voice my opinions when I see a hammer nailing a screw.

Safety Culture, the Movie

It’s the holiday season. One terrific way to celebrate as a family is to see a movie together. Our pick? Star Wars: the Force Awakens.
Well, that was an easy decision. The next one is harder though…what sort of experience do we want? Will it be UltraAVX, D-Box, IMAX, 3D, VIP, Dolby ATMOS or Surround sound, or standard digital? While sorting through the movie options, for some reason I began thinking about safety culture and had an epiphany. Safety culture is a movie, not a photo.

A photo would be a Star Wars poster, a single image that a designer has artistically constructed. It’s not the full story, just a teaser aimed to influence people to buy a ticket. We understand this and don’t expect to comprehend the entire picture from one poster. A safety culture survey or audit should be treated in the same fashion. All we see is a photo, a snapshot capturing a moment in time. Similar to the poster artist, a survey designer also has a preconceived idea; influence is in the form of questions. The range of questions extends from researched “best practices” to personal whim.

I believe this is a major limitation of the survey/audit/poster: it could totally miss what people actually experience as a movie. A movie introduces visual motion and audible sound to excite our human senses and release emotions and feelings. We can watch behaviours as well as the positive and negative consequences they deliver. With flow, we are able to sense operating-point movement and drift into the zone of complacency and eventual failure. A safety culture has sound that a photo cannot reveal. In a movie you can hear loud communication, quiet conversations, or the lack thereof (e.g., a cone of silence).

If we were to create “Safety Culture, the Movie”, what would we need to consider? I’ve compiled a short list. What would you add?

  • Human actor engagement
    • Actors on screen – lead characters, supporting players, cameo appearances, cast in crowds; front-line workers, supervisors, safety professionals, public at large
    • Actors behind the screen – investors, producer, director, music arranger, theatre owners, craft guilds; Board, execs, project managers, suppliers, unions
    • Actors in front of the screen – paying audience, theatre staff, movie critics; customers, safety associations, regulatory inspectors
  • Story line
    • Safety culture is one big story
    • Safety culture movie is neverending
    • Within the one big story are several side stories, episodes, subplots
  • Relationships between characters and roles played
    • Heroes, villains, maidens in distress, comic relief, clones
    • Contact is continuous and relationships can shift over time (compare to a snapshot audit focusing on one scene at a particular time slot)
    • What seems in the beginning to be independent interactions are often interconnected (“Luke…I am your father”) and may lead to a dilemma or paradox later
  • Theme
    • Overt message of the safety culture movie – Good triumphs Evil? Might makes Right? Focus on what goes wrong? Honesty? Respect?
    • Hidden messages – Resistance is futile? Pay attention to outliers? Do what I say, not what I do? It’s all about the optics
    • Significance of myths, legends, rituals in the safety culture – the Dark side, Jedi order, zero harm workplace, behaviour-based safety
  • Critique
    • What can we learn if our movie is scored on a Flixster (Rotten Tomatoes) scale out of 100?
      • What does a score of 30, 65, 95 tell us about performance? Success?
      • We can learn from each critic and fan comment standing on its own rather than dealing with a mathematical average
    • Feedback will influence and shape the ongoing movie
      • Too dark, not enough SFX, too many safety rules, not enough communication
    • Artifacts
      • A poster provides a few while a movie contains numerous for the discerning eye
      • Besides the artifacts displayed on screen, many are revealed in the narratives actors share with each other off screen during lunch hours, breaks, commutes
      • What might we further learn by closely examining artifacts? For instance, what’s the meaning behind…
        • Leia’s new hairdo (a new safety compliance policy)?
        • The short, funny looking alien standing next to R2D2 and C3PO (a safety watcher?)
        • Why Han Solo can’t be separated from his blaster (just following a PPE rule)?
        • Rey using some kind of staff weapon, perhaps similar to the staffs used by General Grievous’ body guards in Episode III (is it SOP certified)?
        • The star destroyers eliminating the control towers which caused them so many problems in the Battle of Endor (implementation of an accident investigation recommendation)?
        • Improvement in the X-Wing fighters (a safety by design change after an action review with surviving pilots?)

If you'd like to share your thoughts and comments, I suggest entering them at the safetydifferently.com website.

Season’s greetings, everyone! And may the Force be with you.