Category Archives: Strategy

7 Implications of Complexity for Safety

One of my favourite articles is The Complexity of Failure, written by Sidney Dekker, Paul Cilliers, and Jan-Hendrik Hofmeyr. In this posting I’d like to shed more light on the contributions of Paul Cilliers.

Professor Cilliers was a pioneering thinker on complexity, working across both the humanities and the sciences. In 1998 he published Complexity and Postmodernism: Understanding Complex Systems, which offered implications of complexity theory for our understanding of biological and social systems. Sadly, he passed away suddenly in 2011, at the much too early age of 55, from a massive brain hemorrhage.

My spark for writing comes from a blog recently penned by a complexity colleague, Sonja Blignaut. I am following her spadework by exploring the implications of complexity for safety. Cilliers’ original text is in italics.

  1. Since the nature of a complex organization is determined by the interaction between its members, relationships are fundamental. This does not mean that everybody must be nice to each other; on the contrary. For example, for self-organization to take place, some form of competition is a requirement (Cilliers, 1998: 94-5). The point is merely that things happen during interaction, not in isolation.
  • Because humans are natural storytellers, stories are a widely used form of interaction among fellow workers, supervisors, management, and executives. We need to pay attention to the stories told about daily experiences since they provide a strong signal of the present safety culture.
  • We should devote less time to trying to change people and their behaviour and more time to building relationships. Despite what psychometric profiling offers, humans are too emotional and unpredictable to figure out accurately. In my case, I am not a trained psychologist, so my dabbling in how people tick might be dangerous, on the edge of practising pseudoscience. I prefer to stay with the natural sciences (viz., physics, biology) and the understanding of phenomena in Nature that have evolved over thousands of years.
  • If two workers are in conflict, don’t demand that they both smarten up. Instead, change the nature of the relationship so that their interactions are different or even extinguished. Simple examples are changing the task or moving one worker to another crew.
  • Interactions go beyond people. Non-human agents include machines, ideas (rules, policies, regulations), and events (meetings, incidents). A worker following a safety rule can create a condition that enables safety to emerge. Too many safety rules can overwhelm and frustrate a worker, enabling danger to emerge.

2. Complex organizations are open systems. This means that a great deal of energy and information flows through them, and that a stable state is not desirable.

  • A company’s safety management system (SMS) is a closed system. In the idealistic SMS world, stability, certainty, and predictability are the norms. If a deviation occurs, it needs to be controlled and managed. Within the fixed boundaries, we apply reductionist thinking and place information into a number of safety categories, typically ranging from 4 to 10. An organizational metaphor is sorting solid LEGO bricks under different labels.
    In an open system, it’s different. Think of boundaryless fog and irreducible mayonnaise. If you outsource to a contractor or partner with an external supplier, how open is your SMS? Will you insist on their compliance or draw borders between firms? Do their SMS safety categories blend with yours?
  • All organisations are complex adaptive systems. Adaptation means not lagging behind and plunging into chaotic fire-fighting; it means looking ahead, not only trying to avoid things going wrong but also trying to ensure that they go right. In the field, workers confronted by unexpected, varying conditions will adjust and adapt their performance to enable success (and safety) to emerge.
  • When field adjustments occasionally fail, the result is new learning to be shared as a story. This is also why a stable state is not desirable: in a stable state, very little learning is necessary. You just repeat what you already know.

3. Being open more importantly also means that the boundaries of the organization are not clearly defined. Statements of “mission” and “vision” are often attempts to define the borders, and may work to the detriment of the organization if taken too literally. A vital organization interacts with the environment and other organizations. This may (or may not) lead to big changes in the way the organization understands itself. In short, no organization can be understood independently of its context.

  • Mission and Vision statements are helpful in setting direction. A vector, North Arrow, if you like. They become detrimental if communicated as some idealistic future end state the organization must achieve.
  • Being open is different than “thinking out of the box” because there really is no box to start with. It’s a contextual web of relationships with other organizations. It’s also foggy, because some organizations are hidden. You can impact organizations that you don’t even know about and, conversely, their unbeknownst actions can constrain you.
    The smart play is to be mindful by staying focused on the Present and monitoring desirable and undesirable outcomes as they emerge.

4. Along with the context, the history of an organization co-determines its nature. Two similar-looking organizations with different histories are not the same. Such histories do not consist of the recounting of a number of specific, significant events. The history of an organization is contained in all the individual little interactions that take place all the time, distributed throughout the system.

  • Don’t think about creating a new safety mission or vision by starting with a blank page, a clean sheet, a greenfield.  The organization has history that cannot be erased. The Past should be honoured, not forgotten.
  • Conduct an ongoing challenge of best practices and Life-saving rules. Remember the historical reasons why these were first installed. Then question if these reasons remain valid.
  • Be aware of the part History plays when rolling out a safety initiative across an organization.
    • If it’s something that everyone genuinely agrees to and wants, then just clone & replicate. Aggregation is the corollary of reductionism and it is the common approach to both scaling and integration. Liken it to putting things into org boxes and then fitting them together like a jigsaw. The whole is equal to the sum of its parts.
    • But what if the initiative is controversial? Concerns are voiced, pushback is felt, resistance is real. Then we’re facing complexity, where the properties of the safety system as a whole are not the sum of the parts but are unique to the system as a whole.
      If we want to scale capabilities we can’t just add them together. We need to pay attention to history and understand reactions like “It won’t work here”, “We tried that before”, “Oh no! Not again!”
      The change method is not to clone & replicate.  Start by honouring local context. Then decompose into stories to make sense of the culture. Discover what attracts people to do what they do. Recombine to create a mutually coherent solution.

5. Unpredictable and novel characteristics may emerge from an organization. These may or may not be desirable, but they are not by definition an indication of malfunctioning. For example, a totally unexpected loss of interest in a well-established product may emerge. Management may not understand what caused it, but it should not be surprising that such things are possible. Novel features can, on the other hand, be extremely beneficial. They should not be suppressed because they were not anticipated.

  • In the world of safety, failures are unpredictable and undesirable. They emerge when a hidden tipping point is reached.
    As part of an Emergency Preparedness plan, recovery crews with well-defined roles are designated. Their job is to fix the system as quickly as possible and safely restore it to its previous stable state.
  • Serendipity is an unintended but highly desirable consequence. This implies an organization should have an Opportunity crew ready to activate. Their job is to explore the safety opportunity, discover new patterns which may lead to a new solution, and exploit their benefits.
    At a tactical level, the new solution may be a better way of achieving the Mission and Vision. In the same direction but a different path or route.
    At a strategic level, the huge implication is that new opportunity may lead to a better future state than the existing carefully crafted, well-intentioned one. Decision-makers are faced with a dilemma: do we stay the course or will we adapt and change our vector?
  • Avoid introducing novel safety initiatives as big events kicked off with a major announcement. These tend to breed cynicism, especially if the company history includes past blemished efforts. Novelty means you honestly don’t know what the outcomes will be, since it will be a new experience for those you know (identified stakeholders) and those you don’t know in the foggy network.
    Launch as a small experiment.
    If desirable consequences are observed, accelerate the impact by widening the scope.
    If unintended negative consequences emerge, quickly dampen the impact or even shut it down.
    As noted in (2), constructively de-stabilize the system in order to learn.
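The launch-amplify-dampen loop above is essentially an algorithm. Below is a minimal, self-contained Python sketch of it; the random signal and the ±0.2 thresholds are invented purely for illustration.

```python
import random

# Toy model of a safe-to-fail probe: launch small, widen scope on desirable
# signals, dampen on negative ones, shut down if damping keeps failing.
def observe() -> float:
    """Return a signal in [-1, 1]: negative = undesirable, positive = desirable."""
    return random.uniform(-1, 1)

def run_probe(cycles: int = 6) -> None:
    scope = 1  # start as a small experiment
    for cycle in range(1, cycles + 1):
        signal = observe()
        if signal > 0.2:
            scope += 1                 # desirable consequences: widen the scope
            action = "amplify"
        elif signal < -0.2:
            scope = max(scope - 1, 0)  # negative consequences: dampen quickly
            action = "dampen"
        else:
            action = "hold"            # ambiguous signal: keep observing
        print(f"cycle {cycle}: signal={signal:+.2f} -> {action}, scope={scope}")
        if scope == 0:
            print("persistent negative signals: shut the experiment down")
            break

run_probe()
```

Either branch constructively de-stabilizes the system, so something is learned on every cycle.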

6. Because of the nonlinearity of the interactions, small causes can have large effects. The reverse is, of course, also true. The point is that the magnitude of the outcome is not only determined by the size of the cause, but also by the context and by the history of the system. This is another way of saying that we should be prepared for the unexpected. It also implies that we have to be very careful. Something we may think to be insignificant (a casual remark, a joke, a tone of voice) may change everything. Conversely, the grand five-year plan, the result of huge effort, may retrospectively turn out to be meaningless. This is not an argument against proper planning; we have to plan. The point is just that we cannot predict the outcome of a certain cause with absolute clarity.

  • The Butterfly effect is a phenomenon of a complex adaptive system. I’m sure many blog writers like myself are hoping that our safetydifferently cause will go viral, “cross the chasm”, and be adopted by the majority. Sonja in her blog refers to a small rudder that determines the direction of even the largest ship. Perhaps that’s what we are: trimtabs! (A small numeric sketch after this list shows how tiny differences snowball.)
  • On the negative side, think of a time when an elected official or CEO made a casual remark about a safety disaster only to have it go viral and backfire. During the 2010 Deepwater Horizon disaster, then-CEO Tony Hayward called the amount of oil and dispersant “relatively tiny” in comparison with the “very big ocean”. Hayward’s remark has left him a highly controversial public figure.
  • Question: Could a long-term safety plan to progress through the linear stages of a Safety Culture Maturity model be a candidate as a meaningless five-year plan?
    If a company conducts an employee early retirement or buy-out program, does it regress and fall down a stage or two?
    If a company deploys external contractors with high turnover, does it ever get off the bottom rung?
    Instead of a linear progression model, stay in the Present and listen to the stories internal and external workers are telling. With the safety Vision in mind, ask what we can do to hear more stories like these and fewer stories like those.
    As the stories change, so will the safety culture.  Proper planning is launching small experiments to shape the culture.
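To make the butterfly effect concrete (as flagged in the first bullet above), here is a tiny logistic-map sketch, my own illustration rather than anything of Cilliers’: two trajectories that start one part in a million apart soon bear no resemblance to each other.

```python
# Logistic map at r = 4.0, a standard chaotic regime: iterate two starting
# points that differ by 0.000001 and watch the gap grow to order one.
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1 - x)

a, b = 0.400000, 0.400001
for step in range(1, 41):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.6f}")
```

A tiny cause, an unmeasurably small difference in starting conditions, produces a completely different outcome. Context and history do the rest.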

7. Complex organizations cannot thrive when there is too much central control. This certainly does not imply that there should be no control, but rather that control should be distributed throughout the system. One should not go overboard with the notions of self-organization and distributed control. This can be an excuse not to accept the responsibility for decisions when firm decisions are demanded by the context. A good example here is the fact that managers are often keen to “distribute” the responsibility when there are unpopular decisions to be made—like retrenchments—but keen to centralize decisions when they are popular.

  • I’ve noticed safety professionals are frequent candidates for organization pendulum swings. One day you’re in Corporate Safety. Then an accident occurs and in the ensuing investigation a recommendation is made to move you into the field to be closer to the action. Later a new Director of Safety is appointed and she chooses to centralize Safety.
    Pendulum swings are what Robert Fritz calls Corporate Tides, the natural ebb and flow of org structure evolution.
  • Central vs. distributed control changes are more about governance and audit than about workflow. No matter what control mechanism is in vogue, it should enable stigmergic behaviour: the natural forming of network clusters to share knowledge, processes, and practices.
  • In a complex adaptive system, each worker is an autonomous decision-maker, a solution rather than a problem. Decisions are based on information at hand (aka tacit knowledge) and, when that isn’t available, on knowing who, where, and how to access it. Every worker has a knowledge cluster in the network. A safety professional positioned in the field can mean quicker access but, more importantly, stronger in-person interactions. This doesn’t discount a person in Head Office who has a trusting relationship from being a “go to” guy. Today’s video conferencing tools can place the Corporate Safety person virtually on site in a matter of minutes.
Thanks, Sonja. Thanks, Paul.
Note: If you have any comments, I would appreciate if you would post them at safetydifferently.com.


Safety Differently

My thanks to Peter Caulfield for interviewing me and writing an article in the Journal of Commerce on a different view of safety.


Veteran Vancouver engineer and consultant Gary Wong says the safety industry needs to reexamine its goals and how to accomplish them if it wants to keep workers safe and at the same time make them productive.

Wong’s approach, called Safety Differently, is based on what he says is a more realistic take on what goes on in the workplace.

“Industry standards and practices typically evolve based on what we learn from failures,” says Wong. “But evolution in the safety industry has been slow and continues to follow the old idea that safety is only the absence of people getting hurt.”

That approach, says Wong, is based on the belief that humans must be controlled with compliance rules and procedures.

“If an accident occurs, we automatically look for the people to blame and then punish them through discipline or termination,” Wong says. “Experts today promote the idealistic goal of zero harm, so it isn’t surprising workers are confused if a safety dilemma arises.”

Safety Differently, on the other hand, credits workers for getting things right, which he says they do most of the time.

“Safety Differently sees people as the solution and safety as an ethical responsibility,” says Wong. “It recognizes that safety is not something that is created, but emerges out of a complex adaptive system.”

When facing an unexpected change, people will adjust their actions accordingly, he says. In most cases, their adjustment will keep them safe.

But an unexpected change can also be dangerous, and, if a tipping point is reached, an incident can happen.

“Safety Differently focuses on hidden non-linear tipping point signals and how humans sense impending danger,” Wong says.

“It boosts the capacity of people to handle their activities safely and successfully under different conditions.”

Ron Gantt, vice-president of SCM Safety Inc. in San Ramon, Calif., says there is a big difference between Safety Differently and the old way of doing safety.

“The old safety model focuses on regulations and takes an adversarial approach,” Gantt says. “Safety Differently, on the other hand, is more collaborative, with more worker participation in finding solutions that prevent accidents.”

Safety Differently is based on three principles, Gantt says.

“First, it is a forward-looking, predictive tool,” he says.

“It looks ahead to prevent accidents in the future, not backward at accidents that happened in the past. Its purpose is to build the capacity to be successful from now on and as conditions change.”

Safety Differently’s second operating principle is that people are the solution, not the problem.

“People are instinctive risk managers and they have an innate ability for creative problem-solving,” Gantt says. “Let’s trust them to do the right thing. Unfortunately, there’s not a lot of trust in the old safety model.”

Third, the people at the top of an organization should view safety as an ethical responsibility.

“They need to be curious about what their employees want and make an effort to satisfy them,” Gantt says.

Safety Differently is needed, he says, because the world is becoming more interdependent and complex and small changes can have huge effects.

Support for Safety Differently is growing, he adds.

“Many safety professionals are frustrated with the old way of doing things,” Gantt says.

At the same time, there is resistance from people and groups with a vested interest in maintaining the status quo.

“They are likely to say that the way to reduce the number of workplace injuries and deaths is to keep things the old way but to try harder,” he says.

Erik Hollnagel, a Danish academic and expert in system safety and human reliability analysis, advocates the application of “synesis to safety.” The term means the same thing as synthesis, or bringing together.

“The effort to ensure that work goes well and that the number of acceptable outcomes is as high as possible requires a unification of priorities, perspectives and practices,” says Hollnagel.

“Synesis brings together all these practices to produce outcomes that satisfy more than one priority and even reconciles multiple priorities.”

Many sectors of the economy conflate safety and quality or safety and productivity, Hollnagel says.

“We can look at a process or work situation from a safety point of view, from a quality point of view or from a productivity point of view,” Hollnagel says.

“But we should keep in mind that any individual point of view reveals only part of what is going on and that it is necessary to understand what is going on as a whole.”

Using Cynefin to publish a book

It’s been some time since I last blogged on my website. It’s not because I’ve grown tired of complexity and safety; it’s mainly due to my involvement with friends in publishing a book about an amazing man who dedicated over 50 years to the University of British Columbia campus. The target was achieved: The Age of Walter Gage: How One Canadian Shaped the Lives of Thousands. This particular blog is not about the book but about how Cynefin dynamics and cadence were put to good use.

When the book idea took hold in early 2016, it wasn’t a surprise that we started in the Cynefin Complicated domain. We certainly did not qualify as experts in producing a book, but as “expert” engineers schooled in systems thinking, we all had a propensity to set a desired future-state target and build a project plan by linearly working backwards. We at least were cognizant that we needed the right set of talent and skills: writing, photo compilation, book editing, publication, distribution. The first milestone on the roadmap was finding a book publishing firm that would assume these activities in their entirety. Then we could manage the project in the Complicated domain using a “waterfall” approach.

I volunteered to build a companion website (open network platform) to collect stories (narrative research). My blogging efforts would focus on engaging storytellers and spreading the news about our Walter Gage book project. We literally had no clue who had stories and how many there were. All that we knew was that time was not on our side so there was an urgency to contact storytellers before life took its natural toll.

The prompt question for stories was simple: “a personal or professional experience that sheds light on how Walter Gage impacted you.” While written stories were requested, we did receive other narrative fragments – a voice recording, photos and letters.

Could I have signified the stories with triads and dyads to later search for patterns? Yes, but it would have required team education and, of course, more (probably unappreciated) work by storytellers. Instead, we chose to rely on the hired author’s vast experience to read the stories and extract themes worth highlighting in the book.

While I was busy gathering stories and narrative fragments, other team members were approaching several publishers with our book idea. While we were told our pitch was for a noble and commendable cause, nobody signed on. We learned that our “business case” did not provide sufficient ROI as a money-making opportunity.

Drat. Our path was broken. The roadmap led us to a dead end. Being resilient, we shifted into the Cynefin “Yellow loop” to reset our thinking. We decided to deploy a self-publishing strategy and search for resources who could help us make our book a reality. It also meant more work on our part. It was intriguing to observe the team’s need to “self-organize.” We divided into two sub-teams: Book Creation and Marketing. Was there a concern about the silo effect? Yes, but like physical silos on a farm, which are ventilated, we continued to meet often as an overall team to let venting take place.

Due to our lack of knowledge and practical experience, I knew our cadence would be between the Cynefin Complicated and Complex domains (the “Blue loop”). Whenever a totally unexpected, unintended consequence emerged, we would move into the Complex domain. Given the Engineer’s disposition to immediately “fix” a problem, patience was necessary to make sense of outcomes and explore options. BTW, not all consequences were negative. One UBC grad came forth and surprised us with a major donation. Serendipity at its finest!

I introduced different software tools to the team. Some worked, some didn’t. I opened a Trello board to track our progress across the two sub-teams. It was great for storing documents and having them available at a meeting with a couple of clicks. However, I ran into objections about the number of email notifications being received. I also learned that not all team members wanted the full picture; some were happy just to do their tasks. I eventually removed half the team from the board, with the balance remaining on the app to stay abreast. Chalk it up as a safe-to-fail experiment.

Our primary online communication mode was email, with all its pros and cons. “Reply to All” messages became problematic. One time we had a thread with over 72 responses. Talk about being on the Obvious/Chaotic boundary with a failure looking for a place to happen! Attachments were easily lost in the long threads. Fortunately, with Trello I was able to retrieve them quickly and send them to members, as a separate new email of course.

“Email tag” had me thinking of introducing Slack to simplify communication, but my Trello discovery led me to a “Don’t even think about it” conclusion. When navigating complexity, we can’t control human behaviour; we can only influence the relationships and interactions amongst team members. In this case, I chose not to drop in Slack as a catalyst: it would certainly have disrupted communication patterns but, who knows, maybe enabled worse patterns to emerge.

We held two “by invitation only” project celebration events.  Planning was autonomic: Let’s issue invitations via email. After all, if you’re good with a hammer, everything looks like a nail. Hmm, if there’s a “best practice” in the Obvious domain, email tops the list.

Thankfully I was able to influence the team to go with Evite.com. Its messaging features enabled us to leverage feedback loops, a key phenomenon of complex systems. One attendee even went a step further by posting photos of the event on Evite.com for everyone to enjoy. (Note to self: use Evite.com to manage the next class reunion instead of a personal email account.)

We have our official book launch tomorrow, Feb 15th. The beginning of the end. Or perhaps the end of the beginning, since book promotion and marketing now ramp up. Either way, I plan to invest more time pushing the boundaries of complexity and safety, from a natural sciences perspective.


Why a Complexity-based Safety Audit makes sense

Imagine you work in a company with a good safety record. By “good”, you are in the upper quartile of the benchmarking stats for your industry. Things were rolling along nicely until this past year, when a steep increase in failures led to concerns over the safety culture.

Historically there had been two safety-related events, but last year there were 10. Accident investigation reports show it’s not one category but several: Bodily reaction and exertion, Contact with equipment, Misuse of hazardous materials, Falls and falling objects. Fortunately there were no fatalities; most events were classified as medical aids, but one resulted in a serious injury. Three medical-aid injuries were from contacting moving equipment, and two were related to improper tool and glove use. The serious injury was due to a worker falling off a ladder.

What upsets you is that the pre-job briefing did not identify the correct glove or the proper use of hazardous materials. You have also read the near-miss incidents and heard disturbing rumours through the grapevine that some recent close calls went unreported. Something needs to be done, but what should you do?

One option is to do a safety audit. It will be highly visible and show executives and workers you mean business.  Phase 1 will consist of conducting an assessment and developing action plans to close any performance gaps. The gaps typically concentrate on strengthening safety robustness – how well practices follow safety policies, systems, standards, regulations, rules to avoid known failures. Phase 2 will implement the action plans to ensure that actions are being completed with quality and in a timely manner. A survey will gauge worker response. The safety audit project will end with a report that details the completion of the actions and observations on how the organization has responded to the implementation of the plan.

For optics reasons, you are considering hiring an external consultant with safety expertise. This expert ideally would know what to look for and, through interviews and field observations, pinpoint root causes. Action plans will be formulated to close the gaps. If done carefully, no blame will be attached to anybody. To ensure no one person or group is singled out, any subsequent compliance training and testing will be given to all employees. Assuming all goes well, you can turn the page, close the chapter, and march on assuming all is well. Or is it?

You hesitate because you’ve experienced safety audits before. Yes, there are short-term improvements (Hawthorne effect?) but eventually you noticed that people drifted back to old habits and patterns. Failure (personal injury or damage to equipment, tools, facilities) didn’t happen until years later, well after all the audit hubbub had dissipated. A bit of “what-iffing” is making you pause about going down the safety audit path again:

  1. What if the external safety consultants are trapped by their expertise because they already believe they have the solution and see  the job as implementing their solution and making it work? That is, what if they are great at using a hammer and therefore see everything as a nail, including a screw?
  2. What if the safety audit is built around a position that is the consultant’s ideal future state but not ours?
  3. What if the survey questionnaire is designed to validate what the safety consultants have seen in the past?
  4. What if front-line workers are reluctant to answer questions during interviews for feelings of being put on trial, fear of being blamed, or worse, subjects in a perceived witch hunt?
  5. What if safety personnel,  supervisors, managers, executives are reluctant to answer questions during interviews or complete survey questionnaires for the fear of being held accountable for failures under their watch?
  6. What if employees feel it’s very unsettling to have someone looking over their shoulders recording field observations? What if the union complains because it’s deemed a regression to the Scientific Management era (viz., Charlie Chaplin’s movie ‘Modern Times’)?
  7. What if the performance gaps identified are measured against Safety Management System (SMS) outcomes that are difficult to quantify (e.g., All personnel must report near-miss incidents at all times)?
  8. What if we develop an action plan and during implementation realize the assumptions made about the future are wrong?
  9. What if during implementation a better solution emerges than the one recommended?
  10. What if the expenditure on a safety audit just reinforces what we know and nothing new is learned?

Is there another option besides a traditional safety audit? Yes, there is. And it’s different.

A sense-making approach boosts the capacity of people and organizations to handle their activities successfully, under varying conditions. It recognizes the real world is replete with safety paradoxes and dilemmas that workers must struggle with on a daily basis. The proven methods are pragmatic and make sense of complexity in safety in order to act. The stories gathered from the workforce including contractors often go beyond safety robustness (preventing failure) and provide insights into the company’s level of safety resilience. Resilience is the ability to quickly recover after a failure, speedily implement an unanticipated opportunity arising from an event, and respond early to an alert that a major catastrophe might be looming over the horizon.

The paradigm is not that of an expert with deep knowledge of best practices in safety but that of an anthropologist informed by the historical evolution of safety practices. The Santa Fe Institute has noted that companies operate in industries which are complex adaptive systems (CAS). Safety is not a product nor a service; it is an emergent property of a complex adaptive system. For instance, safety rules enable safety to emerge, but too many rules can overwhelm workers and create confusion. If a tipping point is reached, danger emerges in the form of workers doing workarounds or deliberately ignoring rules to get work done.

Anthropologists believe culture answers can best be found by engaging the total workforce. The sense-making consultant’s role is to understand the decisions people have made. Elevating behaviour similarities and differences can highlight what forces are at play that influence people to choose to stay within compliance boundaries or take calculated risks.

By applying complexity-based thinking, here’s how  the what-if concerns listed above are addressed.

  1. Escape expertise entrapment.
    There are no preconceived notions or solutions. As ethnographers, we record observations that describe the safety culture. Stories are easy to capture since people are born storytellers. Stories add context, can describe complex situations, and emotionally engage humans.
  2. Be mindful.
    You can only act to change the Present. Therefore, attention is placed on the current situation and not some ideal future state that may or may not materialize.
  3. Stay clear of cognitive dissonance.
    This leads to the confirmation bias — the often unconscious act of referencing only those perspectives that fuel pre-existing views.
    There are no survey questionnaires. Questions asked are simple prompts to help workers get started in sharing their stories. Stories are very effective in capturing decisions people must make dealing with unexpected varying conditions such as conflicting safety rules, lack of proper equipment, tension amongst safety, productivity, and legal compliance.
  4. Avoid confrontation.
    Front-line workers are not required to answer audit questions. They have the trust and freedom to tell any story they wish. It’s what matters most to them, not what a safety expert thinks is important and needs to interrogate.
  5. Treat everyone the same.
    Safety personnel,  supervisors, managers, executives also get to tell their stories. Their behaviours and interactions play a huge role in shaping the safety culture.  There is no “Them versus Us”; it’s anyone and everyone in the CAS.
  6. Make it easy and comfortable.
    There is minimal uneasiness with recording field observations since workers choose the topics.  A story with video might be showing what goes wrong or what goes right.  If union agents are present, they are welcomed to tell their safety stories and add diversity to  the narrative mosaic.
  7. Be guided by the compass, not the clock.
    Performance improvement is achieved by focusing on direction, not targeted SMS outcomes. This avoids the dilemma that arises when workers, through their stories, identify the SMS itself as a problem. Direction comes from asking: “Where do we want fewer stories like these and more stories like that?” The effectiveness of a performance improvement intervention is measured by the shift in subsequent stories told (a toy sketch after this list shows one way to measure such a shift).
  8. Choose safe-to-fail over fail-safe.
    Avoid the time and effort developing a robust fail-safe action plan and then weakening it with CYA assumptions. When dealing with uncertainty and ambiguity, probe the CAS with  safe-to-fail experiments. This is the essence behind Nudge theory, introducing small interventions to influence behaviour changes.
  9. Sail, not rail.
    Think of navigating a ship on an uncontrollable sea of complexity rather than driving a train on a controllable track of certainty. Deviation manoeuvres like tacking and jibing are expected. By designing actions to be small, the emergence of surprising consequences can be better handled. Positive serendipitous opportunities heading in the desired direction can be immediately seized. On the other hand, negative consequences are quickly dampened.
  10. Focus on what you don’t know.
    A sense-making approach opens the individual’s and thus the company’s mindset to Knowledge (known knowns) as well as Ignorance (unknowns, unknowables).  New learning comes from exploring Ignorance. By sensing different behaviour patterns that emerge from the nudges, it becomes clearer why people behave the way they do. This discovery may lead to new ways to strengthen safety robustness + build safety resilience. This is managing the evolutionary potential of the Present, one small step at a time.
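As a rough sketch of point 7’s “more stories like these, fewer like those” measure, gathered stories could be tagged and the mix compared between listening rounds. The tags and counts below are entirely hypothetical:

```python
from collections import Counter

def story_mix(tags):
    """Proportion of each story tag within one listening round."""
    counts = Counter(tags)
    total = sum(counts.values())
    return {tag: count / total for tag, count in counts.items()}

# Hypothetical tags assigned to stories in two successive rounds.
before = ["workaround", "near-miss", "workaround", "good-catch", "near-miss"]
after  = ["good-catch", "near-miss", "good-catch", "good-catch", "workaround"]

for label, tags in (("before", before), ("after", after)):
    mix = {tag: round(share, 2) for tag, share in story_mix(tags).items()}
    print(label, mix)
# A rising share of "good-catch" stories signals movement in the desired direction.
```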

If you’re tired of doing same old, same old, then it’s time to conduct an “audit” on your safety audit approach and choose to do safety differently. Click here for more thoughts on safety audits.

A Complexity-based approach to Climate Change

I live in British Columbia, a province that began implementing a Climate Action Plan in 2008. Last year citizens were invited to submit their ideas and thoughts to help an appointed Climate Leadership team refresh the plan. The message I offered was that climate change is a complex problem, not a complicated one.

You can analyze a complicated problem by breaking it down into parts, examining each piece separately, fixing it, and then putting the pieces all back together. In other words, the whole is equal to the sum of its parts.

In contrast, a complex problem cannot be reduced into parts but must be analyzed holistically because of the relationships amongst the various known and unknown pieces. The whole is greater than the sum of its parts.

The main lessons taught at colleges and universities focus on Newtonian physics, reductionism, cause & effect, and linearity. Complexity science is only 30 years old, so it’s not surprising that concepts such as emergence, diversity, feedback loops, strange attractors, pattern recognition, self-organization, and non-linearity remain on the sidelines. Yet these phenomena of complexity are spoken of in everyday language: going viral, butterfly effect, wisdom of crowds, tipping point, serendipity, Black Swans.

We are taught how to think critically and to value being competent at arguing to defend our position. We apply deductive and inductive reasoning to win our case. Sadly, little time is invested in learning how to apply abductive reasoning and explore adaptation and exaptation to evolve a complex issue.

“I think the next century will be the century of complexity.”
Stephen Hawking January 2000

We are 15 years into the century of complexity. My submission applied complexity science to today’s climate change issues:

1. Climate change is a complex issue, not a complicated one. Our many years of training have steered us to analyze parts as reductionists. Think of mayonnaise: you can’t break it down to analyze the ingredients, so spread it holistically.

2. Stay open-minded. Delay the desire to converge and stop new information from entering. Don’t lock into some idealistic future state strategic plan. Remember, once you think you have the answer, you’re in trouble.

3. Don’t be outcome-based and establish destination targets. Be direction-oriented and deliberately ambiguous to enable new possibilities to emerge.

4. Adopt a sense-making approach – make sense of the present situation in order to act upon it.

Here is the link to the Climate Leadership team’s report and 32 recommendations. The word “complexity” appears twice. Hmm.

As an engaged BC citizen, I will carry on being a skeptic in a good sense and voice my opinions when I see a hammer nailing a screw.

Danger was the safest thing in the world if you went about it right

I am now contributing safety thoughts and ideas on safetydifferently.com. Here is a reprint of my initial posting. If you wish to add a comment, I suggest you first read the other comments at safetydifferently.com and then include yours at the end to join the conversation.

Danger was the safest thing in the world if you went about it right

This seemingly paradoxical statement was penned by Annie Dillard. She isn’t a safety professional or a line manager steeped in safety experiences. Annie is a writer who, in her book The Writing Life, became fascinated by a stunt pilot, Dave Rahm.

“The air show announcer hushed. He had been squawking all day, and now he quit. The crowd stilled. Even the children watched dumbstruck as the slow, black biplane buzzed its way around the air. Rahm made beauty with his whole body; it was pure pattern, and you could watch it happen. The plane moved every way a line can move, and it controlled three dimensions, so the line carved massive and subtle slits in the air like sculptures. The plane looped the loop, seeming to arch its back like a gymnast; it stalled, dropped, and spun out of it climbing; it spiraled and knifed west on one side’s wings and back east on another; it turned cartwheels, which must be physically impossible; it played with its own line like a cat with yarn.”

When Rahm wasn’t entertaining the audience on the ground, he was entertaining students as a geology professor at Western Washington State College. His fame for “doing it right” in aerobatics led King Hussein to recruit him to teach the art and science to the Royal Jordanian stunt flying team. While in Jordan performing a maneuver, Rahm and his plane plummeted to the ground and burst into flames. The royal family and Rahm’s wife and son were watching. Dave Rahm was killed instantly.

After years and years of doing it right, something went wrong for Dave Rahm. How could this have happened? How can danger be the safest thing? Let’s turn our attention to Resilience Engineering and the concept of Emergent Systems. By viewing safety as an emergent property of a complex adaptive system, Dillard’s statement begins to make sense.

Clearly a stunt pilot pushes the envelope by taking calculated risks. He gets the job done, which is to thrill the audience below. Rahm’s maneuver called the “headache” was startling, as the plane stalled and spun towards earth, seemingly out of control. He then adjusted his performance to the varying conditions to bring the plane safely under control. He wasn’t preoccupied with what to avoid and what not to do. He knew in his mind what was the right thing to do.

We can apply Richard Cook’s modified Rasmussen diagram to characterize this deliberate moving of the operating point towards failure while taking action to pull back from the edge. As the operating point moves closer to failure, conditions change, enabling danger as a system property to emerge. To Annie Dillard, this aggressive heading-in, pulling-back action was how danger “was the safest thing in the world if you went about it right.”
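A toy simulation can make this operating-point dynamic tangible. The sketch below is my own illustration with invented numbers, not Cook’s model: production pressure drifts the operating point toward the failure boundary, and a skilled operator usually, but not always, senses the shrinking margin and pulls back.

```python
import random

def run_show(steps: int = 200, sense: float = 0.8,
             miss_chance: float = 0.05, seed: int = 42) -> str:
    """Random walk of an operating point between 0.0 (safe) and 1.0 (failure)."""
    random.seed(seed)
    op = 0.5
    for t in range(steps):
        op += random.uniform(0.0, 0.08)       # production pressure pushes outward
        if op >= 1.0:
            return f"step {t}: tipping point crossed, failure emerges"
        if op > sense and random.random() > miss_chance:
            op -= random.uniform(0.1, 0.3)    # sensed danger: pull back from the edge
    return "show complete: danger handled by going about it right"

print(run_show())
```

Most runs end safely; a rare run, after many flawless pull-backs, crosses the boundary. That is the Rahm story in miniature.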

“Rahm did everything his plane could do: tailspins, four-point rolls, flat spins, figure 8’s, snap rolls, and hammerheads. He did pirouettes on the plane’s tail. The other pilots could do these stunts, too, skillfully, one at a time. But Rahm used the plane inexhaustibly, like a brush marking thin air.”

The job was to thrill people with acts that appeared dangerous. And show after show, Dave Rahm pleased the crowd and got the job done. However, on his fatal ride, Rahm and his plane somehow reached a non-linear complexity phenomenon called the tipping point, a point of no return, and he sadly paid the final price.

Have you encountered workers who behave like stunt pilots? A stunt pilot will take risks and fly as close to the edge as possible. If you were responsible for their safety, or a consultant asked to make recommendations, what would you do? Would you issue a “cease and desist” safety bulletin? Add a new “safety first…” rule to remove any glimmers of workplace creativity? Order more compliance checking and inspections? Offer whistle-blowing protection? Punish the stunt pilots?

On the other hand, you could appreciate a worker’s willingness to take risks, to adjust performance when faced with unexpected variations in everyday work. You could treat a completed task as a learning experience and encourage the worker to share her story. By showing Richard Cook’s video, you could make stunt pilots very aware of the complacency zone and of how, over time, one can drift into failure. This could lead to an engaging conversation about at-risk vs. reckless behaviour.

How would you deal with workers who act as stunt pilots? Command & control? Educate & empower? Would you do either/or? Or do both/and?

Chucking SVA out of my Strategy toolbox

Shareholder Value Added (SVA): the singular goal of a company should be to maximize the return to shareholders. It was the big idea starting in the late 1970s, and I was exposed to it during my MBA years. As bright-eyed, bushy-tailed (or was it bushy-eyed, bright-tailed?) students we consumed the defining article “Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure.”

As a young manager rising through the ranks, I learned my job was not to selfishly optimize benefits for myself and my staff but to optimize them for the owners of the firm. I must admit I did have some trouble rationalizing the idea at the time. As the owner of a few common stocks, the concept resonated with me. Imagine that, employees working solely on my behalf. But the other thought in the back of my mind was that I was ready to dump the stock if something better came along or if I needed the cash. Employee loyalty, nice. My shareholder loyalty, uhh.

For the next 20 years, up to his retirement in 2001, Jack Welch played the SVA theory to grow GE into the largest and most valuable company in the world. Others followed the form and function. Who was I to argue? Put the discomfort aside.

What emerged was the widespread use of stock options as executive compensation tools. This led to execs concentrating on increasing the stock price and Wall Street pushing for short-term, myopic returns to satisfy shareholders. We also experienced questionable practices such as outsourcing to lower costs, followed by using the savings to repurchase stock to drive up the price. And then there was out-and-out bad behaviour, with C-suite manipulation of accounting numbers (e.g., Enron). The discomfort didn’t go away; it got worse.

In 2009 the discomfort loop was closed. Jack Welch stated “On the face of it, shareholder value is the dumbest idea in the world. Shareholder value is a result, not a strategy… your main constituencies are your employees, your customers and your products. Managers and investors should not set share price increases as their overarching goal. … Short-term profits should be allied with an increase in the long-term value of a company.” Begone SVA.

If SVA is now out, what’s taken its place in my toolbox?

S-curve evolution of Business

I favour John Kay’s Obliquity idea that firms should focus on Purpose, not on profit, or on growth and market share to increase profits. Profit is still important, but how you go about achieving it is different. You don’t establish a profit or market share goal and shoot for it as a target. Instead you set your sights on something better (like purpose); profit and market share are natural outcomes.

In a similar fashion, you don’t set Happiness as a personal goal. Happiness emerges as you strive to achieve a greater cause or contribution.

How does this fit with complexity thinking? Developing a safe-to-fail experiment in the Cynefin Complex domain is obliquity: an indirect approach to discovering patterns that can lead us to new ways and solutions to satisfy customers, employees, and yes, shareholders.

And now for something completely different


Fans of Monty Python will quickly recognize this blog’s title. Avid followers will remember the Fish Slapping Dance.


Well, the time has come when many of us feel the need to slap traditional safety practices on the side of the head with a fish. If you agree, check out safetydifferently.com. What it’s about:

Safety differently. The words seem contradictory next to each other. When facing difficulties with a potential disastrous loss, it makes more sense to wish for more of what is already known. Safety should come from well-established ways of doing things, removed far from where control may be lost.

But as workplaces and systems become increasingly interactive, more tightly coupled, subject to a fast pace of technological change and tremendous economic pressures to be innovative and cutting-edge, such “well-known” places are increasingly rare. If they exist at all.

There is a growing need to do safety differently.

People and organisations need to adapt. Traditional approaches to safety are not well suited for this. They are, to a large extent built on linear ideas – ideas about tighter control of work and processes, of removing creativity and autonomy, of telling people to comply.

The purpose of this website is to celebrate new and different approaches to safety. More specifically it is about exploring approaches that boost the capacity of people and organisations to handle their activities successfully, under varying conditions. Some of the analytical leverage will come from theoretical advancements in fields such as adaptive safety, resilience engineering, complexity theory, and human factors. But the ambition is to stay close to pragmatic advancements that actually deliver safety differently, in action.

Also, bashing safety initiatives that are based on constraining people’s capacities will be a frequent ingredient.

I had the opportunity to share my thoughts and ideas on what  “safety differently” is. I have expanded on these and added a few reference links.

Safety differently is not blindly following a stepping-stone path but taking the time to turn over each stone and challenge why the stone is there in the first place, what the intent was, and whether it is still valid and useful.

During engineering school I was taught to ignore friction as a force and use first-order approximate linear models. When measuring distance, ignore the curvature of the earth and treat the path as a straight line.

Much of this thinking has translated into safety. Examples include Heinrich’s safety triangle, the Domino theory, and Reason’s Swiss Cheese model. Safety differently acknowledges that the real world is non-linear and Pareto-distributed. It’s understanding how complexity, in terms of diversity, coherence, tipping points, strange attractors, emergence, and Black Swans, impacts safety practices. Safety differently no longer sees safety as a product or a service but as an emergent property of a complex adaptive system.

We can learn a lot from the amazing Big History Project. David Christian talks about the Goldilocks conditions that lead to increased complexity. Safety differently looks at the “just right” conditions that allow safety to emerge.

Because safety is an emergent property, we can’t create, control, or manage a safety culture. Safety differently researches modulators that might influence and shape the present culture. Modulators are different from Drivers. Drivers are cause & effect oriented (if we do this, we get that), whereas Modulators are catalysts we deliberately place into a system without being able to predict what will happen. It sounds a bit crazy, but it’s done all the time. Think of a restaurant experimenting with a new menu item. The chef doesn’t actually know whether it will be a success. Even if it fails, he will learn more about clientele food preferences. We call these safe-to-fail experiments, where the knowledge gained is worth more than the cost of the probe.

Cynefin Framework: Follow the Green line to Best Practices

Safety differently views best practices as a space where ideas go to die. It’s the final end point of a linear line. Why die? Because there is no feedback loop to allow change to occur. Why no feedback loop? Because by definition these are best practices and no improvement is necessary! Unfortunately, a Goldilocks condition we call complacency arises, and the phenomenon is a drift into failure.

Safety differently means using different strategies when navigating complexity. A good example is the Saudi culture Ron mentioned, where a roundabout method is taken to get to a destination. This is the complexity strategy John Kay wrote about in his book Obliquity [Click here to see his TEDx talk].

Robustness+Resilience

Safety differently means deploying abductive rather than traditional deductive and inductive reasoning.

Safety differently means adaptation + exaptation as well as robustness + resilience.

Heinrich Revisited

Safety differently is not dismissing the past. We honour it by keeping practices that continue to serve well. But we will abandon approaches, methods, and tools that have now been shown by complexity science, cognitive science, and other holistic research to be false: myths and fallacies.


Accident Prone

In the early 1900s, humans were considered accident prone; management took action to keep people from mishandling machines and upsetting efficiency. Today, we know differently: the purpose of machines is to augment human intelligence and release people from non-thinking labour roles.

Do you lead from the Past or lead to the Future?


Recently Loren Murray, Head of Safety for Pacific Brands in Australia, penned a thought-provoking blog on the default future, a concept from the book ‘The Three Laws of Performance’. I came across the book a few years ago and digested it from a leader-effectiveness standpoint. Loren does a nice job applying it to a safety perspective.

“During my career I noticed that safety professionals (and this included myself) have a familiar box of tricks. We complete risk assessments, enshrine what we learn into a procedure or SOP, train on it, set rules and consequences, ‘consult’ via toolboxes or committees and then observe or audit.

When something untoward happens we stop, reflect and somehow end up with our hands back in the same box of tricks writing more procedures, delivering more training (mostly on what people already know), complete more audits and ensure the rules are better enforced….harder, meaner, faster. The default future described in The Three Laws of Performance looked a lot like what I just described!

What is the default future? We like to think our future is untold, that whatever we envision for our future can happen….However for most of us and the organisations we work for, this isn’t the case. To illustrate. You get bitten by a dog when you are a child. You decide dogs are unsafe. You become an adult, have kids and they want a dog. Because of your experiences in the past it is unlikely you will get a dog for your kids. The future isn’t new or untold it’s more of the past. Or in a phrase, the past becomes our future. This is the ‘default future’.

Take a moment to consider this. It’s pretty powerful stuff with implications personally and organisationally. What you decide in the past will ultimately become your future.

How does this affect how we practice safety? Consider our trusty box of tricks. I spent years learning the irrefutable logic of things like the safety triangle and iceberg theory. How many times have I heard about DuPont’s safety journey? Or the powerful imagery of zero harm. The undeniable importance of ‘strong and visible’ leadership (whatever that means) which breeds catch phrases like safety is ‘priority number one’.

These views are the ‘agreement reality’ of my profession. These agreements have been in place for decades. I learnt them at school, they were confirmed by my mentors, and given credibility by our regulators and schooling system. Some of the most important companies in Australia espouse it, our academics teach it, students devote years to learning it, workers expect it…. Our collective safety PAST is really powerful.”

 
Loren’s blog caused me to reflect on the three laws and how they might be applied in a complexity-based safety approach. Let’s see how they can help us learn so that we don’t keep repeating the past.

First Law of Performance
“How people perform correlates to how situations occur to them.”

It’s pretty clear that the paradigms which dominate current safety thinking view people as error-prone or as problems working in idealistic technological systems, structures, and processes. Perplexed managers get into a “fix-it” mode by recalling what worked in the past and assuming that is the solution going forward. The remedy is being mindful of perception blindness and opening both eyes.
Second Law of Performance
“How a situation occurs arises in language.”

As evidence-based safety analysts, we need to hear the language and capture the conversations. One way is the Narrative approach, where data is collected in the form of stories. We may even go beyond words and collect pictures, voice recordings, water-cooler snippets, grapevine rumours, etc. When we see everything as a collective, we can discover themes and patterns emerging. These findings could be the keys that lead to an “invented” future.

Third Law of Performance
“Future-based language transforms how situations occur to people.”

Here are some possible yet practical shifts you can start with right now:

  • Let’s talk less about inspecting to catch people doing the wrong things and talk more about Safety-II; i.e., focusing on doing what’s right.
  • Let’s talk less about work-as-imagined deviations and more about work-as-done adjustments; i.e., less blaming and more appreciating and learning how people adjust performance when faced with varying, unexpected conditions.
  • Let’s talk less about past accident statistics and injury reporting systems and talk more about sensing networks that trigger anticipatory awareness of non-predictable negative events.
  • Let’s talk less about some idealistic Future state vision we hope to achieve linearly in a few years and talk more about staying in the Present, doing more proactive listening, and responding to the patterns that emerge in the Now.
  • And one more… let’s talk less about being reductionists (breaking a socio-technical system down into its parts) and talk more about being holistic: understanding how the parts (humans, machines, ideas, etc.) relate, interact, and adapt together in a complex work environment.

The “invented” future may well be one that is unknowable and unimaginable today but that emerges through future-based conversations.

What are you doing as a leader today? Leading workers to the default future, or leading them to an invented future?

Click here to read Loren’s entire blog posting.

When thinking of Safety, think of coffee aroma

Safety has always been a hard sell to management and to front-line workers because, as Karl Weick put forward, Safety is a dynamic non-event. Non-events are taken for granted. When people see nothing, they presume that nothing is happening and that nothing will continue to happen if they continue to act as before.

I’m now looking at Safety from a complexity-science perspective as something that emerges when system agents interact. An example is aroma emerging when hot water interacts with dry coffee grounds. Emergence is a real-world phenomenon that Systems Thinking does not address.

Safety-I and Safety-II do not create safety but provide the conditions for Safety to dynamically emerge. But as a non-event, it’s invisible and people see nothing. Just as safety can emerge, so can danger as an invisible non-event. What we see is failure (e.g., accident, injury, fatality) when the tipping point is reached. We can also reach a tipping point when we do too much of a good thing: safety rules are valuable, but if a worker is overwhelmed by too many, danger in the form of confusion and distraction can emerge.

I see great promise in advancing the Safety-II paradigm to understand the right things people should be doing under varying conditions to enable safety to emerge.

For further insights into Safety-II, I suggest reading Steven Shorrock’s posting What Safety-II isn’t on Safetydifferently.com. Below are my additional comments under each point made by Steven with a tie to complexity science. Thanks, Steven.

Safety-II isn’t about looking only at success or the positive
Looking at the whole distribution and all possible outcomes means recognizing that there is both a thin-tailed, linear Gaussian world and a fat-tailed, non-linear Pareto world. The latter is where Black Swans and natural disasters unexpectedly emerge; the sketch below contrasts the two tails.
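As a rough numerical illustration, here is a sketch comparing how quickly the two tails thin out. The Pareto shape (alpha = 1.5) and scale are assumed values for the example, not empirical safety data.

```python
import math

def gaussian_tail(k):
    """P(X > k) for a standard normal variable."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(k, alpha=1.5, x_min=1.0):
    """P(X > k) for a Pareto variable with shape alpha and scale x_min."""
    return (x_min / k) ** alpha if k >= x_min else 1.0

# Survival probabilities at increasingly extreme thresholds.
for k in (2, 5, 10):
    print(f"beyond {k}: gaussian {gaussian_tail(k):.2e}, pareto {pareto_tail(k):.2e}")
```

A few steps out, the Gaussian tail says “practically never” while the Pareto tail stays very much alive; that gap is where the Black Swans hide.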

Safety-II isn’t a fad
Not all Safety-I foundations are based on science. As Fred Manuele has shown, Heinrich’s Law is a myth. John Burnham’s book Accident Prone traces the historical rise and fall of the accident-proneness concept. We could call them fads, but that’s difficult to do since they have been blindly accepted for so long.

This year marks the 30th anniversary of the Santa Fe Institute, where complexity science was born. At the May 2012 Resilience Lab I attended, Erik Hollnagel and Richard Cook introduced the RMLA elements of Resilience engineering: Respond, Monitor, Learn, Anticipate. They fit with Cognitive-Edge’s complexity view of resilience: fast recovery (R), rapid exploitation (M, L), early detection (A). This alignment has led to one way to operationalize Safety-II.

Safety-II isn’t ‘just theory’
As a pragmatist, I tend not to use the word “theory” in my conversations; praxis matters more to me than spewing theoretical ideas. When dealing with complexity, the traditional scientific method doesn’t work. The reasoning is neither deductive nor inductive but abductive: the logic of hunches, drawing on past experiences to make sense of the real world.

Safety-II isn’t the end of Safety-I
The focus of Safety-I is on robust rules, processes, systems, equipment, materials, etc. to prevent a failure from occurring. Nothing wrong with that. Safety-II asks what we can do to recover when failure does occur, plus what we can do to anticipate when failure might happen.

Resilience can be more than just bouncing back. Why return to the same place only to be hit again? Rapid exploitation means finding a better place to bounce to. We call it “swarming”, or serendipity when an opportunity unexpectedly arises.

Safety-II isn’t about ‘best practice’
“Best” practice does exist, but only in the Obvious domain of the Cynefin Framework. It’s the domain of intuition, the fast System 1 of Daniel Kahneman’s book Thinking, Fast and Slow. What’s the caveat with best practices? There’s no feedback loop, so people just carry on as they did before. Some best practices become good habits; on the other hand, danger can emerge from the baddies, and one will drift into failure.

Safety-II and Resilience are about catching yourself before drifting into failure: being alert to weak signals (e.g., surprising behaviours, strange noises, unsettling rumours) and having physical systems and people networks in place to trigger anticipatory awareness. A toy sketch of such a trigger follows below.
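Here is a minimal sketch of one such trigger: a rolling z-score detector that flags readings deviating unusually from recent history. The class name, window size, threshold, and vibration values are all invented for the example; a real sensing network would lean as much on human reports as on instruments.

```python
from collections import deque
from statistics import mean, stdev

class WeakSignalDetector:
    """Flags readings that deviate unusually from recent history."""

    def __init__(self, window=20, z_threshold=2.5):
        self.history = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, reading):
        """Return True when a reading looks like a weak signal."""
        alert = False
        if len(self.history) >= 5:  # need some history before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(reading - mu) / sigma > self.z_threshold:
                alert = True  # worth a look before it becomes an event
        self.history.append(reading)
        return alert

detector = WeakSignalDetector()
vibration = [1.0, 1.1, 0.9, 1.0, 1.2, 1.1, 1.0, 0.9, 1.1, 3.4]  # last value drifts
for t, v in enumerate(vibration):
    if detector.observe(v):
        print(f"t={t}: anomalous reading {v}, trigger anticipatory review")
```

The design choice that matters is not the statistics; it is that the alert asks for a human review while the signal is still weak, instead of waiting for a threshold breach after the fact.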

Safety-II isn’t what ‘we already do’
“Oh, yes, we already do that!” is typically expressed by an expert, perhaps a company line manager or a safety professional. There’s minimal value in challenging the response. You could execute an “expert entrainment breaking” strategy. The preferred alternative? Follow what John Kay describes in his book Obliquity: Why Our Goals Are Best Achieved Indirectly.

Don’t even start by saying “Safety-II”. Begin by gathering stories and making sense of how things get done and why things are done a particular way. Note the stories about doing things the right way. Chances are high that most stories will be about Safety-I. There’s your data: your evidence that either validates or disproves “we already do”. Tough for an expert to refute.

Safety-II isn’t ‘them and us’
It’s not them/us, nor either/or, but both/and: Safety-I + Safety-II, Robustness + Resilience together. We want to analyze all of the data available, from when things go wrong and when things go right.

The evolution of safety can be characterized as a series of overlapping life-cycle paradigms. The first paradigm was Scientific Management, followed by the rise of Systems Thinking in the 1980s. Today Cognition & Complexity are at the forefront. By honouring the Past, we learn in the Present: we keep the best things from the previous paradigms and let go of the proven myths and fallacies.

Safety-II isn’t just about safety
Drinking a cup of coffee should be a total experience, not just tasting the liquid. It includes smelling the aroma, seeing the barista’s carefully crafted cream design, hearing the first slurp (okay, I confess). Safety should also be a total experience.

Safety can emerge from efficient as well as effective conditions. Experienced workers know that a well-oiled, smoothly running machine is low-risk and safe. However, they constantly monitor it by watching gauges, listening for strange noises, and so on. These are efficient conditions: known minimums, maximums, and optimums that enable safety to emerge. We do things right; a minimal bounds-check sketch of this follows below.
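For the “known minimums and maximums” side, a bounds check is about as simple as monitoring gets. The gauge names and limits below are made up for illustration; real limits come from the equipment and the people who run it.

```python
# Hypothetical gauges with assumed safe operating bands.
LIMITS = {
    "oil_pressure": (25.0, 65.0),   # psi
    "bearing_temp": (10.0, 80.0),   # degrees Celsius
}

def check_gauges(readings):
    """Yield a warning for any gauge outside its known safe band."""
    for gauge, value in readings.items():
        low, high = LIMITS[gauge]
        if not (low <= value <= high):
            yield f"{gauge} at {value} is outside [{low}, {high}]"

for warning in check_gauges({"oil_pressure": 70.2, "bearing_temp": 45.0}):
    print(warning)
```

Effectiveness, by contrast, cannot be reduced to a lookup table of known limits.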

When conditions involve unknowns, unknowables, and unimaginables, the shift is to effectiveness. We do the right things. But what are these right things?

It’s about being in the emerging Present and not worrying about some distant idealistic Future. It’s about engaging the entire workforce (i.e., the wisdom of crowds) so that no hard selling or buy-in is necessary. It’s about introducing catalysts to reveal new work patterns. It’s about conducting small “safe-to-fail” experiments to shift the safety culture. It’s about the quick implementation of safety solutions that people want now.

Signing off and heading to Starbucks.