
Safe-to-fail experimentation

7 Implications of Complexity for Safety

One of my favourite articles is The Complexity of Failure, written by Sidney Dekker, Paul Cilliers, and Jan-Hendrik Hofmeyr. In this posting I’d like to shed more light on the contributions of Paul Cilliers.

Professor Cilliers was a pioneering thinker on complexity, working across both the humanities and the sciences. In 1998 he published Complexity and Postmodernism: Understanding Complex Systems, which offered implications of complexity theory for our understanding of biological and social systems. Sadly, he passed away suddenly in 2011, at the much too early age of 55, from a massive brain hemorrhage.

My spark for writing comes from a blog recently penned by a complexity colleague, Sonja Blignaut. I am following her spade work by exploring the implications of complexity for safety. Cilliers’ original text is in italics.

  1. Since the nature of a complex organization is determined by the interaction between its members, relationships are fundamental. This does not mean that everybody must be nice to each other; on the contrary. For example, for self-organization to take place, some form of competition is a requirement (Cilliers, 1998: 94-5). The point is merely that things happen during interaction, not in isolation.
  • Because humans are natural storytellers, stories are a widely used form of interaction among fellow workers, supervisors, management, and executives. We need to pay attention to the stories told about daily experiences, since they provide a strong signal of the present safety culture.
  • We should devote less time to trying to change people and their behaviour and more time to building relationships. Despite what psychometric profiling offers, humans are too emotional and unpredictable to figure out accurately. In my case, I am not a trained psychologist, so my dabbling in changing how people tick might be dangerous, on the edge of practising pseudoscience. I prefer to stay with the natural sciences (viz., physics, biology) and the understanding of phenomena in Nature that have evolved over thousands of years.
  • If two workers are in conflict, don’t demand that they both smarten up. Instead, change the nature of the relationship so that their interactions are different or even extinguished. Simple examples are changing the task or moving one to another crew.
  • Interactions go beyond people. Non-human agents include machines, ideas (rules, policies, regulations), and events (a meeting, an incident). A worker following a safety rule can create a condition that enables safety to emerge. Too many safety rules can overwhelm and frustrate a worker, enabling danger to emerge.

2. Complex organizations are open systems. This means that a great deal of energy and information flows through them, and that a stable state is not desirable.

  • A company’s safety management system (SMS) is a closed system. In the idealistic SMS world, stability, certainty, and predictability are the norms. If a deviation occurs, it needs to be controlled and managed. Within the fixed boundaries, we apply reductionist thinking and place information into a number of safety categories, typically ranging from 4 to 10. An organizational metaphor is sorting solid LEGO bricks under different labels.
    In an open system, it’s different. Think of boundary-less fog and irreducible mayonnaise. If you outsource to a contractor or partner with an external supplier, how open is your SMS? Will you insist on their compliance or draw borders between firms? Do their SMS safety categories blend with yours?
  • All organisations are complex adaptive systems. Adaptation means not lagging behind and plunging into chaotic fire-fighting. It means looking ahead, not only trying to avoid things going wrong but also trying to ensure that they go right. In the field, workers confronted by unexpected, varying conditions will adjust and adapt their performance to enable success (and safety) to emerge.
  • When field adjustments occasionally fail, the result is new learning to be shared as a story. This is also why a stable state is not desirable. In a stable state, very little learning is necessary. You just keep repeating what you already know.

3. Being open more importantly also means that the boundaries of the organization are not clearly defined. Statements of “mission” and “vision” are often attempts to define the borders, and may work to the detriment of the organization if taken too literally. A vital organization interacts with the environment and other organizations. This may (or may not) lead to big changes in the way the organization understands itself. In short, no organization can be understood independently of its context.

  • Mission and Vision statements are helpful in setting direction. A vector, North Arrow, if you like. They become detrimental if communicated as some idealistic future end state the organization must achieve.
  • Being open is different than “thinking out of the box” because there really is no box to start with. It’s a contextual connection of relationships with other organizations. It’s also foggy, because some organizations are hidden. You can impact organizations that you don’t even know about and, conversely, their unbeknownst actions can constrain you.
    The smart play is to be mindful by staying focused on the Present and monitoring desirable and undesirable outcomes as they emerge.

4. Along with the context, the history of an organization co-determines its nature. Two similar-looking organizations with different histories are not the same. Such histories do not consist of the recounting of a number of specific, significant events. The history of an organization is contained in all the individual little interactions that take place all the time, distributed throughout the system.

  • Don’t think about creating a new safety mission or vision by starting with a blank page, a clean sheet, a greenfield.  The organization has history that cannot be erased. The Past should be honoured, not forgotten.
  • Conduct an ongoing challenge of best practices and life-saving rules. Remember the historical reasons why these were first installed. Then question whether those reasons remain valid.
  • Be aware of the part History plays when rolling out a safety initiative across an organization.
    • If it’s something that everyone genuinely agrees to and wants, then just clone & replicate. Aggregation is the corollary of reductionism and it is the common approach to both scaling and integration. Liken it to putting things into org boxes and then fitting them together like a jigsaw. The whole is equal to the sum of its parts.
    • But what if the initiative is controversial? Concerns are voiced, pushback is felt, resistance is real. Then we’re facing complexity, where the properties of the safety system as a whole are not the sum of the parts but are unique to the system as a whole.
      If we want to scale capabilities we can’t just add them together. We need to pay attention to history and understand reactions like “It won’t work here”, “We tried that before”, “Oh no! Not again!”
      The change method is not to clone & replicate.  Start by honouring local context. Then decompose into stories to make sense of the culture. Discover what attracts people to do what they do. Recombine to create a mutually coherent solution.

5. Unpredictable and novel characteristics may emerge from an organization. These may or may not be desirable, but they are not by definition an indication of malfunctioning. For example, a totally unexpected loss of interest in a well-established product may emerge. Management may not understand what caused it, but it should not be surprising that such things are possible. Novel features can, on the other hand, be extremely beneficial. They should not be suppressed because they were not anticipated.

  • In the world of safety, failures are unpredictable and undesirable. They emerge when a hidden tipping point is reached.
    As part of an Emergency Preparedness plan, recovery crews with well-defined roles are designated. Their job is to fix the system as quickly as possible and safely restore it to its previous stable state.
  • Serendipity is an unintended but highly desirable consequence. This implies an organization should have an Opportunity crew ready to activate. Their job is to explore the safety opportunity, discover new patterns which may lead to a new solution, and exploit their benefits.
    At a tactical level, the new solution may be a better way of achieving the Mission and Vision. In the same direction but a different path or route.
    At a strategic level, the huge implication is that new opportunity may lead to a better future state than the existing carefully crafted, well-intentioned one. Decision-makers are faced with a dilemma: do we stay the course or will we adapt and change our vector?
  • Avoid introducing novel safety initiatives as big events kicked off with a major announcement. These tend to breed cynicism, especially if the company’s history includes blemished past efforts. Novelty means you honestly don’t know what the outcomes will be, since it will be a new experience both to those you know (identified stakeholders) and to those you don’t know in the foggy network.
    Launch as a small experiment (see the sketch following this list).
    If desirable consequences are observed, accelerate the impact by widening the scope.
    If unintended negative consequences emerge, quickly dampen the impact or even shut it down.
    As noted in (2), constructively de-stabilize the system in order to learn.
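To make this probe–sense–respond loop concrete, here is a minimal sketch in Python. It is my own illustration, not anything from Cilliers or an established safety framework; every name in it (safe_to_fail, run_probe, observe, widen_scope, dampen, shut_down) is a hypothetical placeholder.

```python
# A toy safe-to-fail experiment loop: launch small, watch what emerges,
# amplify the desirable, dampen or stop the undesirable.
# All function parameters are hypothetical callbacks supplied by the caller.

def safe_to_fail(run_probe, observe, widen_scope, dampen, shut_down,
                 review_periods=6):
    run_probe()  # launch as a small experiment, not a big-bang rollout
    for period in range(review_periods):
        desirable, undesirable = observe(period)  # sense emerging outcomes
        if undesirable > 2 * desirable:
            shut_down()           # negative consequences dominate: stop
            return "shut down"
        if undesirable > desirable:
            dampen()              # quickly reduce the impact
        elif desirable > undesirable:
            widen_scope()         # accelerate what is working
    return "reviewed"

# Example run with stub callbacks:
if __name__ == "__main__":
    outcomes = [(5, 1), (6, 2), (3, 4), (2, 5), (1, 9), (1, 9)]
    result = safe_to_fail(
        run_probe=lambda: print("probe launched"),
        observe=lambda p: outcomes[p],
        widen_scope=lambda: print("widening scope"),
        dampen=lambda: print("dampening"),
        shut_down=lambda: print("shutting down"),
    )
    print(result)
```

The point of the sketch is the shape of the loop: the decision to amplify or dampen is made repeatedly, as evidence accumulates, never once up front.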

6. Because of the nonlinearity of the interactions, small causes can have large effects. The reverse is, of course, also true. The point is that the magnitude of the outcome is not only determined by the size of the cause, but also by the context and by the history of the system. This is another way of saying that we should be prepared for the unexpected. It also implies that we have to be very careful. Something we may think to be insignificant (a casual remark, a joke, a tone of voice) may change everything. Conversely, the grand five-year plan, the result of huge effort, may retrospectively turn out to be meaningless. This is not an argument against proper planning; we have to plan. The point is just that we cannot predict the outcome of a certain cause with absolute clarity.

  • The Butterfly effect is a phenomenon of a complex adaptive system. I’m sure many blog writers like myself are hoping that our safetydifferently cause will go viral, “cross the chasm”, and be adopted by the majority. Sonja in her blog refers to a small rudder that determines the direction of even the largest ship. Perhaps that’s what we are: trimtabs! (A numerical illustration of this sensitivity appears after this list.)
  • On the negative side, think of a time when an elected official or CEO made a casual remark about a safety disaster, only to have it go viral and backfire. During the 2010 Deepwater Horizon disaster, then-CEO Tony Hayward called the amount of oil and dispersant “relatively tiny” in comparison with the “very big ocean”. Hayward’s involvement has left him a highly controversial public figure.
  • Question: Could a long-term safety plan to progress through the linear stages of a Safety Culture Maturity model be a candidate as a meaningless five-year plan?
    If a company conducts an employee early retirement or buy-out program, does it regress and fall down a stage or two?
    If a company deploys external contractors with high turnover, does it ever get off the bottom rung?
    Instead of a linear progression model, stay in the Present and listen to the stories internal and external workers are telling. With the safety Vision in mind, ask what can we do to hear more stories like these, fewer stories like those.
    As the stories change, so will the safety culture.  Proper planning is launching small experiments to shape the culture.
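Cilliers’ point about nonlinearity is easy to demonstrate numerically. Below is a minimal sketch using the logistic map, a standard textbook example of sensitive dependence on initial conditions; it is my own illustration, not drawn from Cilliers’ text.

```python
# Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4.0).
# Two starting points differing by one part in a billion diverge
# completely within a few dozen iterations.

def trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000000)
b = trajectory(0.200000001)  # a perturbation the size of a casual remark

for step in (0, 10, 20, 30, 40):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
# By roughly step 30 the two runs bear no resemblance to each other.
```

The size of the cause (a one-in-a-billion perturbation) tells you nothing about the size of the effect.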

7. Complex organizations cannot thrive when there is too much central control. This certainly does not imply that there should be no control, but rather that control should be distributed throughout the system. One should not go overboard with the notions of self-organization and distributed control. This can be an excuse not to accept the responsibility for decisions when firm decisions are demanded by the context. A good example here is the fact that managers are often keen to “distribute” the responsibility when there are unpopular decisions to be made—like retrenchments—but keen to centralize decisions when they are popular.

  • I’ve noticed safety professionals are frequent candidates for organization pendulum swings. One day you’re in Corporate Safety. Then an accident occurs and in the ensuing investigation a recommendation is made to move you into the field to be closer to the action. Later a new Director of Safety is appointed and she chooses to centralize Safety.
    Pendulum swings are what Robert Fritz calls Corporate Tides, the natural ebb and flow of org structure evolution.
  • Central versus distributed control changes are driven more by governance and audit than by workflow purposes. No matter what control mechanism is in vogue, it should enable stigmergic behaviour: the natural forming of network clusters to share knowledge, processes, and practices.
  • In a complex adaptive system, each worker is an autonomous decision-maker, a solution rather than a problem. Decisions are based on information at hand (aka tacit knowledge) and, if that is not available, on knowing who, where, and how to access it. Every worker has a knowledge cluster in the network. A safety professional positioned in the field can mean quicker access but, more importantly, stronger in-person interactions. This doesn’t discount a person in Head Office who has a trusting relationship from being a “go to” guy. Today’s video conferencing tools can place the Corporate Safety person virtually on site in a matter of minutes.
Thanks, Sonja. Thanks, Paul.
Note: If you have any comments, I would appreciate it if you would post them at safetydifferently.com.


Safety in the Age of Cognitive Complexity

During engineering school in the late 1960s, I was taught to ignore friction as a force and to use first-order approximate linear models. When measuring distance, ignore the curvature of the earth and treat it as a straight line. Break things down into their parts, analyze each component, fix it, and then put it all back together. In the 1990s another paradigm, coined Systems Thinking, came about: we jumped from Taylorism to embrace the Fifth Discipline, socio-technical systems, and Business Process Reengineering. When human issues arose, we bolted on Change Management to support huge advances in information technology. All industries have benefited from and been disrupted by business and technological breakthroughs. Safety as an industry is no exception.

In January 2000, Stephen Hawking stated: “I think the next century will be the century of complexity.” In the spirit of safety differently, let’s explore safety from a non-linear, complexity science perspective. Safety is no longer viewed as a static product or a service but as an emergent property of a complex adaptive system (CAS). Emergence is a real-world phenomenon that Systems Thinking does not address or, perhaps, chooses to ignore to keep matters simple. Glenda Eoyang defines a CAS as “a collection of individual agents who have the freedom to act in unpredictable ways, and whose actions are interconnected such that one agent’s actions changes the context for other agents.”

Let’s be clear. I’m not advocating abandoning safety rules, regulations, hierarchies of controls, checklists, etc. and letting workers go wild. We just need to treat them as boundaries and constraints that either enable or prevent safety, as a CAS property, from emerging. By repurposing them in our minds, we can better see why rules prevent danger from emerging and why too many constraining rules might create the conditions, such as confusion, distraction, and anger, for danger to emerge. As overloading increases, a tipping point is reached: the apex of a non-linear curve. A phase transition occurs and a surprise emerges, typically the failure of a brittle agent.
As the eras of business have evolved from scientific management to systems thinking, so has safety in parallel. The graphic below is a modification of an Erik Hollnagel slide presented at the 2012 Resilience Learning Lab in Vancouver and extends beyond to an Age of Cognitive Complexity.


In this age of cognitive complexity, the whole is greater than the sum of its parts (aka agents).

  1. “Different is more” which means the greater the diversity of agents, the greater the distributed cognition. Think wisdom of crowds, crowdfunding, crowdsourcing.
  2. “More is different” which means that when you put the pieces of a complex system together, you get behavior that is only understandable and explainable by understanding how the pieces work in concert (see Ron Gantt’s enlightening posting). In a CAS, doing the same thing over and over again can lead to a different result.
  3. “Different is order within unorder” which means that in a complex environment full of confusion and unpredictability, order can be found in the form of hidden patterns. Think of a meeting agenda that shapes the orderly flow of discussion and the contributions of individuals in a meeting. In nature, think of fractals, which can be found everywhere.

When working in a Newtonian-Cartesian linear system, you can craft an idealistic Future state and develop a safety plan to get there. However, in a CAS, predicting the future is essentially a waste of time. The key is to make sense of the current conditions and focus on the evolutionary potential of the Present.

Is the shift to complexity-based safety thinking sufficient to warrant a new label? Dare we call this different paradigm Safety-III? It can be a container for the application of cognition and complexity concepts and language to safety: Adaptive safety, Abductive safety reasoning, Exaptive safety innovation, Viral safety communication to build trust, Autopoietic SMS, Dialogic safety investigation, Heuristics in safety assessment, Self-organizing role-based crew structures, Strange attractors as safety values, Cognitive activation using sensemaking safety rituals, Feedback loops in safety best practices, Brittleness in life saving rules, Swarm intelligent emergency response, Human sensor networks, Narrative safe-to-fail experiments, Attitude real-time monitoring as a safety lead indicator, Cynefin safety dynamics. Over time I’d like to open up an exploratory dialogue on some of these on the safetydifferently.com website.

From a review of past and current publications, I sense compelling support for a complexity-based safety approach. Here are a few on my list (my personal thoughts are the bulleted points not in quotes).

System Safety Engineering: Back to the Future (Nancy Leveson 2002)

  • ‘Safety is clearly an emergent property of systems.’
  • ‘It is not possible to take a single system component, like a software module, in isolation and assess its safety. A component that is perfectly safe in one system may not be when used in another.’

The Complexity of Failure (Sidney Dekker, Paul Cilliers, Jan-Hendrik Hofmeyr 2011)

  • ‘When accidents are seen as complex phenomena, there is no longer an obvious relationship between the behavior of parts in the system (or their malfunctioning, e.g. ‘‘human errors’’) and system-level outcomes.’
  • ‘Investigations that embrace complexity, then, might stop looking for the ‘‘causes’’ of failure or success. Instead, they gather multiple narratives from different perspectives inside of the complex system, which give partially overlapping and partially contradictory accounts of how emergent outcomes come about.’
  • ‘The complexity perspective dispenses with the notion that there are easy answers to a complex systems event—supposedly within reach of the one with the best method or most objective investigative viewpoint. It allows us to invite more voices into the conversation, and to celebrate their diversity and contributions.’

Drift into Failure (Sidney Dekker 2011)

  • By taking complexity theory ideas like the butterfly effect, unruly technology, tipping points, and diversity, we can understand that failure emerges opportunistically, non-randomly, from the very webs of relationships that breed success and that are supposed to protect organizations from disaster.
  • ‘Safety is an emergent property, and its erosion is not about the breakage or lack of quality of single components.’
  • ‘Drifting into failure is not so much about breakdowns or malfunctioning of components, as it is about an organization not adapting effectively to cope with the complexity of its own structure and environment.’

Safety-I and Safety-II (Erik Hollnagel 2014)

  • ‘Karl Weick in a 1987 California Management Review article introduced the idea of reliability as a dynamic non-event. This has often been paraphrased to define safety as a ‘dynamic non-event’…even though it may be a slight misinterpretation.’
  • ‘Safety-I defines safety as a condition where the number of adverse outcomes (accidents/incidents/near misses) is as low as possible.’
  • When there is an absence of an adverse outcome, it becomes a non-event which people take for granted. When people see nothing, they presume that nothing is happening and that nothing will continue to happen if they continue to act as before.
  • ‘Safety-II is defined as a condition where as much as possible goes right.’
  • ‘In Safety-II the absence of failures is a result of active engagement. This is not safety as a non-event but safety as something that happens. Because it is something that happens, it can be observed, measured, and managed.’
  • Safety-III is observing, measuring, maybe managing, but definitely influencing changes in the conditions that enable safety to happen in a CAS. In addition, it’s active engagement in observing, measuring, maybe managing, but definitely influencing changes in the complex conditions that prevent danger from emerging.

Todd Conklin PAPod 28 with Ivan Pupulidy (July 2015)

  • 18:29 ‘If we could start an emerging dialogue amongst our workers around the topic of conditions, we accelerate the learning in a really unique way.’
  • ‘We started off with Safety-I, Safety-II. That was our original model. What we really recognized rather quickly was that there was a Safety-III.’
  • ‘Safety-III was developing this concept of expertise around recognition of changes in the environment or changes in the conditions.’

Peter Sandman (Dave Johnson article, ISHN Magazine Oct 2015)

  • ‘The fast pace of change in business today arguably requires a taste for chaos and an ability to cope well with high uncertainty.’
  • ‘What is called ‘innovation’ may actually be just coping with an ever-faster pace of change: anticipating the changes on the horizon and adapting promptly to the changes that are already occurring. This sort of flexible adaptability is genuinely antithetical to orderly, rule-governed, stable behavioral protocols.’
  • ‘Safety has traditionally been prone to orderly, rule-governed, stable behavioral protocols. For the sorts of organizational cultures that succeed in today’s environment, we may need to create a more flexible, adaptable, innovative, chaos-tolerating approach to safety.’
  • ‘With the advent of ‘big data,’ business today is ever-more analytic. I don’t know whether it’s true that safety people tend to be intuitive/empathic – but if that’s the case, then safety people may be increasingly out of step. And safety may need to evolve in a more analytic direction. That needn’t mean caring less about others, of course – just using a different skill set to understand why others are the way they are.’
  • I suggest that different skill set will be based on a complexity-based safety approach.

Resilience Engineering Concepts and Precepts (Hollnagel, Woods & Leveson, 2006)

  • Four essential abilities that a system or an organisation must have: Respond, Anticipate, Monitor, Learn. Below is how I see the fit with complexity principles.
  • We respond by focusing on the Present. Typically it’s an action to quickly recover from a failure. However, it can also be seizing opportunity that serendipitously emerged. Carpe Diem. Because you can’t predict the future in a CAS, having a ready-made set of emergency responses won’t help if unknowable and unimaginable Black Swans occur. Heuristics and complex swarming strategies are required to cope with the actual.
  • We anticipate by raising our awareness of potential tipping points. We don’t predict the future but practice spotting weak signals of emerging trends as early as we can. Our acute alertness may sense we’re in the zone of complacency and need to pull back the operating point.
  • We learn by making sense of the present and adapting to co-evolve the system. We call out Safety-I myths and fallacies proved to be in error by facts based on complexity science. We realize the importance of praxis (co-evolving theory with practice).
  • We monitor the emergence of safety or danger as adjustments are made to varying conditions. We monitor the margins of maneuver and whether “cast in stone” routines and habits are building resilience or increasing brittleness.

What publications am I missing that support or argue against a complexity-based safety approach? Will calling it Safety-III help to highlight the change or risk it being quickly dismissed as a safety fad?

If you’d like to share your thoughts and comments, I suggest entering them at the safetydifferently.com website.

When thinking of Safety, think of coffee aroma

Safety has always been a hard sell to management and to front-line workers because, as Karl Weick put forward, Safety is a dynamic non-event. Non-events are taken for granted. When people see nothing, they presume that nothing is happening and that nothing will continue to happen if they continue to act as before.

I’m now looking at Safety from a complexity science perspective, as something that emerges when system agents interact. An example is aroma emerging when hot water interacts with dry coffee grounds. Emergence is a real-world phenomenon that Systems Thinking does not address.

Safety-I and Safety-II do not create safety but provide the conditions for Safety to dynamically emerge. But as a non-event, it’s invisible and people see nothing. Just as safety can emerge, so can danger as an invisible non-event. What we see is failure (e.g., accident, injury, fatality) when the tipping point is reached. We can also reach a tipping point when we do too much of a good thing. Safety rules are valuable, but if a worker is overwhelmed by too many, danger in the form of confusion and distraction can emerge.

I see great promise in advancing the Safety-II paradigm to understand what are the right things people should be doing under varying conditions to enable safety to emerge.

For further insights into Safety-II, I suggest reading Steven Shorrock’s posting What Safety-II isn’t on Safetydifferently.com. Below are my additional comments under each point made by Steven with a tie to complexity science. Thanks, Steven.

Safety-II isn’t about looking only at success or the positive
Looking at the whole distribution and all possible outcomes means recognizing there is a linear Gaussian and a non-linear Pareto world. The latter is where Black Swans and natural disasters unexpectedly emerge.
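To make the Gaussian/Pareto distinction concrete, here is a quick numerical sketch of my own using Python’s standard library; the distribution parameters are arbitrary and chosen only for illustration.

```python
# In a Gaussian world the most extreme of 100,000 draws is still only a
# few standard deviations out; in a Pareto (power-law) world extremes are
# orders of magnitude larger and a small fraction of events carries a
# large share of the total -- the home territory of Black Swans.

import random

random.seed(42)
n = 100_000
gaussian = [random.gauss(0.0, 1.0) for _ in range(n)]
pareto = sorted(random.paretovariate(1.5) for _ in range(n))

print("Gaussian max:", round(max(gaussian), 2))   # roughly 4 to 5
print("Pareto max:  ", round(pareto[-1], 2))      # typically in the thousands
top_1pct = sum(pareto[-n // 100:])
print("Share of total held by top 1% of Pareto draws:",
      round(top_1pct / sum(pareto), 2))           # roughly 0.2 (20%)
```

In the Gaussian sample, the largest of 100,000 draws is still only a few standard deviations from the mean; in the Pareto sample, the extremes are orders of magnitude larger and a handful of events carries a disproportionate share of the total.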

Safety-II isn’t a fad
Not all Safety-I foundations are based on science. As Fred Manuele has proven, Heinrich’s Law is a myth. John Burnham’s book Accident Prone offers a historical rise and fall of the accident-proneness concept. We could call them fads, but it’s difficult to, since they have been blindly accepted for so long.

This year marks the 30th anniversary of the Santa Fe Institute, where complexity science was born. At the May 2012 Resilience Lab I attended, Erik Hollnagel and Richard Cook introduced the RMLA elements of Resilience engineering: Respond, Monitor, Learn, Anticipate. They fit with Cognitive-Edge’s complexity view of Resilience: Fast recovery (R), Rapid exploitation (M, L), Early detection (A). This alignment has led to one way to operationalize Safety-II.

Safety-II isn’t ‘just theory’
As a pragmatist, I tend not to use the word “theory” in my conversations. Praxis is more important to me than spewing theoretical ideas. When dealing with complexity, the traditional Scientific Method doesn’t work. The reasoning is neither deductive nor inductive but abductive. This is the logic of hunches: drawing on past experiences and making sense of the real world.

Safety-II isn’t the end of Safety-I
The focus of Safety-I is on robust rules, processes, systems, equipment, materials, etc. to prevent a failure from occurring. Nothing wrong with that. Safety-II asks what we can do to recover when failure does occur, plus what we can do to anticipate when failure might happen.

Resilience can be more than just bouncing back. Why return to the same place only to be hit again? Early exploitation means finding a better place to bounce to. We call it “swarming” or Serendipity if an opportunity unexpectedly arises.

Safety-II isn’t about ‘best practice’
“Best” practice does exist, but only in the Obvious domain of the Cynefin Framework. It’s the domain of intuition, the fast thinking in Daniel Kahneman’s book Thinking, Fast and Slow. What’s the caveat with best practices? There’s no feedback loop, so people just carry on as they did before. Some best practices become good habits. On the other hand, danger can emerge from the bad ones, and one will drift into failure.

Safety-II and Resilience are about catching yourself before drifting into failure: being alert to detect weak signals (e.g., surprising behaviours, strange noises, unsettling rumours) and having physical systems and people networks in place to trigger anticipatory awareness.

Safety-II isn’t what ‘we already do’
“Oh, yes, we already do that!” is typically expressed by an expert. It might be a company’s line manager or a safety professional. There’s minimal value in challenging the response. You could execute an “expert entrainment breaking” strategy. The preferred alternative? Follow what John Kay describes in his book Obliquity: Why Our Goals are Best Achieved Indirectly.

Don’t even start by saying “Safety-II”. Begin by gathering stories and making sense of how things get done and why things are done a particular way. Note the stories about doing things the right way. Chances are pretty high most stories will be around Safety-I. There’s your data, your evidence that either validates or disproves “we already do”. Tough for an expert to refute.

Safety-II isn’t ‘them and us’
It’s not them/us, nor either/or, but both/and. Safety-I + Safety-II. It’s Robustness + Resilience together. We want to analyze all of the data available, both when things go wrong and when things go right.

The evolution of safety can be characterized by a series of overlapping life cycle paradigms. The first paradigm was Scientific Management followed by the rise of Systems Thinking in the 1980s. Today Cognition & Complexity are at the forefront. By honouring the Past, we learn in the Present. We keep the best things from the previous paradigms and let go of the proven myths and fallacies.

Safety-II isn’t just about safety
Drinking a cup of coffee should be a total experience, not just tasting the liquid. It includes smelling the aroma, seeing the Barista’s carefully crafted cream design, hearing the first slurp (okay, I confess). Safety should also be a total experience.

Safety can emerge from efficient as well as effective conditions. Experienced workers know that a well-oiled, smoothly running machine is low-risk and safe. However, they constantly monitor it by watching gauges, listening for strange noises, and so on. These are efficient conditions: known minimums, maximums, and optimums that enable safety to emerge. We do things right.

When conditions involve unknowns, unknowables, and unimaginables, the shift is to effectiveness. We do the right things. But what are these right things?

It’s about being in the emerging Present and not worrying about some distant idealistic Future. It’s about engaging the entire workforce (i.e., wisdom of crowds) so no hard selling or buying-in is necessary. It’s about introducing catalysts to reveal new work patterns. It’s about conducting small “safe-to-fail” experiments to shift the safety culture. It’s about the quick implementation of safety solutions that people want now.

Signing off and heading to Starbucks.

Apple buying Beats might be a safe-to-fail experiment

The music industry is a complex adaptive system (CAS). The industry is full of autonomous agents who have good and bad relationships with each other. Behaviours and reactive consequences can build on each other. Media writers and industry analysts are also agents, easily attracted to big events. Their comments and opinions add to the pile and fuel momentum. However, the momentum is nonlinear. Either interest in the topic eventually falls off as pundits tire and move on, or the feverish pitch continues. Alternatively, a CAS phenomenon called a tipping point occurs. The music industry then changes. The change might be small or a huge paradigm shift. It can’t be predicted; it will just emerge. In complexity jargon, the system doesn’t evolve but co-evolves. It’s asymmetrical; in other words, there is no reset or UNDO button to go back to before the event.

While I might have an opinion about Apple buying Beats, I’m more interested in observing music industry behaviour. Here’s one perspective. I’ll use complexity language and apply the Cynefin Framework.

1. Apple is applying Abductive thinking and playing a hunch.

“Let’s buy Beats because the deal might open up some cool serendipitous opportunities. We can also generate some free publicity and let others promote us, and have fun keeping people guessing.  Yeh, it may be a downer if they write we’re nuts. But on the upside they are helping us by driving the competition crazy.”

2. Apple is probing the music industry by conducting a safe-to-fail experiment.

“It’s only $3.2B, so we can use some loose change in our pockets. Beats is pulling in $1B annual revenue, so really it’s no big risk.”

3. Apple will monitor agent behaviour and observe what emerges.

“Let’s see what the media guys say.”
“Let’s read about researchers guessing what we’re doing.”
“Let’s watch the business analysts tear their hair out trying to figure out a business case with a positive NPV. Hah! If they only knew a business case is folly in the Complex domain, since predictability is impossible. That’s why we’re playing a hunch, which may or may not be another game changer for us.”

4. If the Apple/Beats deal starts going sour, dampen or shut down the experiment.

“Let’s have our people on alert to detect unintended negative consequences. We can dampen the impact by introducing new information and watch the response. If we feel it’s not worth saving, we’ll cut our losses. The benefits gained will be what we learn from the experiment.”

5. If the Apple/Beats deal takes off, accelerate and search for new behaviour patterns to exploit.

“The key agents in the CAS to watch are the consumers. Observing what they buy is easy. What’s more important is monitoring what they don’t buy. We want to discover where they are heading and what the strange attractor is. It might be how consumers like to stream music, how they like to listen to music (why only ears?), or simply that cool headphones are fashion statements.”

6. Build product/service solutions that  exploit this new pattern opportunity.

“Once we discover and understand the new consumer want, be prepared to move quickly. Let’s ensure our iTunes Radio people are in the loop, as well as the AppleTV and iWatch gangs. Marketing should be ready to use the Freemium business model. We’ll offer the new music service for free to create barriers to entry that block competitors who can’t afford to play the new game. It will be similar to the free medical/safety alert service we’ll offer with the iWatch. Free for Basic and then hook ’em with the gotta-have Premium.”

7. Move from the Complex domain to the Complicated Domain to establish order and stability.

“As soon as we’re pretty certain our Betas are viable, we’ll put our engineering and marketing teams on it to release Version 1. We’ll also start thinking about Version 2. As before, we’ll dispense with ineffective external consumer focus groups. We’ll give every employee the product/service and gather narrative (i.e., stories) about their experiences. After all, employees are consumers, and if it’s not great for us, then it won’t be great for the public.

Besides learning from ourselves, let’s use our Human Sensor network to cast a wide net for emerging new technologies and ideas. Who knows, we might find another Beats out there we can buy to get Version 2 to market earlier.”

Fantasy? Fiction? The outcomes may be guesses but the Probe, Sense, Respond process in the Cynefin Complex Domain isn’t.

 

When a disaster happens, look for the positive

In last month’s blog I discussed Fast Recovery and Swarming as two strategies to exit the Chaotic Domain. These are appropriate when looking for a “fast answer”. A third strategy is asking a “slow question.”

[Figure: Resilience as Cynefin Dynamics]

While the process flow through the Cynefin Framework is similar to Swarming (Strategy B), the key difference is not looking for a quick solution but attempting to understand the behaviour of agents (humans, machines, events, ideas). The focus is on identifying something positive emerging from the disaster: a serendipitous opportunity worth exploiting.

By conducting safe-to-fail experiments, we can probe the system, monitor agent behaviour, and discover emerging patterns that may lead to improvements in culture, system, process, structure.

Occasions can arise when abductive thinking could yield a positive result. In this type of reasoning, we begin with some commonly known facts that are already accepted and then work towards an explanation. The vernacular would be playing a hunch.

Snowstorm Repairs

In the electric utility business, when the “lights go out”, a trouble crew is mobilized and the emergency restoration process begins. Smart crews are also on the lookout for serendipitous opportunities. One case involved a winter windstorm causing a tree branch to fall across live wires. Upon restoration, the crew leader took it upon himself to contact customers affected by the outage to discuss removal of other potentially hazardous branches. The customers were very willing and approved the trimming. The serendipity arose because these very same customers had vehemently resisted having their trees trimmed in the Fall as part of the routine vegetation maintenance program. The perception held then was that the trees were in full bloom and aesthetically pleasing; the clearance issues were of no concern. Being out of power for a period of time in the cold winter can shift paradigms.

Flipping the Classroom is a Safe-to-Fail Experiment

Teaching in a classroom is a complex issue. Some teachers try to maintain a stable, predictable environment: train by rote, test, retest, pass students anyway to keep them in their age category. Sir Ken Robinson calls this teaching using the factory model. What are the alternatives? One that’s catching on is Flipping the Classroom.


Universities like Penn State are trying it out. More and more teachers at the high school level are giving it a go.

On YouTube, an 8th grade math and Algebra 1 teacher at a public school explains why she flipped her classroom in 2011.

Recently the Vancouver Sun newspaper carried the headline “Flipped classrooms create magic and controversy in schools.” Carolyn Durley “flipped” her Biology 12 class at Okanagan Mission secondary in Kelowna. On the other hand, David Wees, a math and science teacher at Stratford Hall in Vancouver, isn’t a fan of the flipped classroom because of the potential for homework overload, but he said he supports any strategy that reduces teacher-led instruction.

This blog isn’t about the pros and cons, why you should and why you shouldn’t. It’s not about lambasting lazy teachers, lazy students, lazy administrators, lazy parents, or lazy school trustees. Or perhaps it’s not laziness at all, but a matter of being too busy with other life priorities; it’s easier to stay conditioned to the factory model.

This blog is about how to make change in a system steeped in tradition and culture. It starts with somebody not being happy. In Durley’s case, she began to wonder if she was serving her pupils well as “the sage on the stage,” given the wealth of information available online and the growing expectation that schools in the 21st century will do more than deliver content and help students prepare for standardized tests. It might be educational leaders desiring a better solution to achieve the real goal: putting students in the centre. Or it could be a school trustee frustrated with the present money-sucking black hole and wanting more bang for the buck. In Change Management terminology, we call this the “emotional need for change”. Without it, the status quo will continue.

So our heart is now pumping and our head is demanding that something be done. Where do we start? Sadly, we often start off on the wrong foot. How many times have we seen this: a splashy announcement is broadcast stating the formation of an astute task force mandated to launch a study. In six months they will report back, confident that the “problem” will be fixed. Alas, with the passing of time more urgent problems arise, so the report is shelved indefinitely. The problem hasn’t gone away; it’s just been put on the back burner.

If the preceding was the wrong foot, what’s the right foot? It begins with the correct approach: recognizing that teaching is a Complex Domain issue. The aim is to make sense of what’s happening and to search for behavioural patterns that can lead us to new solutions. We are on the Unordered side of the Cynefin Framework, where we observe to understand and learn, not the Ordered side, where we fix, repair, or get back on track.

Flipping the classroom is a Safe-Fail Experiment
Conducting a Safe-Fail Experiment involves probing the system, sensing the behaviour changes, and responding by either accelerating the positives or dampening the negatives. We will even shut it down if it becomes dangerous to health and wellness.

What Southridge school in Surrey did last year is a great example. They gave the flipped classroom a test run and invited three experts to the school in late August to train all of the senior school teachers.

Be aware that a Safe-Fail Experiment is different from a pilot project in the Complicated domain. Pilots are typically the second step in a larger implementation; the first step was a business case prepared by an expert analyst. In some circumstances, money may have been spent on a feasibility study prior to the more detailed business case. A decision is made and approval to proceed granted. The pilot project’s role is to pick a test area, find oversights, and make refinements before rolling the initiative out to the masses. Once launched, implementations are very rarely stopped. After all, who wants to look bad and confess they made a mistake? Nope, once the track is put down and the train leaves the station, the project manager’s goal is to stay on track and get there on time, on budget.

This isn’t to say a Safe-Fail Experiment or Probe is an off-the-cuff effort. On the contrary, the discipline of project management is just as important. There is a leader and a team who follow a prescribed method to put the experiment into motion, monitor the impact on a regular basis, and assess whether positive outcomes should be increased and negatives reduced. Like a project, it has a start and an end point. A Safe-Fail Experiment usually runs 3 to 6 months, ample time to sense how the system behaves and what patterns emerge. And herein lies the big difference: it’s the patterns that potentially lead us to new solutions. The answer may or may not be the actual probe put into motion; you may serendipitously discover something else significantly better. In fact, a radical, provocative, crazy probe is a more effective stimulus to jolt the system.

Flipping the Classroom is a radical probe and is gaining traction. But it may not be the final destination. “This is an exciting time in education because technology is finally pushing people and organizations to change the way they do things,” said Cameron Seib, Southridge school’s math curriculum leader.

Another Safe-to-Fail Experiment: New Starbucks Store in Denver

If I say “let’s get a coffee at Starbucks”, what images start appearing in your mind? The odds are high it’s a traditional store with large square footage. This is the brand that Starbucks has successfully created and the paradigm it has embedded inside our brains. It’s how we perceive what a Starbucks store looks like, and we feel very comfortable being in one.


While we as customers feel comfortable, Starbucks understands it can’t stand still; otherwise it will start heading down the negative slope of the S-Curve business life cycle. There are only so many large coffee houses you can build and expect profitability. They need to constantly think differently and explore other ways of meeting their mission statement: “To inspire and nurture the human spirit – one person, one cup and one neighborhood at a time.”

Understanding why people visit a coffee house is a Complex Domain issue. Type of drinks offered, food served, price, location, convenience, ambience, social atmosphere, and free amenities like WiFi and iTunes downloads are all factors that go into making up the Starbucks brand. These needs are well established. However, are there other consumer needs that a Starbucks store can satisfy? What about Art to emotionally touch the heart? As customers become more energy-saving conscious, what about a coffee house that has a small carbon footprint and is environmentally green? Really good questions that need answering.

Decision time. You can choose to analyze and build a “fail-safe” business case filled with forecasts, assumptions, and mitigations for every known risk. Then run it up the corporate ladder and negotiate with the strategy, marketing, finance, operations people to support it.


Alternatively, you can pose a “what-if” Value Proposition and conduct a Safe-Fail experiment. Try something small, monitor how people respond, learn what works and amplify, learn what doesn’t work and dampen, and be on the lookout for better opportunities that serendipitously emerge.
Smart business people will do the latter because it’s relatively better, cheaper, and faster to execute. You avoid the ego problems that come with owning an idea and defending it when it goes awry. The initial idea is simply a starting point. It’s just a probe, a drop of water into a pool: watch the ripple patterns and see where they go. If the idea catches on and even goes viral, then you have discovered a new solution. Now move back into the Complicated Domain to install the processes, systems, and structures.

This week in Colorado, Starbucks opened a store unlike any before it.

There are no leather chairs or free power outlets. In fact, there’s no space for the customer at all. Starbucks has reimagined the coffee hut as a “modern modular,” LEED-certified drive-thru and walk-up shop. The building was constructed in a factory and delivered by truck, but its facade is clad in gorgeous old Wyoming snow fencing. At a mere 500 square feet, there is just enough space to squeeze in three to five employees along with all of the coffee-making apparatus necessary to execute a full Starbucks menu.

Their new building paradigm is a confluence of all these various impulses: the environment, localism, market growth, low cost, low-risk expandability. While officially labelled a “pilot program”, it sounds to me like it’s still in the Safe-Fail experiment stage. Conceivably the local Denver folks might totally avoid it and the store will fail in terms of profitability. No big deal. You respond by closing the store, picking it up, re-imagining it, and trying another location. Consider the money spent as an investment in learning and anticipating the future.

This blog was inspired by the design folks at Fast Company. To read their awesome article in full and learn more about Starbucks, click here.