Tag Archives: Obliquity

Chucking SVA out of my Strategy toolbox

Shareholder Value Added (SVA): the singular goal of a company should be to maximize the return to shareholders. It was the big idea starting in the late 1970s, and I was exposed to it during my MBA years. As bright-eyed, bushy-tailed (or was it bushy-eyed, bright-tailed?) students, we consumed the defining article “Theory of the Firm: Managerial Behavior, Agency Costs and Ownership Structure.”

As a young manager rising in the ranks, I learned my job was not to selfishly optimize benefits for myself and my staff but to maximize them for the owners of the firm. I must admit I had some trouble rationalizing the idea at the time. As the owner of a few common stocks, the concept resonated with me. Imagine that, employees working solely on my behalf. But the other thought in the back of my mind was that I was ready to dump the stock if something better came along or if I needed the cash. Employee loyalty, nice. My shareholder loyalty, uhh.

For the next 20 years, up to his retirement in 2001, Jack Welch played the SVA theory to grow GE into the largest and most valuable company in the world. Others followed the form and function. Who was I to argue? Put the discomfort aside.

What emerged was the widespread use of stock options as executive compensation tools. This led to execs concentrating on increasing stock price and to Wall Street pushing for short-term, myopic returns to satisfy shareholders. We also saw questionable practices such as outsourcing to lower costs, followed by using the savings to repurchase stock to drive up the price. And then there was out-and-out bad behaviour, with C-suite manipulation of accounting numbers (e.g., Enron). The discomfort didn’t go away; it got worse.

In 2009 the discomfort loop was closed. Jack Welch stated “On the face of it, shareholder value is the dumbest idea in the world. Shareholder value is a result, not a strategy… your main constituencies are your employees, your customers and your products. Managers and investors should not set share price increases as their overarching goal. … Short-term profits should be allied with an increase in the long-term value of a company.” Begone SVA.

If SVA is now out, what’s taken its place in my toolbox?

S-curve evolution of Business

I favour John Kay’s Obliquity idea that firms should focus on Purpose, not directly on profit, growth, or market share. Profit is still important, but how you go about achieving it is different. You don’t establish a profit or market share goal and shoot for it as a target. Instead you set your sights on something better (like purpose); profit and market share are natural outcomes.

In a similar fashion, you don’t set Happiness as a personal goal. Happiness emerges as you strive to achieve a greater cause or contribution.

How does this fit with complexity thinking? Developing a safe-to-fail experiment in the Cynefin Complex Domain is obliquity in action. It’s an indirect approach to discovering patterns that can lead us to new ways and solutions that satisfy customers, employees, and yes, shareholders.

And now for something completely different


Fans of Monty Python will quickly recognize this blog’s title. Avid followers will remember the Fish Slapping Dance.


Well, the time has come when many of us feel the need to slap traditional safety practices on the side of the head with a fish. If you agree, check out safetydifferently.com. What it’s about:

Safety differently. The words seem contradictory next to each other. When facing difficulties with a potentially disastrous loss, it makes more sense to wish for more of what is already known. Safety should come from well-established ways of doing things, removed far from where control may be lost.

But as workplaces and systems become increasingly interactive, more tightly coupled, subject to a fast pace of technological change and tremendous economic pressures to be innovative and cutting-edge, such “well-known” places are increasingly rare. If they exist at all.

There is a growing need to do safety differently.

People and organisations need to adapt. Traditional approaches to safety are not well suited for this. They are, to a large extent, built on linear ideas – ideas about tighter control of work and processes, of removing creativity and autonomy, of telling people to comply.

The purpose of this website is to celebrate new and different approaches to safety. More specifically it is about exploring approaches that boost the capacity of people and organisations to handle their activities successfully, under varying conditions. Some of the analytical leverage will come from theoretical advancements in fields such as adaptive safety, resilience engineering, complexity theory, and human factors. But the ambition is to stay close to pragmatic advancements that actually deliver safety differently, in action.

Also, bashing safety initiatives that are based on constraining people’s capacities will be a frequent ingredient.

I had the opportunity to share my thoughts and ideas on what  “safety differently” is. I have expanded on these and added a few reference links.

Safety differently is not blindly following a stepping-stone path but taking the time to turn over each stone and ask why the stone is here in the first place, what the intent was, and whether it is still valid and useful.

During engineering school, I was taught to ignore friction as a force and to use first-order approximate linear models. When measuring distance, ignore the curvature of the Earth and treat the path as a straight line.

Linear Reductionist Safety

Much of this thinking has translated into safety. Examples include Heinrich’s safety triangle, Domino theory, and Reason’s Swiss Cheese model. Safety differently is acknowledging that the real world is non-linear and Pareto-distributed. It’s understanding how complexity (in terms of diversity, coherence, tipping points, strange attractors, emergence, and Black Swans) impacts safety practices. Safety differently is no longer seeing safety as a product or a service but as an emergent property of a complex adaptive system.

We can learn a lot from the amazing Big History Project. David Christian talks about the Goldilocks conditions that lead to increased complexity. Safety differently looks at what these “just right” conditions are that allow safety to emerge.

Because safety is an emergent property, we can’t create, control, or manage a safety culture. Safety differently researches modulators that might influence and shape the present culture. Modulators are different from Drivers. Drivers are cause-and-effect oriented (if we do this, we get that), whereas Modulators are catalysts we deliberately place into a system without being able to predict what will happen. It sounds a bit crazy, but it’s done all the time. Think of a restaurant experimenting with a new menu item. The chef doesn’t actually know if it will be a success. Even if it fails, the chef will learn more about the clientele’s food preferences. We call these safe-to-fail experiments, where the knowledge gained is worth more than the cost of the probe.

Cynefin Framework
Cynefin Framework: Follow the Green line to Best Practices

Safety differently views best practices as a space where ideas go to die. It’s the final end point of a linear path. Why die? Because there is no feedback loop that allows change to occur. Why no feedback loop? Because, by definition, these are best practices and no improvement is necessary! Unfortunately, a Goldilocks condition we call complacency arises, and the phenomenon is a drift into failure.

Safety differently means using different strategies when navigating complexity. A good example is the Saudi practice that Ron mentioned of taking a roundabout route to get to a destination. This is a complexity strategy that John Kay wrote about in his book Obliquity [Click here to see his TEDx talk].


Safety differently means deploying abductive rather than traditional deductive and inductive reasoning.

Safety differently means adaptation + exaptation as well as robustness + resilience.

Heinrich Revisited

Safety differently is not dismissing the past. We honour it by keeping practices that continue to serve us well. But we will abandon approaches, methods, and tools that complexity science, cognitive science, and other holistic research have now proven to be myths or fallacies.


Accident Prone

In the early 1900s, humans were considered accident prone; management took action to keep people from mishandling machines and upsetting efficiency. Today, we know differently: the purpose of machines is to augment human intelligence and release people from non-thinking labour roles.

When thinking of Safety, think of coffee aroma

Safety has always been a hard sell to management and to front-line workers because, as Karl Weick put forward, safety is a dynamic non-event. Non-events are taken for granted. When people see nothing, they presume that nothing is happening and that nothing will continue to happen if they continue to act as before.

I’m now looking at safety from a complexity science perspective as something that emerges when system agents interact. An example is aroma emerging when hot water interacts with dry coffee grounds. Emergence is a real-world phenomenon that Systems Thinking does not address.

Safety-I and Safety-II do not create safety but provide the conditions for safety to dynamically emerge. But as a non-event, it’s invisible and people see nothing. Just as safety can emerge, so can danger as an invisible non-event. What we see is failure (e.g., accident, injury, fatality) when the tipping point is reached. We can also reach a tipping point when we do too much of a good thing. Safety rules are valuable, but if a worker is overwhelmed by too many, danger in the form of confusion and distraction can emerge.

I see great promise in advancing the Safety-II paradigm to understand what are the right things people should be doing under varying conditions to enable safety to emerge.

For further insights into Safety-II, I suggest reading Steven Shorrock’s posting What Safety-II isn’t on Safetydifferently.com. Below are my additional comments under each point made by Steven with a tie to complexity science. Thanks, Steven.

Safety-II isn’t about looking only at success or the positive
Looking at the whole distribution and all possible outcomes means recognizing there is a linear Gaussian and a non-linear Pareto world. The latter is where Black Swans and natural disasters unexpectedly emerge.

Safety-II isn’t a fad
Not all Safety-I foundations are based on science. As Fred Manuele has proven, Heinrich’s Law is a myth. John Burnham’s book Accident Prone offers a history of the rise and fall of the accident-proneness concept. We could call them fads, but it’s difficult to, since they have been blindly accepted for so long.

This year marks the 30th anniversary of the Santa Fe Institute, where complexity science was born. At the May 2012 Resilience Lab I attended, Erik Hollnagel and Richard Cook introduced the RMLA elements of Resilience engineering: Respond, Monitor, Learn, Anticipate. They fit with Cognitive-Edge’s complexity view of resilience: Fast recovery (R), Rapid exploitation (M, L), Early detection (A). This alignment has led to one way to operationalize Safety-II.

Safety-II isn’t ‘just theory’
As a pragmatist, I tend not to use the word “theory” in my conversations. Praxis is more important to me than spewing theoretical ideas. When dealing with complexity, the traditional Scientific Method doesn’t work. The reasoning is neither deductive nor inductive but abductive: the logic of hunches, based on past experiences and making sense of the real world.

Safety-II isn’t the end of Safety-I
The focus of Safety-I is on robust rules, processes, systems, equipment, materials, etc. to prevent a failure from occurring. Nothing wrong with that. Safety-II asks what we can do to recover when failure does occur, plus what we can do to anticipate when failure might happen.

Resilience can be more than just bouncing back. Why return to the same place only to be hit again? Rapid exploitation means finding a better place to bounce to. We call it “swarming”, or Serendipity if an opportunity unexpectedly arises.

Safety-II isn’t about ‘best practice’
“Best” practice does exist, but only in the Obvious domain of the Cynefin Framework. It’s the domain of intuition, the “thinking fast” of Daniel Kahneman’s book Thinking, Fast and Slow. What’s the caveat with best practices? There’s no feedback loop, so people just carry on as they did before. Some best practices become good habits. On the other hand, danger can emerge from the bad ones, and one will drift into failure.

Safety-II and Resilience are about catching yourself before drifting into failure: being alert to weak signals (e.g., surprising behaviours, strange noises, unsettling rumours) and having physical systems and people networks in place to trigger anticipatory awareness.

Safety-II isn’t what ‘we already do’
“Oh, yes, we already do that!” is typically expressed by an expert. It might be a company’s line manager or a safety professional. There’s minimal value in challenging the response. You could execute an “expert entrainment breaking” strategy. The preferred alternative? Follow what John Kay describes in his book Obliquity: Why Our Goals are Best Achieved Indirectly.

Don’t even start by saying “Safety-II”. Begin by gathering stories and making sense of how things get done and why things are done a particular way. Note the stories about doing things the right way. Chances are pretty high most stories will be around Safety-I. There’s your data, your evidence that either validates or disproves “we already do”. Tough for an expert to refute.

Safety-II isn’t ‘them and us’
It’s not them/us, nor either/or, but both/and. Safety-I + Safety-II. It’s Robustness + Resilience together. We want to analyze all of the data available, both when things go wrong and when things go right.

The evolution of safety can be characterized by a series of overlapping life cycle paradigms. The first paradigm was Scientific Management followed by the rise of Systems Thinking in the 1980s. Today Cognition & Complexity are at the forefront. By honouring the Past, we learn in the Present. We keep the best things from the previous paradigms and let go of the proven myths and fallacies.

Safety-II isn’t just about safety
Drinking a cup of coffee should be a total experience, not just tasting the liquid. It includes smelling the aroma, seeing the barista’s carefully crafted cream design, hearing the first slurp (okay, I confess). Safety should also be a total experience.

Safety can emerge from efficient as well as effective conditions. Experienced workers know that a well-oiled, smoothly running machine is low-risk and safe. However, they constantly monitor it by watching gauges, listening for strange noises, and so on. These are efficient conditions (known minimums, maximums, and optimums) that enable safety to emerge. We do things right.

When conditions involve unknowns, unknowables, and unimaginables, the shift is to effectiveness. We do the right things. But what are these right things?

It’s about being in the emerging Present, not worrying about some distant, idealistic Future. It’s about engaging the entire workforce (i.e., the wisdom of crowds) so no hard selling or buying-in is necessary. It’s about introducing catalysts to reveal new work patterns. It’s about conducting small safe-to-fail experiments to shift the safety culture. It’s about the quick implementation of safety solutions that people want now.

Signing off and heading to Starbucks.