
Radical Innovation in Mining Management: The Information Age – Myth 1 Reinforcement and Myth 2 Domination

The following article was published on 2019 July 24 at  http://www.austmine.com.au/News/radical-innovation-in-mining-management-article-2-the-information-age-myth-1-reinforcement-and-myth-2-domination

Austmine is the leading industry body for the Australian Mining Equipment, Technology and Services (METS) sector.

In the last two editions, we discussed how yesterday’s solutions have led to three myths that control current mining thinking.

Myth 1: The best way to run a mine is to focus on cost certainty and manage people as if they are parts of a machine.

Myth 2: Mine operations should be optimised from start to finish to produce the best results.

Myth 3: We can achieve social licence acceptance and safety aims within our current management paradigm by pursuing effective culture change.

This article explains the second “yellow bubble” we find ourselves in today.

The Information Age is so labelled because of the advent of powerful computer hardware and software. Understandably, Myth 1 paradigms were hardcoded by ERP system designers as prevailing business practice. One cost-certainty assumption is that savings across all areas are additive: a cent saved in a department is a cent saved for the overall organisation. This belief meant performance targets could be set at the department level. With information accessible at the click of a button, micro-management has flourished. We’ve seen the budgeting process used to approve and control departments with just enough capacity to run at average demand. We’ve also seen accounting-based KPIs set using historical budget numbers. This rests on a dangerous assumption carried over from the Industrial Age: that the past repeats itself and can predict the future.

Systems Thinking in the Information Age has emphasised optimising business processes from a start-to-finish, end-to-end perspective. However, clashes between process and technology have surfaced. It’s common during implementation to force organisational process changes to suit the “best practices” built into the software. After the IT system goes live, making subsequent software changes is extremely difficult. User groups are asked to wait patiently for the next “real soon” version, and perhaps pay for the upgrade. It’s intriguing to learn from resourceful employees how their spreadsheet workarounds keep work flowing while being discreetly hidden from prying eyes.

The mix in today’s Yellow Bubble

In Article 1, we noted the global decline in employee engagement as well as the decline in mining productivity, which started 15 years ago. These sobering findings are corroborated by McKinsey[1], which found that global mining productivity has decreased by 29% over the last decade. From 2014 to 2016, McKinsey’s MineLens reported a 2.8% per annum uptick in overall mining productivity. Two main trends underlie these modest gains: a 3% annual reduction in headcount and tightly controlled capital spending and non-labour operating expenditure.

Mining technologies are being heavily promoted as today’s solution. Despite great promise in digital advancements, many companies are struggling to embrace tech-enabled transformation. One growing fear is that humans are becoming more and more subservient to technology. Instead of technology enabling humans to perform well, the inverse is occurring.

Let’s take a closer look at the people side of mining. Myth 1, combined with Myth 2, has led to three organisational conflicts:

1. Top Management seeking to achieve “work-as-reported” company performance targets.

2. Centralised administrators trying to optimise a “work-as-imagined” end-to-end process.

3. Operations managers and supervisors attempting to increase local “work-as-done” productivity.

We will address the origins of each conflict and how to resolve them by “mining differently.”

1. Classical Management Theory

Besides Frederick Taylor, two other Industrial Age thinkers influenced the running of organisations: Max Weber and Henri Fayol. Together the three pioneers formed what is known as Classical Management Theory.

Max Weber focused at the highest level with his doctrine of Bureaucracy. His work addressed the problem of factory work attracting untrained rural workers to the cities. The master/apprentice craft model was ill-suited to the high demand and volume of technology-driven industrialisation. Company owners and directors welcomed Weber’s solution with open arms.

The impact of his contribution is summed up by Gary Hamel:

“Most of us grew up in and around organisations that fit a common template. Strategy gets set at the top. Power trickles down. Big leaders appoint little leaders. Individuals compete for promotion. Compensation correlates with rank. Tasks are assigned. Managers assess performance. Rules tightly circumscribe discretion. This is the recipe for “bureaucracy,” the 150-year old mashup of military command structures and industrial engineering that constitutes the operating system for virtually every large-scale organisation on the planet.”[2]

French mining engineer Henri Fayol gave his attention to the middle layer, the managerial class. He laid down 14 principles of management for improving overall administration and for how managers would control the internal activities of the company. Fayol’s 14 principles are accepted as a manager’s approach and the foundation of Administrative Science. In contrast, Taylor’s Scientific Management is termed an engineer’s perspective, oriented towards production and operations at the lowest level.

Collectively they reinforced the view that organisations were functional machines controlled to deliver efficiency and productivity. Fayol declared that there must be a place for everything and that each thing must be in its appointed place. He described how control would be executed.

“An employee will receive orders from one boss only.” (Unity of Command)
“All the organisational units should work for the same objectives through coordinated efforts.” (Unity of Direction)
“Individual or group interests are sacrificed or surrendered for the general interest.” (Subordination)
Ultimate accountability flowed hierarchically to the very top characterised by US President Harry Truman’s famous phrase: “The buck stops here.”

Here’s the vertical rub. Top Management sees work-as-reported measured against corporate performance targets typically set by Weber’s bureaucrats. Fayol’s middle managers idealistically plan work-as-imagined using the available resources. Taylor’s operational managers create work-as-prescribed, limited by imposed regulations, standards, and rules. The front-line workers perform work-as-done after adapting to daily variability and interdependencies. Operations are sensitive to how their abilities are scrutinised, so what is communicated up the line is work-as-disclosed. Systemic problems multiply and remain unresolved. Eventually, a tipping point is reached, and catastrophic failure occurs. Frequently the CEO is the last to find out and the first to mutter, “Why wasn’t I informed?”

Systems thinking offered a “horizontal” view to describe and understand how organisations work. While the buck stops here, the work flows horizontally. Where best practice prescribed only one right way for efficiency, systems thinking raised the possibility of more than one correct answer. Choice was a novel idea, and the goal was to find an optimal solution from a range of choices for effectiveness.

Scientific Management evolved into the discipline of Engineering. The paradigm was straightforward. Envision an idealistic future state and design a perfect system working linearly backwards from finish to starting point. Use standardised project and change management practices to build, operate, and maintain the orderly flow through the parts of the system (technology, process, people).  Measure deviations from control norms and fix to get back on track.

It’s not difficult to find companies today operating under Classical Management Theory, practising Myth 1, and using systems thinking tools developed in the Information Age. They still think their organisations should operate like integrated machines composed of working parts that fit together seamlessly, like Henry Ford’s Model T automobile.

“In this machine view, organisations should be designed to run like clockwork. Organisational structures should follow rules that determine where resources, power, and authority lie, with clear boundaries for each role and an established hierarchy for oversight. When decisions require collaboration, governance committees should bring together business leaders to share information and to review proposals coming up from the business units. All processes should be designed in a very precise, deliberate way to ensure that the organisation runs as it should and that employees can rely on rules, handbooks, and priorities coming from the hierarchy to execute tasks. Structure, governance, and processes should fit together in a clear, predictable way.”[3]

What should a mining company do? Dispense with their vertical hierarchy and abandon Classical Management theory? No. We suggest thinking differently.

Think of everything cited in the above quote as a system constraint that is either controlling, governing, or enabling. Think of bureaucracy as a system property that emerges from the blending of constraints. A desirable form of bureaucracy is Stability: a machine that is well oiled and humming, where the constraints are in the right proportions and all are working together. On the other hand, stringent command-and-control rules, goal conflicts, and information gaps are examples that enable an undesirable form to emerge: Extreme bureaucracy. Employees are so tightly restricted they become paralysed, fearful of violating a constraint and being punished. During a change initiative, they are told: “If you don’t change, you will be changed.”

Fortunately, there is a Mining Differently alternative to help find an appropriate balance. In the next article, we will describe an anthro-complexity approach that can reveal constraints causing strained relationships and interactions amongst people at all levels in the vertical organisation hierarchy.

2. The Rise of IT systems – Enterprise Resource Planning

Before the advent of IT, local managers faced the problem of making ill-informed decisions. Moving information was paper-based and painstakingly slow. This typically meant that operations managers and supervisors had to try to run as efficiently as possible within their own areas. With an online ERP system, they could now access timely information on local costs and resources. But so could others, including Weber’s bureaucratic analysts and Fayol’s middle managers.

“If you can’t measure it, you can’t manage it.”

“Under scientific management,” Taylor wrote, “the managers assume … the burden of gathering together all of the traditional knowledge which in the past has been possessed by the workmen and then of classifying, tabulating, and reducing this knowledge to rules, laws, formulae…. Thus all of the planning which under the old system was done by the workmen must of necessity under the new system be done by the management in accordance with the laws of the science.”

Reliance on ERP numbers not only gave the impression of scientific expertise based on “hard” evidence, it also replaced intuitive judgment, the lessons learned from previous experiences. Management demanded more data—standardised KPIs, ratios, statistics. And ERP delivered.

Professor Jerry Muller has coined this questionable managerial pattern “metric fixation.”

“When proponents of metrics advocate “accountability” they tacitly combine two meanings of the word. On the one hand, to be accountable means to be responsible. But it can also mean “capable of being counted.” Advocates of “accountability” typically assume that only by counting can institutions be genuinely responsible. Performance is therefore equated with what can be reduced to standardised measurements.” [4]

Muller describes the damage our obsession with metrics is causing. “In our zeal to instil the evaluation process with scientific rigour, we’ve gone from measuring performance to fixating on measuring itself. The result is a tyranny of metrics that threatens the quality of our lives and most important institutions.”

As the Information Age advances into Big Data analytics, there is the digital vision of predictive algorithms replacing the need for human decision-making. What will happen if mining operation algorithms are implemented based on myths and fallacies?

What should a mining company do?  Drop ERP reporting? Eliminate KPIs and performance targets? No. We suggest thinking differently.

ERP data should augment the decisions made by humans. Software packages are communication and organisation tools. They provide content to answer the “who, what, when, where” queries, but not the “why” question, because they are unable to capture context. While they can offer helpful insights into existing work conditions, they can’t capture the non-quantifiable emotional and irrational factors humans use to make decisions. While it seems more straightforward to trust the data, it’s better to trust human judgment.

Wait a minute. Can we trust humans? Well, it depends on the organisation’s system constraints, in this case, performance management. A common practice involves setting annual goals for employees and turning measures into numerical targets to achieve.

Anthropologist Marilyn Strathern’s paraphrasing of Goodhart’s Law, “When a measure becomes a target, it ceases to be a good measure,” is a clear warning of the downside of measures on human behaviour. Be careful what you wish for. Humans are skilled at “gaming” an incentive system to earn monetary (pay-for-performance, safety bonus) or reputational (rankings) rewards. The needs of self can take priority over the needs of the many. Employee stories gathered with an anthro-complexity approach can shed light on unhealthy stress caused by metric-fixation constraints.

Success or failure is neither created nor controllable; it is an emergent outcome of the system. Thinking differently means letting go of holding individuals accountable for results they cannot control. Don’t blame the person; blame fixes nothing. Put the onus on the system. Treat KPIs not as scoreboard targets but as dashboard gauges monitoring progress.

In systems thinking where there is more than one right answer, choosing an optimal solution sounds reasonable. Therefore, it seems perfectly logical to believe Myth 2: “Mine operations should be optimised from start to finish to produce the best results.”  But it’s false. Paradoxically the opposite is true. Eli Goldratt in his Theory of Constraints has mathematically proved:

“The closer you are to a balanced capacity chain, the closer you are to bankruptcy.” 
“If you want to make money, most of your resources must be idle from time to time.”

3. The Theory of Constraints

Myth 2 is a violation of the Theory of Constraints. TOC is a management paradigm created by Eli Goldratt. He viewed any manageable system as being limited in achieving more of its goals by a small number of constraints. Constraints include material, equipment, vehicles, people, policies, and rules. TOC constraints are typically viewed as restricting or controlling, but they can also be governing and enabling. For example, a policy can be considered enabling because it reduces the burden of too many choices down to one, so people can quickly move into action mode.

Envision a mining operation as a chain of linearly connected links. TOC focuses on the “weakest link” in the chain: the bottleneck, where throughput capacity is lowest. Links that are not bottlenecks are permitted to be (and have to be) “underutilised resources.” A simple simulation, sketched below, illustrates why.
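To make Goldratt’s point concrete, here is a minimal back-of-envelope simulation in Python. It is not from the article; the four stations, their capacities, and the uniform variability are invented purely for illustration. It compares a “balanced” chain, where every station is sized exactly to the target rate as Myth 2 prescribes, with a chain that has one deliberate bottleneck and protective capacity everywhere else.

import random

def simulate_chain(mean_capacities, shifts=10_000, seed=1):
    """Average end-of-chain output per shift for a serial chain of stations.

    Each shift a station can process at most a random capacity (uniformly
    50%-150% of its mean) and no more than the work-in-progress waiting in
    front of it. The first station never starves (ore feed assumed unlimited).
    """
    random.seed(seed)
    wip = [0.0] * len(mean_capacities)   # WIP waiting in front of each station
    produced = 0.0
    for _ in range(shifts):
        for i, mean in enumerate(mean_capacities):
            capacity = random.uniform(0.5, 1.5) * mean
            done = capacity if i == 0 else min(capacity, wip[i])
            if i > 0:
                wip[i] -= done
            if i + 1 < len(mean_capacities):
                wip[i + 1] += done
            else:
                produced += done
    return produced / shifts

# Myth 2: a "balanced" chain, every station sized exactly to the 100 t/shift target.
balanced = simulate_chain([100, 100, 100, 100])

# TOC: one deliberate bottleneck at 100 t/shift, protective capacity everywhere else.
toc = simulate_chain([130, 130, 100, 130])

print(f"Balanced chain: {balanced:.1f} t/shift")
print(f"TOC chain:      {toc:.1f} t/shift (bottleneck rated at 100)")

Because dependency and variability compound, the balanced chain delivers well below its nominal 100 t/shift, while the chain with protective capacity runs close to its bottleneck rating. The “idle” capacity on the non-bottleneck stations is what protects the flow.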

Not surprisingly, “underutilised resource” raises red flags. The C-suite and Weber’s bureaucrats are concerned that making such metrics public (transparency) could lead to shareholders questioning how the company is carrying out the mandated mission (accountability). It does take some technical understanding and time to explain TOC theory adequately.

Fayol’s middle managers worry about the optics of poor resource management highlighted by ERP reports. Local operations managers fear being punished for not meeting annual performance targets. So they try to optimise their own productivity and look busy. This exerts more pressure on the bottleneck link, aggravates their situation, and decreases overall chain production flow. As stated earlier, the needs of self can take priority over the needs of the many.

What should a mining company do?  Ignore TOC? Stop ERP reporting? No. Once again, we suggest thinking differently.

Myth 2 implies ERP systems will accentuate TOC violations if incorrectly utilised. Recall what Taylor said: “…all of the planning which under the old system was done by the workmen must of necessity under the new system be done by the management in accordance with the laws of the science.” The Theory of Constraints is a law of science. Also be mindful that software packages are tools, not solutions. Therefore, change the ERP business rules to support TOC.

It’s time for the fighting between central and operational levels to cease. When ERP rules are set for end-to-end optimisation, conflicts with local managers arise whenever centralised analysts report less-than-optimal performance or outright idleness. So set the ERP rules to manage the TOC constraint instead. Adjust the rules to recognise that whatever rate of production can pass through the bottleneck constraint is the ultimate rate that can pass through the whole chain.

Goldratt’s Drum-Buffer-Rope approach makes TOC a simple system to implement. Use ERP to schedule upstream resources to keep the buffer full, using a mix of forward and backward scheduling. Always schedule downstream resources forward from the output of the constraint.[5] A sketch of this scheduling logic follows. If you are confronted with ERP rules that can’t be modified, insist on changes, unless you agree humans should be subordinated to technology.
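To illustrate the mechanics, here is a minimal Drum-Buffer-Rope sketch in Python. It is not taken from the article or from any ERP product; the job names, buffer size, and downstream lead time are invented assumptions, and a real ERP implementation would be far richer.

from datetime import date, timedelta

# Hypothetical drum schedule: the dates on which the bottleneck (say, the mill)
# is planned to process each job, in the sequence it can actually sustain.
drum_schedule = {
    "JOB-101": date(2019, 8, 5),
    "JOB-102": date(2019, 8, 6),
    "JOB-103": date(2019, 8, 7),
}

BUFFER_DAYS = 3           # time buffer protecting the constraint (assumption)
DOWNSTREAM_LEAD_DAYS = 2  # nominal time from constraint output to delivery (assumption)

def rope_release_date(drum_date, buffer_days=BUFFER_DAYS):
    # Rope: release material upstream only 'buffer_days' ahead of the drum,
    # keeping the buffer full without flooding the operation with WIP.
    return drum_date - timedelta(days=buffer_days)

def downstream_due_date(drum_date, lead_days=DOWNSTREAM_LEAD_DAYS):
    # Downstream steps are scheduled forward from the constraint's output.
    return drum_date + timedelta(days=lead_days)

for job, drum_date in drum_schedule.items():
    print(job,
          "| release upstream:", rope_release_date(drum_date),
          "| at constraint:", drum_date,
          "| due downstream:", downstream_due_date(drum_date))

The point of the sketch is that every other date in the plan is derived from the drum; the constraint’s schedule, not a department-level target, drives the releases.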

Engage local management and workers to determine where the bottleneck is. Develop ERP reports to support optimisation of the bottleneck. Stability of flow for the chain as a whole is the clear objective. We have demonstrated in over 85 interventions with the Productivity Platform (PP) that Stability at a higher level is within reach. In a PP engagement with a large mining company, we opened their eyes by showing them how their planning practice was slowing down production. They were planning their operations on a balanced capacity chain, as prescribed by Myth 2. With “just enough of everything”, production output was below target and highly unstable. Their next step was to switch to optimised flow. With the TOC adjustments, they achieved a more than 20% average increase in tons mined within four months, without increasing capital expenditure.

PP is a change platform designed around the principle of flow. Workers meet daily in a flow room. They focus on the key resources that determine revenue flow, while the rest of the system is set up with adequate protective capacity and buffers to protect the revenue flow. 
The Flow Room provides visual feedback on the processes workers are responsible for and shows them how their actions affect the overall system and the outcomes. It highlights problem areas in these processes and allows for dialogue around these processes. Management and workers simultaneously become aware of problems in the system, and constraining conditions can be addressed on the spot.

PP deploys Agile techniques like Scrum and Kanban. A drift toward Extreme bureaucracy is held in check. The anthro-complexity approach creates a flow room that is psychologically safe for people to speak up and share good and bad experiences.

A considerable benefit is shifting the local manager’s role from being full-time in charge to supporting the team in the background. With time freed up, the manager can attend to working on internal end-to-end collaboration and external social license to operate issues.

It’s time to rethink incentive paradigms and the why, what, and how of rewarding human performance in the organisation. Don’t be ruled by the tyrannical mindset that the path to success is quantifying results and doling out rewards and punishment based on the numbers. Personal success is not created by the individual but emerges from others delivering favourable consequences. Metrics can be beneficial if used to complement rather than replace judgment based on personal relationships and experiences. Put a spotlight on shared learning. Make it deliberate and fun. And don’t screw it up with incentives.

Safety in the Information Age

Classical Management Theory continues to be prominent in shaping safety thinking. Safety-I bureaucracy can grow uncontested, especially with government regulators demanding more auditing and compliance with rules. Safety industry suppliers are cashing in with software dedicated to automating safety inspection processing and reporting.

Human Factors developed as a professional discipline as a response to Classical Management Theory. Business process improvements, coupled with technological advancements, have made operating systems more complicated to manage. And when downtime occurs, it’s challenging to fault human error when there are so many moving parts. As James Reason said in 1990:

“Rather than being the main instigators of an accident, operators tend to be the inheritors of system defects created by poor design, incorrect installation and bad management decisions. Their part is usually that of adding the final garnish to a lethal brew whose ingredients have been long in the cooking.”[6]

What should the mining company do?  Increase bureaucracy to prevent accidents? Find fault, blame and punish when accidents happen? No. You’ve read it before… think differently.

Thankfully, accidents rarely happen. And yet organisations spend enormous amounts of time, effort, and money on so little data. As a result of this foolishness, a new view called Safety-II has emerged to examine when things go right, which is most of the time. The idea of ‘performance variability’ was born: a human is not a hazard but a hero who can adapt performance in response to changing conditions in the work environment. A scrum meeting in the PP flow room is an example of local workers adapting successfully to keep the line running.

Safety professionals in mining companies who don’t perform myth-busting due diligence on new technology proposals are not fulfilling their role. Be proactive and ask if humans are expected to behave perfectly like machines and how the technology allows for human error. Humans are fallible. Even the best people make mistakes. Safety by design means humans enabled by technology, not the other way around.

There have been six mining deaths in Queensland in the past 12 months, making it the worst year for mining deaths since 1997. Mines Minister Anthony Lynham announced two reviews into the industry. A review into incidents at coal mines will be expanded to include mineral mines and quarry sites, as well as all deaths at mines over the past two decades. A second, separate review is being led by the University of Queensland and will examine state mining health and safety legislation in light of emerging mine technology and practices. Both reviews are expected to be completed by the end of this year and tabled in Parliament. We hope that they will look at the Sharp End and not focus primarily on high-level issues at the Blunt End of the safety spear.

Mining companies should focus on the Sharp End, where the workers are. It’s the end with the highest injury potential but the least amount of influence. But let’s make it different. Adopt an anthro-complexity approach to safety, which gives workers significant influence over the system. 

Using the power of everyday stories from workers, we have the capacity to detect potential situations before they fail. The early warning system enables a worker to tell a story like “Hey! I have a bad feeling about this” rather than “Damn! I knew it was going to happen.”

How management responds matters to workers. To build mutual trust, “sense-making” tools operate in real time so mitigating action can be taken immediately. Stories can be collectively analysed to identify the system constraints which influence how people behave. The set of stories represents the organisation’s safety culture. If we can shift the type of stories willingly shared (i.e., more stories like “Hey…” and fewer like “Damn…”), then we have a way to change the safety culture positively.

In the next article (Radical Innovation in Mining Management- Article 3), we examine Mining Differently in the Ecology Age. Complexity Thinking takes Mining beyond Systems Thinking. The whole is greater than its parts and includes the social license community. And a culture change myth is born.

Thank you for the feedback and enthusiastic show of support. A 1-hour webinar titled Mining Differently is scheduled for August 29. This webinar will illustrate the practical problems caused by Myths 1 and 2 and how mines have successfully dealt with them.

Planned for September 25 is a follow-up Webinar – Mining Differently Part 2. This will deal with Ecology age issues: social licence to operate, safety and cultural aspects that are rising in importance.

We are also conducting 1-day Mining Differently workshops on October 31 (Sydney) and November 8 (Brisbane). To be added to our invitation list, please contact Hendrik Lourens at hendrik@stratflow.com.au.

Written by Gary Wong and Hendrik Lourens

References

1. Behind the mining productivity upswing: Technology-enabled transformation, McKinsey, 2018.
2. Bureaucracy Must Die. Gary Hamel, HBR, Dec 2014.
3. Agility: It Rhymes with Stability. McKinsey Quarterly, Dec 2015
4. The Tyranny of Metrics, Jerry Z. Muller, 2018.
5. ERP Software and The Theory of Constraints, Tom Miller, Jan 2014. http://bit.ly/2XU3few
6. Human Error, James Reason, 1990.


Radical Innovation in Mining Management: The Industrial Age fuelled by Myth 1

The following article was published on 2019 June 27 at http://www.austmine.com.au/news/radical-innovation-in-mining-management-1  Austmine is the leading industry body for the Australian Mining Equipment, Technology and Services (METS) sector.

In the last edition we introduced how yesterday’s solutions have led to three myths that control current mining thinking.

Myth 1: The best way to run a mine is to focus on cost certainty and manage people as if they are parts of a machine.

Myth 2: Mine operations should be optimised from start to finish to produce the best results.

Myth 3: We can achieve social licence acceptance and safety aims within our current management paradigm by pursuing effective culture change.

Each myth began as a solution for a specific era of time.

[Image: the life-cycle S-curve of a myth]

A myth follows a life-cycle S-curve pattern. It slowly begins as a new idea in the embryonic stage. A growth spurt occurs when people embrace the idea; the adoption rate rapidly increases. A myth can perpetuate for many years, decades, centuries. As time passes and the myth matures, it succumbs to changes in society, technology, and environment. Methods founded on the myth struggle to solve prevailing problems. Different solutions emerge, some based on research breakthroughs and some unfortunately based on pseudoscience. This crisis period is pictorialised by the “yellow bubble.” As the myth is still the dominant paradigm, myth protectors attempt to maintain the status quo by denying, challenging or crushing the rise of disruptive ideas.

It may seem wise for organisations generating big profits to be reluctant to change. Everyone has heard the story of Kodak, whose managers supposedly didn’t recognise soon enough that digital technology would decimate the traditional business. According to those managers, that story is a myth: they were very aware of the new technology. The failure lay in not convincing Kodak executives to provide R&D funding. The finance decision-makers did not want anything to disrupt the flow of money coming from film.[1]

For consultants who have created a lucrative business, it’s reasonable to keep “milking the cow.” After all, the myth has not reached its peak yet. Enticing spinoff solutions such as “train the trainer” are sold to clients to institutionalise the myth and strengthen the consulting relationship. Late maturity is often marked by a professional certification program with stepped levels of knowledge attainment. Learn all there is to know and earn a badge. But it’s also a signal that the declining stage of the S-curve is nearing.

Others realise earlier in the life-cycle that the ground beneath them has dramatically shifted. They appreciate that the myth’s thinking has been valuable and is still delivering results. However, they also know why clients are staying awake at night thinking about unresolvable problems. As Peter Senge said: “Today’s problems come from yesterday’s solutions.” It’s time to “jump the S-curve” and explore what the next Age and its solutions have to offer.

Our intent is not to criticise the past by searching for root causes and assigning blame, but to learn from it. We have the benefit of hindsight. In this article we will turn back the clock to see what made logical sense as the Industrial Age unfolded. We delve deeper into Myth 1 and the problems it creates today.

Industrial Age Myth: The best way to run a mine is to focus on cost certainty and manage people as if they are parts of a machine.  

The Industrial Age was a golden period of growth, expansion and productivity increase. The big idea in the early 20th century was Frederick Taylor’s Scientific Management principles of productivity.

Two quotes from Taylor illustrate the managerial thinking at the time.

[Images: two quotes from Frederick Taylor]

Order, structure, and discipline emanated from Taylor’s beliefs. Industrial giants like Henry Ford implemented the machine assembly line and production flow concepts into manufacturing. In line with this thinking was Ford’s “You can have any colour of car as long as it is black.” Certainty meant dealing with “known knowns” and working with proven cause & effect relationships.

The Industrial Age birthed statistics and statistical theory. The first control chart appeared in 1924. People schooled in Scientific Management developed the new method of statistical process control (SPC). However, it wasn’t successfully implemented in a business setting until the 1950s. Other cost certainty methods that followed included cost accounting, activity-based costing, inventory management, zero-based budgeting, material requirements planning (MRP).

Academic professors turned consultants chimed in with business research applying a case study approach. It didn’t take long before project managers were writing a business case complete with a benefit-cost analysis.

In conjunction with improving assembly line operations was the formal organisation of people. Academics and big consulting firms introduced an idea dating as far back as Plato – Division of Labour. A managerial class would separate decision making from the doing of work, a strategy visible in the institutions of church and military at the time. The schema took root and easily spread in a relatively stable, repeatable, and predictable work environment.[2]

An early adopter was General Motors, which implemented the divisional organisation in response to a car market demanding greater variety and choice. Cost accounting was used to calculate transfer pricing and keep the system coherent. This made sense because 85-90% of the value of an item sold could be attributed to variable costs (direct labour and raw material).[3] As engineering, financial, and marketing functions grew to satisfy the evolving market, by the late 1990s only 30-40% of costs were truly variable. However, management thinking stayed the same, and fixed overhead costs were allocated by various means. This started to skew decision making, but few noticed.
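To see how allocation can skew a decision, consider a stylised worked example in Python; the numbers are invented for illustration and do not come from the article or from GM.

# Two product lines share $60,000 of fixed overhead, allocated in proportion
# to direct labour hours, the classic full-costing convention.
products = {
    "Line A": dict(price=100, var_cost=55, units=1000, labour_hours=2.0),
    "Line B": dict(price=80,  var_cost=50, units=1000, labour_hours=3.0),
}
fixed_overhead = 60_000

total_hours = sum(p["units"] * p["labour_hours"] for p in products.values())
for name, p in products.items():
    revenue = p["price"] * p["units"]
    variable = p["var_cost"] * p["units"]
    allocated = fixed_overhead * (p["units"] * p["labour_hours"]) / total_hours
    print(f"{name}: contribution = {revenue - variable:,.0f}, "
          f"full-cost 'profit' = {revenue - variable - allocated:,.0f}")

Line B shows a full-cost “loss” of $6,000 once overhead is allocated, so a manager judged on full-cost numbers is tempted to drop it. Yet dropping Line B removes $30,000 of real contribution while the $60,000 of overhead remains, and the company goes from a $15,000 profit to a $15,000 loss. The allocation rule, not the economics, drove the decision.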

To keep the assembly line running smoothly, engineers, accountants, and process analysts closely tracked what went wrong. Control was about minimising deviations and stoppages such as machine breakdowns, equipment failures, and supply shortages. Failure analysis extended to the treatment of front-line workers. Processes were designed with humans performing “perfectly”, without errors. Mistakes and absenteeism were not tolerated and often led to punishment through loss of pay or outright dismissal.

In 1936 Charlie Chaplin wrote and directed the film “Modern Times.” While billed as a comedy, the film captured the painful working conditions shaped by the efficiencies of modern industrialisation.

The rise of unions

Counterbalancing the heavy-handed treatment of workers was the rise of labour unions. Work stoppage was the economic weapon. Not all strikes were confined to internal struggles between workers and management; politicians and even military troops were drawn into the picture. In Australia the 1949 coal miners strike saw 23,000 workers withdraw their labour between June 27 and August 15 of that year[4]. The dispute dominated Australian politics at the time and saw elements of revolution and counter-revolution which had been a rarity on Australian soil. Labour unrest shook the once stable work environment. The assumption that humans behaved in a predictable manner like machines was thrown into doubt.

The non-union managerial class was also subjected to command & control. HR produced job descriptions which included new terms such as roles & responsibilities, accountability, transparency, blameworthiness. Management by Objectives (MBO) was popularised by Peter Drucker in his 1954 book The Practice of Management. It surfaced as a system to measure managerial performance. Pressure was applied by setting annual KPI targets and stretch objectives for individuals aspiring to climb the corporate ladder.

TQM and PM

During this crisis period in the Industrial Age, workers, strengthened by union solidarity, reacted to being poorly treated as cogs of a cost-driven industrial machine and demanded changes in working conditions.

Total Quality Management emerged as one “yellow bubble” solution. Pioneers W. Edwards Deming, Joseph Juran, and Philip Crosby led the advancement of TQM. They are also credited with developing Project Management as a discipline. Progressive companies adopted TQM as their way of overseeing all activities and tasks needed to maintain a desired level of excellence. Instead of mainly looking inward for efficiency improvements, TQM promoted the idea of looking outward and achieving customer satisfaction.

Themes in Deming’s PDCA cycle were continuous improvement, waste reduction, and customer loyalty. Quality was measured in financial terms. Improvements in waste management and production control, and increased sales from happy customers, were calculated in terms of budgetary impact.

Juran applied the 80/20 Pareto Principle to prioritise quality issues. A major contribution was highlighting the human side of TQM. He stressed the importance of education, training, and understanding resistance to change.

Crosby’s philosophy was “do it right the first time”. He coined the term Zero Defects. Eliminate errors. Avoid time-consuming and costly failure fixes.

Many organisations did not get excited about TQM and saw it as a passing fad. They chose to remain entrenched in cost-certainty mode and focused their attention on finding more ways to reduce expenses. Consultants were more than willing to help, offering innovations such as unbundling, outsourcing, replacing labour with automation, and optimising supply chains.

Not everyone was in favour of Zero Defects. Detractors deemed the assumption that human error is avoidable unrealistic and unattainable. In the safety industry a similar assumption is that all injuries are preventable. The worry is the strain this places on worker performance and morale.

Not everyone was in favour of focusing on the customer. In 1976 a controversial idea emerged: that shareholders owned the firm and the true purpose of management was to maximise shareholder value. SVA[5] became the rallying cry for the CEOs and financial markets who would benefit most from the paradigm. A major proponent was Jack Welch while CEO of General Electric. Not quite calling it a myth, upon reflection in 2011 he called SVA “the dumbest idea in the world.”[6] He questioned why CEOs and their top managers receive massive incentives to focus most of their attention on the expectations market rather than on the real job of running the company and producing real products and services.

Lessons learned from the Industrial Age

Behind all ideas are good intentions. But unintended negative consequences follow as well. What results have been accomplished? What have we discovered and learned from the Industrial Age?

Mining Productivity

Australian mining experienced a resource boom in the Industrial Age. In the early 1960s, discoveries of new metals led to a resurgence of interest in Australia’s mineral resources. Production also increased and Australia became a major raw materials exporter, especially to Japan and Europe.

Today Australia is one of the world’s leading mineral resources nations. It is the world’s largest refiner of bauxite, producer of gem and industrial diamonds, lead and tantalum, and the mineral sands ilmenite, rutile and zircon. Other world rankings in production are: zinc (2nd); gold, iron ore and manganese ore (3rd); nickel, aluminium (4th); copper, silver, black coal (5th). [7]

It seems odd that Australia’s enviable position has been accomplished with productivity levels that have been trending downwards. According to Ernst & Young[8], capital productivity in Australia has fallen 45% since 2000. Perhaps that is because Australia hasn’t been alone in the worldwide decline: E&Y reported labour productivity in the South African gold sector dropping by 35% since 2007.

[Image: mining productivity trend graph]

These sobering findings are corroborated by McKinsey[9], which found that global mining productivity has decreased by 29% over the last decade. From 2014 to 2016, McKinsey’s MineLens showed a 2.8% per annum uptick in productivity, but productivity is still far below the level of 15 years ago.[10]

Employee engagement

Organisations consider employee engagement an important indicator of company health. Engaged employees offer their talents and energy to work efficiently and effectively. Actively disengaged workers, on the other hand, look for ways to ignore or damage the best interests of the organisation. Gallup has been measuring employee engagement across the world for many decades.

“Worldwide, the percentage of adults who work full time for an employer and are engaged at work — they are highly involved in and enthusiastic about their work and workplace — is just 15%.

“They imply a stunning amount of wasted potential, given that business units in the top quartile of Gallup’s global employee engagement database are 17% more productive and 21% more profitable than those in the bottom quartile”.

For Australia/New Zealand, the 2013 report identifies 24% of workers as highly engaged and 16% as actively disengaged.[11] In the 2017 survey the highly engaged number dropped to 19%.[12]

Compounding the employee engagement problem is anecdotal evidence that millennials do not see mining as a promising career. Jake Klein, CEO of Evolution Mining, stunned many attending the 2019 Future of Mining Conference in Sydney by revealing that there are only 25 mining engineers enrolled in Australian universities.[13] He sees the biggest challenge as making mining an attractive industry for young people.

Klein’s concern reinforces a view expressed by the World Economic Forum (WEF).[14] Business leaders say that attracting, managing and retaining a skilled workforce is their number one business challenge in the next five years. WEF research showed better benefits, more accessible savings plans, and guidance and technology tailored to individual needs would have a very positive impact on a workforce.

Despite the clear message, Myth 1 continues to play out today. Permanent employee numbers are contained or shrunk by using contracted labour and outsourcing (parts of a machine). Financial actions such as switching employer-employee shared pension plans from defined-benefit to market-based schemes enhance cost certainty and shift the risk of retirement-fund sufficiency from the company to the individual. When workers leave through corporate buy-out and early retirement programs, the labour cost savings are highlighted, but the non-monetary losses in tacit knowledge and experience are neglected.

Implications of running mines according to Myth 1

History has taught us that Myth 1 has created “wicked” problems for the mining industry. Wicked problems are difficult or impossible to solve because of incomplete, contradictory, and changing requirements that are often difficult to recognise.[15]

Some industry pundits believe that poor productivity and employee engagement are two sides of the same coin. Measuring systemic productivity while enforcing individual accountability injects disharmony into the organisation and reaps diminishing returns. How does this happen? Boston Consulting Group partner Yves Morieux[16] explains:

“…this drive for clarity and accountability triggers a counterproductive multiplication of interfaces, middle offices, coordinators that do not only mobilise people and resources, but that also add obstacles. And the more complicated the organisation, the more difficult it is to understand what is really happening. So we need summaries, proxies, reports, key performance indicators, metrics. So people put their energy in what can get measured, at the expense of cooperation. And as performance deteriorates, we add even more structure, process, systems. People spend their time in meetings, writing reports they have to do, undo and redo. Based on our analysis, teams in these organisations spend between 40 and 80 percent of their time wasting their time, but working harder and harder, longer and longer, on less and less value-adding activities. This is what is killing productivity, what makes people suffer at work. 

We need employees to cooperate, to trust their coworkers and managers. It is to take a risk, because you sacrifice the ultimate protection granted by objectively measurable individual performance. It is to make a super difference in the performance of others, with whom we are compared. It takes being stupid to cooperate, then. And people are not stupid; they don’t cooperate.” 

Safety in the Industrial Age

Ever wonder why Safety is a cost item in a budget? We hear platitudes that an organisation’s greatest asset is its employees. Yet instead of being an investment, they are entered as expenses on the Profit & Loss statement. No different from a replaceable part in a machine.

“Safety-I” was coined by Erik Hollnagel[17] to reflect the mechanistic treatment of humans in the Industrial Age. Safety is defined as the absence of negative events. Humans are seen as error prone, the focus is on what goes wrong, and the ideal target is Zero Harm, a logical extension of Zero Defects thinking.

Surrounded by scientific management principles, the beginnings of Safety as a practice intuitively mirrored the patterns of business and the avoidance of human failure. In 1931 Herbert Heinrich published his book “Industrial Accident Prevention, a Scientific Approach.”[18] The book claimed that 88 percent of all workplace accidents and injuries/illnesses are caused by “man-failure.” More famous is Heinrich’s Law: that in a workplace, for every accident that causes a major injury, there are 29 accidents that cause minor injuries and 300 accidents that cause no injuries. Alas, as Fred Manuele disclosed in his 2011 review, it’s a myth.[19]

Learning to let go

The Y-axis of the Life-cycle diagram is labelled “Utility of the Paradigm” for a good reason. A subsequent age doesn’t start from zero but is elevated by the previous age. That means we carry forward the valuable lessons and practices and adapt them to the next emerging Age. And just as important, we let go of the myths and fallacies of the old Age.

In the next article we examine the radical thinking in the Information Age. Scientific Management yields to Systems Thinking. An Engineering paradigm emerges. And a new myth is born.

Thank you for the feedback and enthusiastic show of support. In response to the interest, we are conducting 1-day Radical Innovation in Mining Management workshops on October 31 (Sydney) and November 8 (Brisbane). To be added to our invitation list, please contact Hendrik Lourens at hendrik@stratflow.com.au .

Written by Gary Wong and Hendrik Lourens

References

[1]       The Real Lessons From Kodak’s Decline, MIT Sloan Management Review, Summer 2016.

[2]       Freedom from command and control, John Seddon, Productivity Press, Kindle, 2005.

[3]       Profitability with no boundaries, Reza M. Pirashteh and Robert Fox, American Society for Quality, 2011.

[4]       Australian Coal Strike https://en.wikipedia.org/wiki/1949_Australian_coal_strike

[5]       Theory of the Firm: Managerial Behavior, Agency Cost and Ownership Structure, Michael Jensen and William Meckling, Journal of Financial Economics, 1976.

[6]       The Dumbest Idea In The World: Maximising Shareholder Value, Steve Denning, Forbes, Nov 2011.

[7]       History of Australia’s Minerals Industry. http://www.australianminesatlas.gov.au/history/index.html

[8]       Productivity in Mining: Now comes the hard part, Ernst & Young, 2016.

[9]       Productivity in Mining Operations: Reversing the downward trend, McKinsey, 2015.

[10]   Behind the mining productivity upswing: Technology enabled transformation, McKinsey, 2018.

[11]   State of the Global Workplace, Gallup, 2013.

[12]   State of the Global Workplace, Gallup, 2017.

[13]   Future of Mining Australia 2019, Jake Klein. https://www.youtube.com/watch?v=0cw0V30gmyk

[14]   Is this the secret to happy and engaged employees? WEF 2018.

[15]   Wicked Problem, Wikipedia.

[16]   Smart Rules: Six ways to get people to solve problems without you, Yves Morieux, Harvard Business Review, September 2011.

[17]   Safety I and Safety II: The Past and Future of Safety Management, Erik Hollnagel, 2014.

[18]   Industrial accident prevention, H.W. Heinrich, McGraw Hill, 1931

[19]   Reviewing Heinrich: Dislodging Two Myths From the Practice of Safety, Fred Manuele, Oct 2011, Professional Safety, www.asse.org

Radical Innovation in Mining Management: Introduction

The following article was published on 2019 May 06 at http://www.austmine.com.au/news/radical-innovation-in-mining-management-1  Austmine is the leading industry body for the Australian Mining Equipment, Technology and Services (METS) sector.

Introduction

Social license, Digital transformation, Safety, and Profitability – four issues that seem to be caught in a perpetual trade-off. Spend more time on one, and it reduces the attention given to the others. It’s a problem that impacts everyone in the organisation:

  • As an executive, you’re frustrated with events that unexpectedly emerge to disrupt production.
  • As a manager, you’re angry when corporate performance statistics don’t reflect the tireless effort required to keep things running locally.
  • As a supervisor, you’re frustrated with fault finding in people and blaming individuals for mediocre results that are beyond their control.
  • As an engineer, you’re disillusioned by past innovative programs that started with a bang and then either withered away or had the budget pulled. 
  • As a tradesperson, you can sense worsening mine conditions but feel powerless in voicing your concerns.

Have we reached a plateau in our ability to improve on each of these issues individually as well as collectively? Peter Senge claimed: “Today’s problems come from yesterday’s solutions”. In this first article, we introduce how yesterday’s solutions have led to three myths that control current mining thinking. In the following months, we will delve deeper into each myth and the problems they create today.

Based on 15 years of consulting experience we believe that radical innovation is essential to disrupt myth domination. “Radical” does not necessarily mean painful and agonising. It means being enlightened that the world has radically changed and the Mining industry needs to catch up.

Three myths that destroy mining innovation

Myth 1: The best way to run a mine is to focus on cost certainty and manage people as if they are parts of a machine.

Myth 2: Mine operations should be optimised from start to finish to produce the best results.

Myth 3: We can achieve social licence acceptance and safety aims within our current management paradigm by pursuing effective culture change.

Each myth began as a solution for a specific era. Besides introducing new thinking, an Age carries the best of the previous ages forward while replacing myths and fallacies with facts and evidence.

The evolution and implications of different Ages

The Industrial and Information Ages

The Industrial Age growth mindset in the early 20th century was fuelled by Scientific Management principles of productivity. The work environment was stable, certain, and predictable. However, it couldn’t last forever; humans revolted over their treatment as mere cogs in a machine. A crisis point was reached, resulting in declines in capital, labour, and material productivity.

Systems thinking and Human Factors, boosted by computer technology, offered new and improved alternatives. A popular solution called Business Process Reengineering succinctly captures the dominant engineering paradigm. This is the Information Age, with people, process, and technology as parts of a system. “The whole is equal to the sum of its parts.” Industry rallied behind “Faster, better, cheaper” in the pursuit of optimised efficiency. However, all is not well. We observe promising Information Age digital tools yielding negative impacts and making operations extremely complicated. As consultants, we have seen many attempts to optimise across the entire system to achieve efficiency. Some software packages are coded on this premise. However, systems thinkers like Russell Ackoff argue that such attempts actually decrease system capability. Eli Goldratt’s Theory of Constraints mathematically supports Ackoff’s claim.

Figure 1: The development of the Industrial, Information and Ecological Ages.

Implementation of new technologies has not been easy. In 2016 strategic business and technology advisor and best-selling author Bernard Marr wrote in Forbes.com that 25% of technology projects fail outright, 20-25% don’t show any return on investment, and 50% need massive re-working by the time they’re finished. In his experience, many projects fail not because of technology problems; in fact, 54% of IT project failures were attributed to poor management.

Change Management programs are often deployed as the lever to execute implementation because they focus on changing culture. The belief is that cause-and-effect relationships will apply to people as they do to mechanistic processes and technologies. Great, if valid. However, consulting firms (McKinsey, Connor, Kotter) have reported a dismal 70% failure rate for change management programs. Once again we have reached a crisis period and the decline of an Age.

The Ecological Age

In the Ecology Age, confusing dilemmas, ambiguous paradoxes, and diverse conflicts are natural occurrences. Mining has become a complex adaptive system. The unexpected emergence of new things means “the whole is greater than the sum of its parts”. For example, when “hot water is poured over dry coffee grounds, aroma as a new thing emerges”. The interaction of two ingredients creates something new. A deeper example happens inside our heads. “Billions of neurons in our brain interact in ways that we cannot fully understand to create a stream of consciousness.” People are no longer viewed as predictable cause-and-effect machines but as illogical, emotional decision-makers. Culture is not a lever but emerges as an outcome of people, process, and technology interacting.

The evolution of Safety through the Ages

Safety has gone through similar paradigm changes. In the Industrial Age, safety was defined as the absence of negative events. Humans are seen as error prone, the focus is on what goes wrong, and the ideal target is zero harm. The term “Safety-I” was coined by Erik Hollnagel to describe this thinking, which many organisations still follow.

In the Information Age, new schools of safety sprang to life. One posed humans as problems in a system that could be managed using safety policies, standards, rules, and compliance inspections. In the “Safety-II” view, humans are solutions, able to adapt performance to varying conditions in the work environment.

In the Ecology Age, we accept it’s human nature to be fallible; mistakes will be made.  Less emphasis is given to changing the behaviours of illogical, emotional decision-makers. Instead, emphasis is placed on influencing relationships and interactions and designing systems for imperfect humans. Like culture, safety is an emergent property of a complex adaptive system. Workers don’t create safety. They create the conditions that enable safety to emerge; they can also create the conditions that enable danger to emerge.

Social Licence as an Ecological problem

We can add Social Licence to Operate as another emergent property of a complex adaptive system. SLO involves not just the community in which the mine is situated but also employees, government (through regulation), and societal attitudes at large. In our conversations with mining managers, we hear the lament that they cannot free up the time to deal with ecological issues. They are also aware that in the Ecology Age the Internet, with its fast feedback loops, empowers people to connect socially and voice their concerns. Failure to pay sufficient attention may lead to a tense issue “going viral.” What do we do?

Dealing with Ecological problems

We need to create the conditions that free up time and high-level manpower to embark on new ways of working to deal with these ecological issues. To do this we need to understand the myths holding us back, stop doing much of what is considered best practice, and start doing things differently. Management’s role in this endeavour is critical.

Based on 15 years of experience and more than 85 mining interventions we believe this is possible.

We look forward to offering our Information and Ecology Age ideas and thoughts in the series of upcoming articles. We shall put Einstein’s quote to the challenge: “We cannot solve our problems at the level of thinking that caused them in the first place.”

Written by:

Gary Wong and Hendrik Lourens – Stratflow

Evolution of Safety

Yesterday I was pleased to speak at the Canadian Society of Safety Engineering (CSSE) Fraser Valley branch dinner. I chose to change the title from the Future of Safety to the Evolution of Safety. Slides are available in the Downloads or click here. The key messages in the four takeaways are listed below.

1. Treat workers not as problems to be managed but solutions to be harnessed.

Many systems have been designed with the expectation that humans will perform perfectly, like machines. It’s a consequence of the Systems Thinking era based on an Engineering paradigm. Because humans are error prone, we must be managed so that we don’t mess up the ideal flow of processes using technologies we are trained to operate.

Human & Organizational Performance (HOP) Principle #1 acknowledges that people are fallible. Even the best will make mistakes. Despite the perception that humans are the “weakest link in the chain”, harnessing our human intelligence will be critical for system resilience, the capacity to either detect or quickly recover from negative surprises.

As noted in the MIT Technology Review, “we’re seeing the rise of machines with agency, machines that are actors making decisions and taking actions autonomously…” That means things are going to get a lot more complex with machines driven by artificial intelligence algorithms. Smart devices behaving in isolation will create conflicting conditions that enable danger to emerge. Failure will occur when a tipping point is passed.

MIT Professor Nancy Leveson believes technology has advanced to such a point that the routine problem-solving methods engineers have long relied upon no longer suffice. As complexity increases within a system, linear root cause analysis approaches lose their effectiveness. Things can go catastrophically wrong even when every individual component is working precisely as its designers imagined. “It’s a matter of unsafe interactions among components,” she says. “We need stronger tools to keep up with the amount of complexity we want to build into our systems.” Leveson developed her insights into an approach called System-Theoretic Process Analysis (STPA), which has rapidly spread through private industries and the military. It would be prudent for Boeing to apply STPA in its 737 Max 8 investigation.

So why is it imperative that workers be seen as resourceful solutions?  Because complex systems will require controls that use the  immense power of the human brain to quickly recognize hazard patterns, make sense of bad situations created by ill-behaving machines, and  swiftly apply heuristics to prevent plunging into the Cynefin Chaotic domain.

2. When investigating, focus on learning from the gap between the normal deviation and the hazard, and avoid the blaming counterfactual.

If you read or hear someone say:
“they shouldn’t have…”
“they could have…”
“they failed to…”
“if only they had…”
it’s a counterfactual. In safety, counterfactuals are huge distractions because they focus on what didn’t happen. As Todd Conklin explains, it’s the gap between the black line (work-as-imagined) and the blue line (work-as-done). The wavy blue line indicates that a worker must adapt performance in response to varying conditions. The changes hopefully enable safety to emerge so that the job can be successfully completed. In the Safety-II view, this is deemed normal deviation. Our attention should not be on “what if” but on what actually happened.

The counterfactual provides an easy path for assigning blame. “If only Jose had done it this way, then the accident wouldn’t have happened.” Note to safety professionals engaged in accident investigations: don’t give decision makers bullets for blame but information for learning. The lessons for learning from failure lie in the gap between the blue line and the hazard line.

3. Be a storylistener and ask storytellers:
How can we get more safety stories like these, fewer stories like those?

I described the ability to generate 2D contour maps from safety stories told by the workforce. The WOW factor is that we can now visually see safety culture as an attitudinal map. We can plot a direction towards a safety vision and monitor our progress. Click here for more details.
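
As a rough illustration of the mapping idea only (not the actual tooling behind these maps), here is a minimal Python sketch that plots hypothetical story data points, each signified on two scales, and overlays a density contour so clusters of stories become visible. The axis names and every number are invented for illustration.

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# Hypothetical sketch: each safety story is signified on two 0-10 scales
# (x = compliance with rules, y = getting the job done). Values are invented.
rng = np.random.default_rng(42)
stories = np.vstack([
    rng.normal(loc=(3, 4), scale=1.0, size=(40, 2)),  # one cluster of stories
    rng.normal(loc=(7, 8), scale=0.8, size=(25, 2)),  # another cluster
]).clip(0, 10)

# Estimate story density across the map so contour lines can be drawn.
kde = gaussian_kde(stories.T)
xs, ys = np.mgrid[0:10:100j, 0:10:100j]
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

plt.contourf(xs, ys, density, levels=12, cmap="YlGn", alpha=0.6)
plt.scatter(stories[:, 0], stories[:, 1], s=12, c="black")  # each dot is a story
plt.xlabel("Compliance with rules")
plt.ylabel("Getting the job done")
plt.title("Hypothetical safety-story landscape")
plt.show()

Seen this way, the contour map is simply a picture of where the told stories cluster; the value comes from asking what would move the clusters.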

Stories are powerful. Giving the worker a voice to be heard is an effective form of employee engagement. How safety professionals use the map to solve safety issues is another matter. Will it be Ego or Eco? It depends. Ego says I must find the answer. Eco says we can find the answer.

Ego thrives in hierarchy, an organizational  structure adopted from the Church and Military. It works in the Order system, the Obvious and Complicated domains of the Cynefin Framework. Just do it. Or get a bunch of experts together and direct them to come up with viable options. Then make your choice.

Safety culture resides in the Cynefin Complex domain. No one person is in charge. Culture emerges from the relationships and interactions between people, ideas, events, and as noted above, machines driven by AI algorithms. Eco thrives on diversity, collaboration, and co-evolution of the system.

An emerging role for safety professionals is helping Ego-driven decision makers understand they cannot control human behaviour in a complex adaptive system. What they control are the system constraints imposed as safety policies, standards, rules. They also set direction when expressing they want to hear “more safety stories like these, fewer stories like those.”

And lest we forget, it’s not all about the workers at the front line. Decision makers and safety professionals are also storytellers. What safety stories are you willing to share? Where would your stories appear as dots on the safety culture map?

4. Better to be a chef and not a recipe follower.

If Safety had a cookbook, it would be full of Safety Science recipes and an accumulation of hints and tips gathered over a century of practice. It would be a mix of still useful, questionable (pseudoscience), emerging, and recipes given myth status by Carsten Busch.

In the Cynefin Complex and Chaotic domains, there are no recipes to follow. So we rely on heuristics to make decisions. Some are intuitive and based on past successes – “It’s worked before so I’ll do it again.” They work until they don’t, because conditions that existed in the past no longer hold true. More resilient heuristics are backed by natural science laws and principles, so they withstand the test of time.

By knowing  the art and principles of cooking, a chef accepts the challenge of ambiguity and can adapt to unanticipated conditions such as missing ingredients, wrong equipment, last-minute diet restrictions, and so on.

It seems logical that safety professionals would want to be chefs. That’s why I’m intrigued that a highlight of the study An ethnography of the safety professional’s dilemma: Safety work or the safety of work? is that “Safety professionals do not leverage safety science to inform their practice.”
Is it worth having a conversation about, even collecting a few stories? Or are safety professionals too time-strapped doing safety work?

The Future of Change Management

At the Organizational Change Network on LinkedIn, Ron Leeman posted an article on the continuing argument that traditional Change Management is “old skool” and needs a re-think, an overhaul, some fresh ideas, etc. He researched CM methods currently being offered by a handful of leading consulting organizations. His conclusion was that, apart from how new digital tools can help with some aspects of Change Management, there isn’t a lot of new thinking out there; rather, it looks like a regurgitation and/or renaming of previous approaches.
I replied: what if there were an emerging change practice that wasn’t a regurgitation but something quite different, as per the following:
  • What if a change practice emerged that treated all organizations as complex adaptive systems? It would mean escaping the dominant human-imposed Engineering paradigm (faster, better, cheaper)  and setting aside age-old tools such as reductionism, benchmarking, future state visioning, cause & effect analysis, linear road mapping, surveys, and yes, even metrics to a certain degree.
  • The change practice would be built on an Ecological paradigm applying ideas and words such as Anthro-complexity, Cynefin, Liminality, Morphogenesis, enabling constraints, managing the evolutionary potential of the Present.
  • The change practice would be informed by Natural science – what we have learned from observing Nature in action: Messy coherence, Homeostasis, Natural Resilience, Mutating containers, Exaptation, Biomimicry.
  • The change practice would leverage real world Complexity phenomena: Emergence, Diversity, Viral Butterfly Effect, Non-linear Tipping Point, Self-organization, Stigmergy, Pareto Power Law Risk (fat tail).
  • The change practice would recognize people are Homo Narrans: Dialogic sense-making, Distributed ethnography, narrative fragments, Thick Data, Disintermediation.
  • The change practice would understand the concept of Homo Faber – use of tools to shape a complex environment: Distributed cognition, Chaordic teaming, Safe-to-fail experiments, Weak signal detection, Obliquity, Asymmetric co-evolution, Scaffolding, Nudging, Fractal management.
  • The change practice would recognize humans like to play creative games (Homo Ludens): Pattern recognition, Strange attractors, Non-hypothesis abduction, Wicked problems, Serendipity.
  • The change practice would be pragmatic: Conceptual blending, Adjacent Possible, Satisficing, Heuristics, Phronesis, Praxis.
As many of you know, what I outlined was the complexity-based approach to implement change during unpredictable, constantly changing times.
As Dave Snowden explained, you can view the real world in terms of 3 basic systems: Order, Complex, Chaotic. The 20th century was dominated by Order system thinking. Many change practices are  designed for a work environment that is stable, consistent, and where cause & effect relationships exist. The future is deemed predictable and possibly extends from past history. The popular image of jigsaw puzzle parts being put together is apropos. If a change project fits in this environment, one can confidently carry on using a linear step-by-step command and control mindset.
In a complex system the puzzle parts are constantly moving or even missing. Furthermore, a complex adaptive system will see humans adapting by evolving relationships and adjusting emotional interactions. If your change project faces uncertainty, unpredictability, ambiguity, think twice about using Order system CM tools. They really aren’t built for uncontrollable turbulence and volatility.
In 2000, Stephen Hawking stated that the new century would be the Age of Complexity. That was nearly two decades ago. The time is ripe, perhaps overdue, to update the Future of Change Management.

The Future of Safety

Today I had the privilege and pleasure of speaking at the BCCGA AGM.  A copy of the slides presented can be downloaded here. In my conclusion I posed 4 questions for the BCCGA and its member organizations to consider.

1. What paradigm(s) should our safety vision be based upon?

The evolution of safety thinking can be viewed through 4 Ages.

The recurring theme is about how Humans were treated as new technologies were implemented into business practices. It’s logical that the changes in safety thinking mirror the evolution of Business Practices. The Ages of Technology, Human Factors, and Safety Management are rooted in an Engineering paradigm.

It’s Systems Thinking with distinct parts: People, Process, Technology. Treat them separately and then put them together to deliver a Strategy. Reductionism works well when the system is stable, consistent, and relatively fixed by constraints imposed by humans (e.g., regulations, policies, standards, rules). However, in addition to ORDER, there are two other systems in the real world: COMPLEX and CHAOTIC. These two are constantly changing, so a reductionistic approach is not appropriate. One must work holistically with an Ecological paradigm.

This diagram from the Cynefin Centre shows the relative sizing of the 3 systems. Complexity is by far the largest and continues to grow. All organizations are complex adaptive systems. A worthy safety vision must include the Age of Cognitive Complexity and view Safety as an emergent property of a complex adaptive system. The different thinking means rules don’t create safety but create the conditions that enable safety to emerge. Now we can understand why piling on more and more rules can lead to cognitive overload in workers and enable danger, not safety, to emerge.

2. How should we treat workers – as problems to be managed or solutions to be harnessed?

The Age of Technology and Age of Human Factors treated workers as problems – as cogs in a machine and as hazards to be controlled. The Age of Safety Management view recognizes that rules cannot cover every situation. Variability isn’t a threat but a necessity. We need to trust that humans always try to do what they think is right in the situation. The Age of Cognitive Complexity appreciates that humans think differently than logical information-processing machines in an Engineering paradigm. Humans are not rational thinkers; decisions are based on emotional reactions & heuristic shortcuts. As storytellers, people can articulate thick data that a typical report is unable to provide.  As solution providers, workers can call upon tacit knowledge – difficult to transfer to another person by means of writing it down or verbalizing it. Workers who feel like cogs or hazards tend to stay within themselves for fear of punishment. Knowledge is volunteered; never conscripted.

3. What safety heuristics can we share?

While Best Practices manuals are beneficial, heuristics play on a bigger stage when it comes to decisions. Humans make 95% of their decisions using heuristics. Heuristics are mental shortcuts that help people make quick, satisfactory but not perfect decisions.

They are the rules of thumb that Masters pass on to their Apprentices. Organizations ought to have a means to collect Safety-II success stories and use pattern recognition tools. Heuristics that emerge can be distributed to Masters for accuracy scrutiny.
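
As a very rough illustration of what a pattern-recognition pass over collected stories might look like (far simpler than dedicated narrative tooling), the Python sketch below counts recurring two-word phrases across a few invented Safety-II stories so that candidate rules of thumb can be put in front of Masters for scrutiny. The stories, and any phrases that surface, are made up.

from collections import Counter
import re

# Hypothetical Safety-II success stories; invented for illustration only.
stories = [
    "Before lifting we walked the route and checked overhead lines",
    "Checked overhead clearance before lifting the load in high wind",
    "We paused the job and walked the route when the plan changed",
]

def bigrams(text):
    """Yield consecutive two-word phrases from a story."""
    words = re.findall(r"[a-z]+", text.lower())
    return zip(words, words[1:])

counts = Counter(pair for story in stories for pair in bigrams(story))

# Phrases told more than once hint at shared rules of thumb worth reviewing.
for (w1, w2), n in counts.most_common(5):
    if n > 1:
        print(f"candidate heuristic phrase: '{w1} {w2}' (appears {n} times)")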

4. How can we get more safety stories like these, fewer stories like those?

This question pertains to a new way of shaping a safety vision through the use of narratives (stories, pictures, voice recordings, drawings, sketches, etc.)

Narratives are converted into data points to generate a 2D contour map or fitness landscape.
Each dot is a story and seen together they form patterns. The map shows the general direction we want to head – top right corner (High compliance with rules & High level of getting the job done). Clearly we want more safety stories in the Green area. We also want fewer in the Red and Brown areas. This is ATTITUDE mapping at a level far deeper than observable BEHAVIOUR. Here’s the rub: if we try to go directly for the top right corner, we won’t get there. Instead we head for an Adjacent Possible.
We get people to tell more stories here, fewer there  by changing a human constraint. It might be loosening a controlling constraint like a rule or practice. It could also be introducing an enabling constraint like a new tool or process.
We gather more stories and monitor how the clusters are changing in real time. The evolving landscape maps a new Present state – a new starting point. We then change another constraint. Since we can’t predict outcomes, both positive and unintended negative consequences might emerge. We accelerate the positives and dampen the negatives. In essence we co-evolve our way to the top right corner of the map. This is how we shape our Safety Culture.
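
To make the monitoring step concrete, here is a small hypothetical Python sketch (again, not the actual mapping software) that compares the centre of gravity of the story dots before and after a constraint change and reports whether the cluster has drifted toward the desired top right corner. The axis meanings and all numbers are invented.

import numpy as np

def centroid(stories):
    """Centre of gravity of story dots on the 2D map (compliance, job done)."""
    return stories.mean(axis=0)

def drift_towards(before, after, target=np.array([10.0, 10.0])):
    """Positive value = the story cluster moved closer to the target corner."""
    return (np.linalg.norm(centroid(before) - target)
            - np.linalg.norm(centroid(after) - target))

rng = np.random.default_rng(7)
# Stories gathered before and after loosening one rule (the constraint change).
stories_before = rng.normal(loc=(4, 5), scale=1.0, size=(50, 2)).clip(0, 10)
stories_after = rng.normal(loc=(5, 6), scale=1.0, size=(50, 2)).clip(0, 10)

print("Drift toward top-right corner:",
      round(drift_towards(stories_before, stories_after), 2))

A positive drift would suggest the nudge is worth accelerating; a negative one is the cue to dampen or shut the change down.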

7 Implications of Complexity for Safety

One of my favourite articles is The Complexity of Failure written by Sidney Dekker, Paul Cilliers, and Jan-Hendrik Hofmeyr. In this posting I’d like to shed more light on the contributions of Paul Cilliers.

Professor Cilliers was a pioneering thinker on complexity working across both the humanities and the sciences. In 1998 he published Complexity and Postmodernism: Understanding Complex Systems which offered implications of complexity theory for our understanding of biological and social systems. Sadly he suddenly passed away in 2011 at the much too early age of 55 due to a massive brain hemorrhage.

My spark for writing comes from a blog recently penned by a complexity colleague, Sonja Blignaut. I am following her spadework by exploring the implications of complexity for safety. Cilliers’ original text is in italics.

  1. Since the nature of a complex organization is determined by the interaction between its members, relationships are fundamental. This does not mean that everybody must be nice to each other; on the contrary. For example, for self-organization to take place, some form of competition is a requirement (Cilliers, 1998: 94-5). The point is merely that things happen during interaction, not in isolation.
  • Because humans are natural storytellers, stories are a widely used form of interaction among fellow workers, supervisors, management, and executives. We need to pay attention to the stories told about daily experiences since they provide a strong signal of the present safety culture.
  • We should devote less time trying to change people and their behaviour and more time building relationships. Despite what psychometric profiling offers, humans are too emotional and unpredictable to accurately figure out. In my case, I am not a trained psychologist, so my dabbling in trying to change how people tick might be dangerous, on the edge of practising pseudoscience. I prefer to stay with the natural sciences (viz., physics, biology), the understanding of phenomena in Nature which have evolved over thousands of years.
  • If two workers are in conflict, don’t demand that they both smarten up. Instead, change the nature of the relationship so that their interactions are different or even extinguished. Simple examples are changing the task or moving one to another crew.
  • Interactions go beyond people. Non-human agents include machines, ideas (rules, policies, regs) and events (meetings, incidents). A worker following a safety rule can create a condition that enables safety to emerge. Too many safety rules can overwhelm and frustrate a worker, enabling danger to emerge.

2. Complex organizations are open systems. This means that a great deal of energy and information flows through them, and that a stable state is not desirable.

  • A company’s safety management system (SMS) is a closed system.  In the idealistic SMS world,  stability, certainty, and predictability are the norms. If a deviation occurs, it needs to be controlled and managed. Within the fixed boundaries, we apply reductionistic thinking and place information into a number of safety categories, typically ranging from 4 to 10. An organizational metaphor is sorting solid LEGO bricks under different labels.
    In an open system, it’s different. Think of boundary-less fog and irreducible mayonnaise. If you outsource to a contractor or partner with an external supplier, how open is your SMS? Will you insist on their compliance or draw borders between firms? Do their SMS safety categories blend with yours?
  • All organisations are complex adaptive systems. Adaptation means not lagging behind and plunging into chaotic fire-fighting. It means looking ahead and not only trying to avoid things going wrong, but also trying to ensure that they go right. In the field, workers confronted by unexpected, varying conditions will adjust and adapt their performance to enable success (and safety) to emerge.
  • When field adjustments occasionally fail, the result is a new learning to be shared as a story. This is also why a stable state is not desirable. In a stable state, very little learning is necessary. You just repeat doing what you already know.

3. Being open more importantly also means that the boundaries of the organization are not clearly defined. Statements of “mission” and “vision” are often attempts to define the borders, and may work to the detriment of the organization if taken too literally. A vital organization interacts with the environment and other organizations. This may (or may not) lead to big changes in the way the organization understands itself. In short, no organization can be understood independently of its context.

  • Mission and Vision statements are helpful in setting direction. A vector, North Arrow, if you like. They become detrimental if communicated as some idealistic future end state the organization must achieve.
  • Being open is different from “thinking out of the box” because there really is no box to start with. It’s a contextual connection of relationships with other organizations. It’s also foggy because some organizations are hidden. You can impact organizations that you don’t even know about and, conversely, their actions, unbeknownst to you, can constrain you.
    The smart play is to be mindful by staying focused on the Present and monitor desirable and undesirable outcomes as they emerge.

4. Along with the context, the history of an organization co-determines its nature. Two similar-looking organizations with different histories are not the same. Such histories do not consist of the recounting of a number of specific, significant events. The history of an organization is contained in all the individual little interactions that take place all the time, distributed throughout the system.

  • Don’t think about creating a new safety mission or vision by starting with a blank page, a clean sheet, a greenfield.  The organization has history that cannot be erased. The Past should be honoured, not forgotten.
  • Conduct an ongoing challenge of best practices and Life-saving rules. Remember the historical reasons why these were first installed. Then question if these reasons remain valid.
  • Be aware of the part History plays when rolling out a safety initiative across an organization.
    • If it’s something that everyone genuinely agrees to and wants, then just clone & replicate. Aggregation is the corollary of reductionism and it is the common approach to both scaling and integration. Liken it to putting things into org boxes and then fitting them together like a jigsaw. The whole is equal to the sum of its parts.
    • But what if the initiative is controversial? Concerns are voiced, pushback is felt, resistance is real. Then we’re facing complexity, where the properties of the safety system as a whole are not the sum of the parts but are unique to the system as a whole.
      If we want to scale capabilities we can’t just add them together. We need to pay attention to history and understand reactions like “It won’t work here”, “We tried that before”, “Oh no! Not again!”
      The change method is not to clone & replicate.  Start by honouring local context. Then decompose into stories to make sense of the culture. Discover what attracts people to do what they do. Recombine to create a mutually coherent solution.

5. Unpredictable and novel characteristics may emerge from an organization. These may or may not be desirable, but they are not by definition an indication of malfunctioning. For example, a totally unexpected loss of interest in a well-established product may emerge. Management may not understand what caused it, but it should not be surprising that such things are possible. Novel features can, on the other hand, be extremely beneficial. They should not be suppressed because they were not anticipated.

  • In the world of safety, failures are unpredictable and undesirable. They emerge when a hidden tipping point is reached.
    As part of an Emergency Preparedness plan, recovery crews with well-defined roles are designated. Their job is to fix the system as quickly as possible and safely restore it to its previous stable state.
  • Serendipity is an unintended but highly desirable consequence. This implies an organization should have an Opportunity crew ready to activate. Their job is to explore the safety opportunity, discover new patterns which may lead to a new solution, and exploit their benefits.
    At a tactical level, the new solution may be a better way of achieving the Mission and Vision. In the same direction but a different path or route.
    At a strategic level, the huge implication is that new opportunity may lead to a better future state than the existing carefully crafted, well-intentioned one. Decision-makers are faced with a dilemma: do we stay the course or will we adapt and change our vector?
  • Avoid introducing novel safety initiatives as big events kicked off with a major announcement. These tend to breed cynicism especially if the company history includes past blemished efforts. Novelty means you honestly don’t know what the outcomes will be since it will be a new experience to those you know (identified stakeholders) and those you don’t know in the foggy network.
    Launch as a small experiment.
    If desirable consequences are observed, accelerate the impact by widening the scope.
    If unintended negative consequences emerge, quickly dampen the impact or even shut it down.
    As noted in (2), constructively de-stabilize the system in order to learn.

6. Because of the nonlinearity of the interactions, small causes can have large effects. The reverse is, of course, also true. The point is that the magnitude of the outcome is not only determined by the size of the cause, but also by the context and by the history of the system. This is another way of saying that we should be prepared for the unexpected. It also implies that we have to be very careful. Something we may think to be insignificant (a casual remark, a joke, a tone of voice) may change everything. Conversely, the grand five-year plan, the result of huge effort, may retrospectively turn out to be meaningless. This is not an argument against proper planning; we have to plan. The point is just that we cannot predict the outcome of a certain cause with absolute clarity.

  • The Butterfly effect is a phenomenon of a complex adaptive system. I’m sure many blog writers like myself are hoping that our safetydifferently cause will go viral, “cross the chasm”, and be adopted by the majority. Sonja in her blog refers to a small rudder that determines the direction of even the largest ship. Perhaps that’s what we are: trimtabs!
  • On the negative side, think of a time when an elected official or CEO made a casual remark about a safety disaster only to have it go viral and backfire. In the 2010 Deepwater Horizon disaster, then-CEO Tony Hayward called the amount of oil and dispersant “relatively tiny” in comparison with the “very big ocean”. Hayward’s involvement has left him a highly controversial public figure.
  • Question: Could a long-term safety plan to progress through the linear stages of a Safety Culture Maturity model be a candidate as a meaningless five-year plan?
    If a company conducts an employee early retirement or buy-out program, does it regress and fall down a stage or two?
    If a company deploys external contractors with high turnover, does it ever get off the bottom rung?
    Instead of a linear progression model, stay in the Present and listen to the stories internal and external workers are telling. With the safety Vision in mind, ask what can we do to hear more stories like these, fewer stories like those.
    As the stories change, so will the safety culture.  Proper planning is launching small experiments to shape the culture.

7. Complex organizations cannot thrive when there is too much central control. This certainly does not imply that there should be no control, but rather that control should be distributed throughout the system. One should not go overboard with the notions of self-organization and distributed control. This can be an excuse not to accept the responsibility for decisions when firm decisions are demanded by the context. A good example here is the fact that managers are often keen to “distribute” the responsibility when there are unpopular decisions to be made—like retrenchments—but keen to centralize decisions when they are popular.

  • I’ve noticed safety professionals are frequent candidates for organization pendulum swings. One day you’re in Corporate Safety. Then an accident occurs and in the ensuing investigation a recommendation is made to move you into the field to be closer to the action. Later a new Director of Safety is appointed and she chooses to centralize Safety.
    Pendulum swings are what Robert Fritz calls Corporate Tides, the natural ebb and flow of org structure evolution.
  • Central versus distributed control changes are more about governance and audit than about workflow. No matter what control mechanism is in vogue, it should enable stigmergic behaviour, the natural forming of network clusters to share knowledge, processes, and practices.
  • In a complex adaptive system, each worker is an autonomous decision-maker, a solution not a problem. Decisions made are based on information at hand (aka tacit knowledge) and if not available, knowing who, where, how to access it. Every worker has a knowledge cluster in the network. A safety professional positioned in the field can mean quicker access but more importantly, stronger in-person interactions. This doesn’t discount a person in Head Office who has a trusting relationship from being a “go to” guy. Today’s video conferencing tools can place the Corp Safety person virtually on site in a matter of minutes.
Thanks, Sonja. Thanks, Paul.
Note: If you have any comments, I would appreciate it if you would post them at safetydifferently.com.

Safety Differently

My thanks to Peter Caulfield for interviewing me and writing an article in the Journal of Commerce on a different view of safety.


Veteran Vancouver engineer and consultant Gary Wong says the safety industry needs to reexamine its goals and how to accomplish them if it wants to keep workers safe and at the same time make them productive.

Wong’s approach, called Safety Differently, is based on what he says is a more realistic take on what goes on in the workplace.

“Industry standards and practices typically evolve based on what we learn from failures,” says Wong. “But evolution in the safety industry has been slow and continues to follow the old idea that safety is only the absence of people getting hurt.”

That approach, says Wong, is based on the belief that humans must be controlled with compliance rules and procedures.

“If an accident occurs, we automatically look for the people to blame and then punish them through discipline or termination,” Wong says. “Experts today promote the idealistic goal of zero harm, so it isn’t surprising workers are confused if a safety dilemma arises.”

Safety Differently on the other hand credits workers for getting things right, which he says they do most of the time.

“Safety Differently sees people as the solution and safety as an ethical responsibility,” says Wong. “It recognizes that safety is not something that is created, but emerges out of a complex adaptive system.”

When facing an unexpected change, people will adjust their actions accordingly, he says. In most cases, their adjustment will keep them safe.

But an unexpected change can also be dangerous, and, if a tipping point is reached, an incident can happen.

“Safety Differently focuses on hidden non-linear tipping point signals and how humans sense impending danger,” Wong says.

“It boosts the capacity of people to handle their activities safely and successfully under different conditions.”

Ron Gantt, vice-president of SCM Safety Inc. in San Ramon, Calif., says there is a big difference between Safety Differently and the old way of doing safety.

“The old safety model focuses on regulations and takes an adversarial approach,” Gantt says. “Safety Differently, on the other hand, is more collaborative, with more worker participation in finding solutions that prevent accidents.”

Safety Differently is based on three principles, Gantt says.

“First, it is a forward-looking, predictive tool,” he says.

“It looks ahead to prevent accidents in the future, not backward at accidents that happened in the past. Its purpose is to build the capacity to be successful from now on and as conditions change.”

Safety Differently’s second operating principle is that people are the solution, not the problem.

“People are instinctive risk managers and they have an innate ability for creative problem-solving,” Gantt says. “Let’s trust them to do the right thing. Unfortunately, there’s not a lot of trust in the old safety model.”

Third, the people at the top of an organization should view safety as an ethical responsibility.

“They need to be curious about what their employees want and make an effort to satisfy them,” Gantt says.
Safety Differently is needed, he says, because the world is becoming more interdependent and complex and small changes can have huge effects.

Support for Safety Differently is growing, he adds.

“Many safety professionals are frustrated with the old way of doing things,” Gantt says.

At the same time, there is resistance from people and groups with a vested interest in maintaining the status quo.

“They are likely to say that the way to reduce the number of workplace injuries and deaths is to keep things the old way but to try harder,” he says.

Erik Hollnagel, a Danish academic and expert in system safety and human reliability analysis, advocates the application of “synesis to safety.” The term means the same thing as synthesis, or bringing together.

“The effort to ensure that work goes well and that the number of acceptable outcomes is as high as possible requires a unification of priorities, perspectives and practices,” says Hollnagel.

“Synesis brings together all these practices to produce outcomes that satisfy more than one priority and even reconciles multiple priorities.”

Many sectors of the economy conflate safety and quality or safety and productivity, Hollnagel says.

“We can look at a process or work situation from a safety point of view, from a quality point of view or from a productivity point of view,” Hollnagel says.

“But we should keep in mind that any individual point of view reveals only part of what is going on and that it is necessary to understand what is going on as a whole.”

Using Cynefin to publish a book

It’s been some time since I last blogged on my website. It’s not because I’ve grown tired of complexity and safety; it’s mainly due to my involvement with friends in publishing a book about an amazing man who dedicated over 50 years to the University of British Columbia campus. The target was achieved: The Age of Walter Gage: How One Canadian Shaped the Lives of Thousands. This particular blog is not about the book but about how Cynefin dynamics and cadence were put to good use.

When the book idea took hold in early 2016, it wasn’t a surprise that we started in the Cynefin Complicated domain. We certainly did not qualify as experts in producing a book, but as “expert” engineers schooled in systems thinking, we all had a propensity to set a desired future state target and build a project plan by linearly working backwards. We at least were cognizant we needed the right set of talent and skills – writing, photo compilation, book editing, publication, distribution. The first milestone on the roadmap was finding a book publishing firm that would assume these activities in their entirety. Then we could manage the project in the Complicated domain using a “waterfall” approach.

I volunteered to build a companion website (open network platform) to collect stories (narrative research). My blogging efforts would focus on engaging storytellers and spreading the news about our Walter Gage book project. We literally had no clue who had stories and how many there were. All that we knew was that time was not on our side so there was an urgency to contact storytellers before life took its natural toll.

The prompt question for stories was simple: “a personal or professional experience that sheds light on how Walter Gage impacted you.” While written stories were requested, we did receive other narrative fragments – a voice recording, photos and letters.

Could I have signified the stories with triads and dyads to later search for patterns? Yes, but  it would have required team education and, of course, more work (probably unappreciated) by storytellers. Instead, we chose to rely on the hired author’s vast experience to read the stories and extract themes worth highlighting in the book.

While I was busy gathering stories and narrative fragments, other team members were approaching several publishers with our book idea. We were told our pitch was for a noble and commendable cause, but nobody signed on. We learned that our “business case” did not provide sufficient ROI as a money-making opportunity.

Drat. Our path was broken. The roadmap led us to a dead end. Being resilient, we shifted into the above diagram’s “Yellow loop” to reset our thinking. We decided to deploy a self-publishing strategy and search for resources who could help us make our book a reality. It also meant more work on our part. It was intriguing to observe the team’s need to “self-organize.” We divided into 2 sub-teams – Book Creation and Marketing. Was there a concern for the silo effect? Yes, but like physical silos on a farm, which are ventilated, we continued to meet often as an overall team to enable venting to take place.

Due to our lack of knowledge and practical experience, I knew our cadence would be between Cynefin Complicated and Complex domains (the “Blue loop”). Whenever a totally unexpected unintended consequence emerged, we would move into the Complex domain. With the Engineer’s disposition to immediately “fix” a problem, patience was necessary to make sense of outcomes and explore options. BTW, not all consequences were negative. One UBC grad came forth and surprised us with a major donation. Serendipity at its finest!

I introduced different software tools to the team. Some worked, some didn’t. I opened a Trello board to track our progress under the 2 sub-teams. It was great for storing documents and having them available at a meeting with a couple of clicks. However, I ran into objections regarding too many email notifications being received. I also learned that not all team members wanted the full picture; some were just happy to do their tasks. I eventually removed half the team from the board, with the balance remaining on the app to stay abreast. Chalk it up as a safe-to-fail experiment.

Our primary online communication mode was email, with all its pros and cons. “Reply to All” messages became problematic. One time we had a thread with over 72 responses. Talk about being on the Obvious/Chaotic boundary with a failure looking for a place to happen! Attachments were easily lost in the long threads. Fortunately, with Trello I was able to access them quickly and send them to members, as a separate new email of course.

“Email tag” had me thinking of introducing Slack to simplify communication but my Trello discovery led me to a “Don’t even think about it” conclusion. When navigating complexity, we can’t control human behaviour but can only influence the relationships and interactions amongst team members. In this case, I chose not to drop in Slack as a catalyst which would have certainly disrupted communication patterns but, who knows, maybe enable worse patterns to emerge.

We held two “by invitation only” project celebration events.  Planning was autonomic: Let’s issue invitations via email. After all, if you’re good with a hammer, everything looks like a nail. Hmm, if there’s a “best practice” in the Obvious domain, email tops the list.

Thankfully I was able to influence the team to go with Evite.com. Its messaging features enabled us to leverage feedback loops, a key phenomenon of complex systems. One attendee even went a step further by posting photos of the event on evite.com for everyone to enjoy. (Note to self: Use evite.com to manage the next class reunion instead of personal email account.)

We have our official book launch tomorrow, Feb 15th. The beginning of the end. Or perhaps the end of the beginning, since book promotion and marketing now ramp up. Either way, I plan to invest more time pushing the boundaries on complexity and safety, from a natural sciences perspective.

Why a Complexity-based Safety Audit makes sense

Imagine you work in a company with a good safety record. By “good”, you are in the upper quartile as per the benchmarking stats in your industry. Things were rolling along nicely until this past year. There was a steep increase in failures, which has led to concerns over the safety culture.

Historically there have been 2 safety-related events, but last year there were 10. Accident investigation reports show it’s not one category but several: Bodily reaction and exertion, Contact with equipment, Misuse of hazardous materials, Falls and falling objects. Fortunately there were no fatalities; most were classified as medical aids, but one resulted in a serious injury. Three medical aid injuries were from contacting moving equipment, two were related to improper tool and glove use. The serious injury was due to a worker falling off a ladder.

What upsets you is that the pre-job briefing did not identify the correct glove or the proper use of hazardous materials. You also read the near-miss incident reports and heard disturbing rumours from the grapevine that some recent close calls went unreported. Something needs to be done, but what should you do?

One option is to do a safety audit. It will be highly visible and show executives and workers you mean business.  Phase 1 will consist of conducting an assessment and developing action plans to close any performance gaps. The gaps typically concentrate on strengthening safety robustness – how well practices follow safety policies, systems, standards, regulations, rules to avoid known failures. Phase 2 will implement the action plans to ensure that actions are being completed with quality and in a timely manner. A survey will gauge worker response. The safety audit project will end with a report that details the completion of the actions and observations on how the organization has responded to the implementation of the plan.

For optics reasons, you are considering hiring an external consultant with safety expertise. This expert ideally would know what to look for and, through interviews and field observations, pinpoint root causes. Action plans will be formulated to close the gaps. If done carefully, no blame will be attached to anybody. To ensure no one person or group is singled out, any subsequent compliance training and testing will be given to all employees. Assuming all goes well, you can turn the page, close the chapter, and march on assuming all is well. Or is it?

You hesitate because you’ve experienced safety audits before. Yes, there are short-term improvements (Hawthorne effect?) but eventually you noticed that people drifted back to old habits and patterns. Failure (personal injury or damage to equipment, tools, facilities) didn’t happen until years later, well after all the audit hubbub had dissipated. A bit of “what-iffing” is making you pause about going down the safety audit path again:

  1. What if the external safety consultants are trapped by their expertise because they already believe they have the solution and see  the job as implementing their solution and making it work? That is, what if they are great at using a hammer and therefore see everything as a nail, including a screw?
  2. What if the safety audit is built around a position that is the consultant’s ideal future state but not ours?
  3. What if the survey questionnaire is designed to validate what the safety consultants have seen in the past?
  4. What if front-line workers are reluctant to answer questions during interviews out of a feeling of being put on trial, fear of being blamed, or, worse, of being subjects in a perceived witch hunt?
  5. What if safety personnel, supervisors, managers, and executives are reluctant to answer questions during interviews or complete survey questionnaires for fear of being held accountable for failures under their watch?
  6. What if employees feel it’s very unsettling to have someone looking over their shoulders recording field observations? What if the union complains because it’s deemed a regression to the Scientific Management era (viz., Charlie Chaplin’s movie ‘Modern Times’)?
  7. What if the performance gaps identified are measured against Safety Management System (SMS) outcomes that are difficult to quantify (e.g., All personnel must report near-miss incidents at all times)?
  8. What if we develop an action plan and during implementation realize the assumptions made about the future are wrong?
  9. What if during implementation a better solution emerges than the one recommended?
  10. What if the expenditure on a safety audit just reinforces what we know and nothing new is learned?

Is there another option besides a traditional safety audit? Yes, there is. And it’s different.

A sense-making approach boosts the capacity of people and organizations to handle their activities successfully, under varying conditions. It recognizes the real world is replete with safety paradoxes and dilemmas that workers must struggle with on a daily basis. The proven methods are pragmatic and make sense of complexity in safety in order to act. The stories gathered from the workforce including contractors often go beyond safety robustness (preventing failure) and provide insights into the company’s level of safety resilience. Resilience is the ability to quickly recover after a failure, speedily implement an unanticipated opportunity arising from an event, and respond early to an alert that a major catastrophe might be looming over the horizon.

The paradigm is not that of an expert with deep knowledge of best practices in safety but that of an anthropologist informed by the historical evolution of safety practices. The Santa Fe Institute noted companies operate in industries which are complex adaptive systems (CAS). Safety is neither a product nor a service; it is an emergent property of a complex adaptive system. For instance, safety rules enable safety to emerge, but too many rules can overwhelm workers and create confusion. If a tipping point is reached, danger emerges in the form of workers doing workarounds or deliberately ignoring rules to get work done.

Anthropologists believe culture answers can best be found by engaging the total workforce. The sense-making consultant’s role is to understand the decisions people have made. Elevating behaviour similarities and differences can highlight what forces are at play that influence people to choose to stay within compliance boundaries or take calculated risks.

By applying complexity-based thinking, here’s how  the what-if concerns listed above are addressed.

  1. Escape expertise entrapment.
    There are no preconceived notions or solutions. As ethnographers, we record observations that describe the safety culture. Stories are easy to capture since people are born storytellers. Stories add context, can describe complex situations, and emotionally engage humans.
  2. Be mindful.
    You can only act to change the Present. Therefore, attention is placed on the current situation and not some ideal future state that may or may not materialize.
  3. Stay clear of cognitive dissonance.
    This leads to the confirmation bias — the often unconscious act of referencing only those perspectives that fuel pre-existing views.
    There are no survey questionnaires. Questions asked are simple prompts to help workers get started in sharing their stories. Stories are very effective in capturing decisions people must make dealing with unexpected varying conditions such as conflicting safety rules, lack of proper equipment, tension amongst safety, productivity, and legal compliance.
  4. Avoid confrontation.
    Front-line workers are not required to answer audit questions. They have the trust and freedom to tell any story they wish. It’s what matters most to them, not what a safety expert thinks is important and needs to interrogate.
  5. Treat everyone the same.
    Safety personnel,  supervisors, managers, executives also get to tell their stories. Their behaviours and interactions play a huge role in shaping the safety culture.  There is no “Them versus Us”; it’s anyone and everyone in the CAS.
  6. Make it easy and comfortable.
    There is minimal uneasiness with recording field observations since workers choose the topics. A story with video might show what goes wrong or what goes right. If union agents are present, they are welcome to tell their safety stories and add diversity to the narrative mosaic.
  7. Be guided by the compass, not the clock.
    Performance improvement is achieved by focusing on direction, not targeted SMS outcomes. This avoids the dilemma of workers, through their stories, identifying the SMS itself as a problem. Direction comes from asking: “Where do we want fewer stories like these and more stories like that?” The effectiveness of a performance improvement intervention is measured by the shift in subsequent stories told.
  8. Choose safe-to-fail over fail-safe.
    Avoid the time and effort developing a robust fail-safe action plan and then weakening it with CYA assumptions. When dealing with uncertainty and ambiguity, probe the CAS with  safe-to-fail experiments. This is the essence behind Nudge theory, introducing small interventions to influence behaviour changes.
  9. Sail, not rail.
    Think of navigating a ship on an uncontrollable sea of complexity rather than driving a train on a controllable track of certainty. Deviation manoeuvres like tacking and jibing are expected. By designing actions to be small, the emergence of surprising consequences can be better handled. Positive serendipitous opportunities heading in the desired direction can be immediately seized. On the other hand, negative consequences are quickly dampened.
  10. Focus on what you don’t know.
    A sense-making approach opens the individual’s and thus the company’s mindset to Knowledge (known knowns) as well as Ignorance (unknowns, unknowables).  New learning comes from exploring Ignorance. By sensing different behaviour patterns that emerge from the nudges, it becomes clearer why people behave the way they do. This discovery may lead to new ways to strengthen safety robustness + build safety resilience. This is managing the evolutionary potential of the Present, one small step at a time.

If you’re tired of doing same old, same old, then it’s time to conduct an “audit” on your safety audit approach and choose to do safety differently. Click here for more thoughts on safety audits.