CCRM Fundamentals

CCRM leverages the concepts of High Reliability Organizations to ensure the highest levels of safety and reliability across all critical infrastructure sectors. It rests on a suite of core principles of proactive crisis and risk reduction that are generalizable and transferable across industries. These fundamentals are targeted at what we call ‘socio-technical systems.’ The ‘socio’ part acknowledges that people are an integral part of the system, not just its physical or technological components. People can be a major resource for organizations when it comes to proactive risk reduction, and our approaches embrace and leverage this frequently overlooked resource as a major asset for proactively reducing risk!

What does the CCRM approach do for an organization?

  • Helps avoid E3 Errors (Solving the Wrong Problem Precisely)
  • Utilizes assumption audits to ensure fundamental assumptions are consistent with actual organizational practices

  • Promotes the “clarity test” to maximize alignment vertically and horizontally throughout an organization with respect to intended safety and reliability outcomes

  • Aims to ensure “Effective Communication” across disciplines/silos through the use of analogies and stories

  • Frames responsible decision metrics consistent with fundamental system/organizational assumptions

  • Configures surveillance systems to monitor and minimize the skew between “Work as Imagined” and “Work as Done” (see the sketch after this list)

  • Arms organizations to avoid knowable/preventable catastrophic outcomes by leveraging people as risk-reduction resources
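
The sketch below illustrates one way such a surveillance configuration can work: compare the documented procedure (“Work as Imagined”) against what operators actually log (“Work as Done”) and raise an alert when the skew exceeds a tolerance. It is a minimal illustration only; the procedure steps, shift-log entries, and alert threshold are hypothetical, not part of CCRM.

    # Minimal sketch: monitoring the skew between "Work as Imagined" and "Work as Done".
    # All step names, the shift log, and the threshold below are hypothetical illustrations.
    from difflib import SequenceMatcher

    def skew(work_as_imagined, work_as_done):
        """Return 0.0 when logged work matches the procedure exactly, 1.0 when fully divergent."""
        return 1.0 - SequenceMatcher(None, work_as_imagined, work_as_done).ratio()

    # Hypothetical procedure (Work as Imagined) vs. what a shift log recorded (Work as Done).
    procedure = ["isolate_valve", "verify_pressure", "open_bypass", "log_reading"]
    shift_log = ["isolate_valve", "open_bypass", "log_reading"]  # pressure check skipped

    SKEW_ALERT_THRESHOLD = 0.1  # illustrative; a real surveillance system would calibrate this
    if skew(procedure, shift_log) > SKEW_ALERT_THRESHOLD:
        print("Alert: Work as Done is drifting from Work as Imagined; review practice and procedure.")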

Quantitative and Qualitative Risk Reduction Techniques include:

  • Mindfulness – Being aware of organizational drift away from intended safety and reliability
  • Sensemaking – Comprehension in environments with overwhelming data and/or contradictory information
  • Modes of Inquiry (Approaches to develop knowledge)
    • Agreement – Group consensus
    • Analysis – Quantitative analyses
    • Multiple Realities – Viewing a problem from multiple perspectives
    • Dialectic – Opposing sides/viewpoints (similar to court with plaintiff/defendant)
    • Unbounded Systems Thinking (UST) – Totally open, everything and anything is considered
  • Leveraging Uncertainty as a Management Resource

  • High Reliability Organization (HRO) Attributes

What are High Reliability Organizations (HROs)?

HROs have a clear and explicit concept of “reliability” that is shared consistently across the organization

  • reliability encompasses both organizational processes and outputs and is an integral part of overall organizational safety
  • high reliability is the repetitive, continuous, and safe provision of outputs
  • reliability also involves the identification of ‘precluded events’: clearly specified events that, by a wide and active consensus both within and beyond the organization, must never happen
  • the high reliability management process is the systematic, continuous, dynamic, and safe “production” of the intended output(s) while avoiding the ‘precluded events’ over time

HROs have a highly integrated reliability and safety-minded organizational culture (widely held assumptions, attitudes, values, and practices)

  • personnel in HROs internalize a commitment to reliability and safety as part of their identity, regardless of their positions
  • it is assumed that everyone is responsible for safety; responsibility is not off-loaded to those who are designated as safety officers or who work in safety departments. In this way, the safety culture of the organization persists and is “person-proof”: widely accepted attitudes and practices are not vulnerable to the shifting goals of new CEOs or the weakness of a particular safety officer
  • it is a widely accepted view that simple regulatory compliance is not a guarantee of safety. If the organization is regulated and operated to “regulatory adequacy” rather than excellence at all levels of performance, then any lapses in performance will be away from “regulatory adequacy” and not from excellence.
  • it is accepted as part of the culture of an HRO that there will be fewer lapses away from excellence than away from “regulatory adequacy”, because excellence in a high reliability organization means constant watchfulness at all levels of the organization for any slips or lapses into precursor conditions, as well as a constant search for ways to improve reliability and extend safety margins.
  • HROs exhibit an atmosphere of commitment to excellence and a managerial crispness that can be seen even in areas that would seem to lie outside immediate production processes: things work in restrooms, food services, lighting, phones, etc. There is minimal backlog in the maintenance of even mundane and everyday things.
  • personnel are not punished for speaking up about concerns they have about the safety of conditions and practices. Prompt corrective action processes involve the personnel who identify problems, and they may be rewarded for doing so. Corrective action programs are not backlogged to the point that employees feel that filing an action request is “more trouble than it’s worth.”
  • HROs use decision-making practices that emphasize prudent choices over those that are merely allowable. A proposed action must be demonstrated to be safe in order to proceed, rather than requiring a formal burden of proof of its “unsafety” in order to prevent or stop it.

HROs recognize and utilize “uncertainty” as a formal resource

  • limits of knowledge and uncertainty are acknowledged and characterized in the organization, such as parametric (measurement) uncertainty, modelling uncertainty, and incompleteness uncertainty (known unknowns); see the sketch below
  • types of uncertainty are differentiated and specified and provide important information in relation to potential errors
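
As a minimal illustration of treating uncertainty as information rather than noise, the sketch below propagates parametric (measurement) uncertainty through a simple, hypothetical plant model using Monte Carlo sampling and reports the resulting spread. The model form, measured values, and stated uncertainties are illustrative assumptions only, not CCRM prescriptions.

    # Minimal sketch: propagating measurement (parametric) uncertainty through a hypothetical model.
    import random

    def temperature_rise(flow_kg_s, heat_kw):
        """Hypothetical plant model: coolant temperature rise from flow rate and heat input."""
        return heat_kw / (4.2 * flow_kg_s)  # degrees C, assuming a water-like heat capacity

    # Measured values with stated measurement uncertainty expressed as standard deviations.
    FLOW_MEAN, FLOW_SD = 12.0, 0.5    # kg/s
    HEAT_MEAN, HEAT_SD = 900.0, 30.0  # kW

    samples = sorted(
        temperature_rise(random.gauss(FLOW_MEAN, FLOW_SD), random.gauss(HEAT_MEAN, HEAT_SD))
        for _ in range(10_000)
    )
    mean = sum(samples) / len(samples)
    low, high = samples[250], samples[-251]  # approximate 95% interval from the draws
    print(f"Temperature rise: {mean:.1f} C, 95% interval {low:.1f}-{high:.1f} C")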

HROs routinely use ‘precursor conditions’ as a proactive risk-reduction approach

  • the HRO approach includes analyzing both probable and possible chains of error or failure that can lead to unacceptable failures or accidents, and then analyzing precursor conditions that can lead to errors that could then propagate downstream to these ultimate events.
  • physical precursor conditions (e.g. excessive operating temperatures and pressures, loss of backup equipment, loss of sensor and monitoring inputs) are those that exceed an established bandwidth of acceptable operating conditions, including operational uncertainty in the ability to assess risk (see the sketch after this list).
  • organizational precursor conditions include, for example, excessive cognitive load that undermines the attention of operators, excessive noise in control rooms, breakdowns in organizational communication, or erosion of inter-departmental cooperation and trust.
  • if precursor conditions of either type occur, key personnel (pilots, control operators, supervisors, and maintenance foremen) can stop jobs until the situation moves out of the precursor zone. Organizational precursors are less formally delineated than physical ones, but here too supervisors or even individual operators may complain about, or even halt, operations in the face of them.
  • training and individual mindfulness to quickly recognize precursor zones, together with the ability to rapidly respond and restore operations to acceptable bandwidths, are key properties of high reliability organizations
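
The sketch below illustrates the bandwidth idea in miniature: live readings are compared against an established band of acceptable operating conditions, and any reading outside its band (or missing altogether) flags a precursor condition that can support a stop-job decision. The parameter names and limits are hypothetical.

    # Minimal sketch: flagging precursor conditions against hypothetical operating bandwidths.
    ACCEPTABLE_BANDS = {
        "reactor_temp_C": (250.0, 320.0),
        "loop_pressure_bar": (10.0, 14.0),
        "backup_pumps_available": (2, 3),
    }

    def precursor_check(readings):
        """Return the parameters whose current readings fall outside their acceptable band."""
        out_of_band = []
        for name, (low, high) in ACCEPTABLE_BANDS.items():
            value = readings.get(name)
            if value is None or not (low <= value <= high):
                out_of_band.append(name)  # a missing reading is treated as a precursor condition too
        return out_of_band

    readings = {"reactor_temp_C": 331.0, "loop_pressure_bar": 12.2, "backup_pumps_available": 1}
    violations = precursor_check(readings)
    if violations:
        print("Precursor zone entered (" + ", ".join(violations) + "): key personnel may stop the job.")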

HROs embrace and support “Reliability Professionals”

  • many workers function informally as “reliability professionals” — they may have long experience in many different jobs and departments, and a mix of formal and experiential knowledge. They are always alert to precursor conditions or other potential threats to reliability and safety and will speak up when they observe them. Reliability professionals in this informal role have assumed a scope of attention and responsibility larger than that of their formal job description.
  • the role of “reliability professional”, though informal, is respected and supported by upper management

HROs exhibit a strategic degree of organizational flexibility

  • while HROs are generally formal bureaucratic organizations with a chain-of-command and specialized job descriptions and responsibilities, many work groups are organized as teams in control rooms, maintenance and engineering departments. There is flexibility in non-routine situations for decision responsibilities to “migrate” downward to those with specialized knowledge and those closer to decision implementation.
  • unlike many formal organizations where communication patterns rigidly follow formal chains-of-command and do not cut across specialized departments, communication within an HRO is often distributed across departments and levels of hierarchy. The density of channels is a way of ensuring that information pertaining to reliability and safety flows freely and that important decisions are not made in ignorance of relevant but unknown knowns — information that is, in fact, known elsewhere in the organization.

HROs are constantly on the lookout for errors

  • repetitive, continuous, and safe provision of outputs requires well-understood technologies and workflows embedded in formal design principles, models, procedures (i.e. the “rules of rightness”) for operation, supplemented by experiential knowledge widely distributed throughout the organization.
  • formal procedures are an important guide to all significant task performance
  • Procedures are revised and refined so that they reflect the current and expanding knowledge base of the organization
  • workers “own” the procedures: they are frequent initiators of, and regular participants in, an ongoing formal revision process. Because of this ongoing process, procedures never have to be intentionally circumvented.
  • a continuous process of improvement in reliability and safety is highly valued and is pursued through procedural revisions and refinements, new technologies (carefully analyzed and tested before adoption), careful root cause analysis of incidents and accidents, and the subsequent enhancement of the analysis applied to precursor conditions.
  • Individuals in HROs recognize and plan for the possibility of mistakes, latent issues, and inherent risk, even while expecting successful outcomes