ERM is a broad church. Currently, it means different things to different people, depending on experience and discipline. How far can the term be pushed before it loses meaning?
In a recent chat thread, a US central government agency’s head of risk appealed for “an ERM system evaluation checklist” to be used “to compare features and functionality”. This is certainly not the first time I have been asked such a question (although, in this instance, I was asked along with 3,992 other members of the LinkedIn group).
There are plenty of these database-driven systems on the market; all do roughly the same job. The principal differentiators are (i) look and feel, (ii) ease of integration of quantitative analysis (do you want to be taken seriously?) and (iii) flexibility of reporting.
Selecting one of these systems under explicit instruction is one thing; under those circumstances it is a compliance solution. Doing it as a conscious and deliberate choice to further enterprise risk management is quite another. No one should be under any illusions: a compliance solution is precisely what you will achieve, despite the best will in the world. Simply put, there is no such thing as an ‘ERM system’ in the technology sense. There are plenty of software vendors who, for perfectly valid commercial reasons, take it upon themselves to badge their risk/compliance databases as such. And there are plenty of well-intentioned if unsuspecting buyers. But to implement an effective ERM system, start somewhere else. Compliance is fine, but it is not ERM and never will be.
In the case of the US agency, the risk director reported having undertaken an extensive study, and the organisation had developed a five-year roadmap for risk, no doubt full of aspirational statements. The agency had probably paid external advisors to assist with this and, as a result, there will be considerable expectation among its senior executives that risk will be better managed and control will be more effective once the roadmap is implemented and followed. If only they could select the right tool for the job!
Yet here was the risk director of the agency, in a LinkedIn chat thread, trying to elicit the best way to select what they believe to be the most important enabler of the roadmap they have defined. It strikes me that he was probably experiencing the first, dawning moments of realisation that the artifice the agency had assembled can only lead, ultimately, to disappointment. Like so many others before them, they will have experienced a few “but can it work, really?” moments. Sadly, the answer, provided by innumerable examples of the experience of others, is: “No, it cannot and it will not.” Or, more accurately, yes, it will work right up to the moment that it doesn’t. And when it doesn’t, it will be spectacular.
Simply put, attempting a simplistic solution to problems of irreducible uncertainty and complexity is ultimately ineffective and thus (almost) pointless. No risk database, no matter how well specified and developed, can provide the perspectives, mind-sets, information architecture, capabilities and competencies required in an organisation to address its risk challenges. Risk databases are the right answer to the wrong question.
If the objective is to implement a tool to collate all the risks that an organisation already knows about, then a risk database is the right place to start. Of course, this raises the question of why one would ever want to do such a thing. However, if the objective is to address risk and uncertainty and their implications for the structure and metastructure of control in the organisation, and to develop approaches and techniques that assist executives to manage the risks within their areas of authority, the starting point is very different. It is not what people already know that will enhance anticipation of and resilience to risk; it is what they do not know, and how they deal with that lack of knowledge, with uncertainty, or with a lack of understanding of complex and ambiguous operating conditions and their consequences.
If an approach based around populating a risk database is chosen, senior people will push back – as the risk director reports that they have in the US agency case – because they see another compliance activity heading their way that will do nothing to help them with their rather confusing day jobs. They see a whole load of workshops designed specifically to elicit from them what they already know – using their watch to tell them the time. Donald Rumsfeld got it right: there are known unknowns and unknown unknowns. Of course, his department’s approach to that problem at that time (2002) was scarcely exemplary. Quite the contrary.
The difficulty is that the risk database approach and its attendant risk elicitation workshops do not address the important types of uncertainty or, if they do, they do so tangentially and partially. What we need is an approach that marches headlong into the bewildering and considerably more useful world of addressing the nature and implications of behaviours, uncertainties and complexities in the organisation’s strategic, operating and control environments. No easy task.
Innumerable organisations are confronting this problem. There also appears to be a growing recognition in government that risk is not being addressed satisfactorily in government agencies by the current orthodoxy. Endless revelations by witnesses appearing before US House and Senate sub-committees and UK parliamentary select committees, relating to government agencies’ (and, increasingly, private firms’) crises and failures, suggest we have not achieved the insight on risk that the proponents of (what I will call here) the ‘orthodox approach’ to management of risk – workshops, registers, matrices – suggest we ought to have realised. We need thorough – even forensic – analysis of whether approaches based on compiling lists of known risks are effective for managing risk; frankly, sustained utility seems improbable.
Instead, we require an approach – or, more realistically, a set of approaches – that recognises the ‘non-linear’ nature of risk. Much of the time, firms, agencies and other organisations work in a zone of relative stability punctuated by periods or cycles of greater or lesser volatility. In these periods, the risks that manifest are known about and, to a greater or lesser extent, understood. Such risks are, or can be, adequately captured using the orthodox approach referred to above.
However, these systems are never ‘stable’ per se. Occasionally, the unanticipated results of the unpredictable behaviours and interactions of human beings in the system, or a random external shock, or the revelation of previously unknown conditions in the operating environment (what Taleb calls a ‘black swan’), can shift such systems into a phase of instability that behaves very differently; a previously unknown (and probably unknowable) ‘tipping point’ has been passed. What happens thereafter is turbulent and chaotic. The trajectory is not predictable, but examination of previous crises reveals discernible patterns. In this phase, organisational resilience is crucial; interventions can be decisive in managing the crisis or can escalate it immeasurably. The transmission and amplification effects of ubiquitous social and news media mean that corporate intent and external effect can differ diametrically. ‘Control’ in the traditional sense is meaningless.
Any of us can relate this description to a recent failure with which, for whatever reason, we are familiar; in a sense it is a generalised ‘pathology of a crisis’. Until we can (i) understand the different crisis pathologies, (ii) imagine approaches to the management of risk that can provide a measure of anticipation of, and resilience and responsiveness to, rapidly escalating crises and (iii) provide interventions that address the crisis pathology as it really happens, risk management will not meet the (perhaps inflated) expectations of management – and of post-event parliamentary oversight functions.
The starting point is to understand what really happens – by looking at what has already really happened – in practice. We need robust and meaningful review of real risk incidents (of which there are plenty) and of the structure and performance of the risk management systems in situ in the host organisations at the time (of which there are almost none outside major hazard and loss-of-life events). Sweeping such crises under the closest carpet – and thereby failing to understand the pathology of the crisis – misses an essential learning opportunity each time it happens. The resulting review need not be too uncomfortable for the host; on the contrary, it provides an ideal opportunity for some organisational honesty and denouement in a low-threat environment. In one sense, the greater the involvement of the host, the better – as long as it does not extend to defensive veto of post-event analysis.
Such an approach would augment enormously our understanding of the ERM systems that matter: not periodic review of databases of known risks, but the messy reality of the operation of organisations’ routines for managing uncertainty, building resilience and identifying and responding to a potentially chaotic operating environment – internally and externally. From such an exercise, far more realistic prescriptions for risk management routines would emerge.
Inability to imagine an alternative is not a good enough reason to stick with an ineffective status quo. There are meaningful alternatives which must be given the opportunity to show their worth. The current situation, where they are crowded out by a convenient but simplistic risk management orthodoxy, serves no-one.