Separated by a common language?

As if he held the truth somehow to be self-evident, Jim DeLoach of Arthur Andersen, in a landmark paper written in 1995, prescribed a common risk language as a necessity for any enterprise risk management (ERM) initiative. Without any apparent fear of contradiction, he stated the need for a common risk language as gospel truth. I have never seen any evidence presented that this was the case then, or is the case now; nor have I seen any cogent case mounted to support the assertion. Yet I have seen the same claim reiterated almost universally.

In my view, far too much time is spent in ERM on the issue of language. There is no need to invent or impose a new language for the management of risk, loss or failure; on the contrary, it is downright dangerous to do so. The nearer we stay to dictionary or generally accepted definitions of terms, the better. Let’s be careful, as far as possible, to stick to terms as they are understood in general usage. When in doubt, refer to the OED and be prepared to explain meanings.

The notion that all sectors and disciplines should abandon their existing languages and terminologies in favour of an ISO-imposed approach (or any other approach) is neither realistic nor, even if it were, helpful. Such attempts at creating ‘corporate newspeak’ are seldom sustainably successful; people simply revert to how they have always used terms, or to the common definitions of the terms borrowed for the purpose.

My argument would be that the language is not the issue. Nor is the problem the use of specific terms – appetite, preference, tolerance, limits. By and large, the meaning of these words qua terms is clear. As Gareth Morgan has pointed out, they are simply metaphors that are more or less meaningful; they resonate with users and so persist and fall into common usage. Most of the technical arguments around preferences and tolerances are dwarfed (in my experience) by misapprehensions about quantification and what can and cannot be counted or measured reliably.

After all, perhaps the most misused term in the whole field is “risk” itself. 
ISO makes a passing nod to the issue of uncertainty, but the different types and sources of uncertainty, while well articulated in the strategy literature, are almost never referred to in the same documents that insist a common definition of risk be adopted. Epistemological issues of uncertainty are unaddressed by ISO, yet they are so critical as to have created a post-crisis fissure in the field of risk in financial economics.

As Knight pointed out in the 1920s, uncertainty is partly a knowledge problem, but also partly an insoluble problem of subjective estimation of an unknowable future. These differences are not splitting hairs, nor are they purely semantic; they cannot be solved with a common language. They are fundamental problems of limits to the potential efficacy of risk management practice. They are the real boundary issues in risk.

The on-going ISO-led attempt to use “risk” differently from both common usage (i.e. the possibility of loss) and previous technical usage (i.e. Knightian ‘risk’) is hardly the right starting point for expanding effective risk practice. Nor is the assumption of the need for a vocabulary that is externally imposed rather than one derived from existing, accepted uses of terms in finance, statistics, economics or other relevant disciplines. You will never persuade an engineer that, in relation to risk, the term ‘tolerance’ means something different from the dictionary definition or the common engineering usage. So why bother?

I am not denying the importance of accuracy or clarity. I am merely questioning the utility of arguing that people’s existing knowledge of terms needs to be overturned for form’s sake. I just think it is the wrong place to start.

So let’s look at the issue empirically.

Many organisations that have invested extensively in common definitions of risk, and that come from industries where risk carries clearly understood meanings, have failed. I am sure that many that have invested systematically in consistent interpretations have not failed. Neither set of observations proves or disproves the point. Most management of risk in most (non-financial sector) firms goes on despite, rather than because of, formal risk management systems (excluding treasury and risk transfer programmes). The vast majority of strategic decision-making is, currently, not conducted in terms that derive from risk management (of the ISO or any other variety), yet adaptive strategy is clearly one of the key elements of any firm’s management of uncertainty in its strategic environment. Given that more than half of firm failures and material idiosyncratic value losses are attributable to strategic error or mis-step (according to BAH research last decade), this matters. It is not, typically, a matter of vocabulary.

In financial services, the underlying risk challenge is not one of use of terms; it is the availability and provenance of reliable securities data, and the structure of data for inter-bank and bank-regulator transmission and analysis. There is enormously useful work being devoted to this, and quite properly. Here ISO (led by Karla McKenna of Citi) has a very important role to play, as has SWIFT (in a group in which I used to participate).

Or it is the application of valuation models pressed way beyond the limits of their original restrictive assumptions or the inferential utility of their underlying data.  Neither of these is a vocabulary problem, at least in the semantic sense.

If the field of debate is, say, nuclear power, there will be an engineering language specifically around that and the physical risks associated with it. But the costs associated with both containment (engineering) and containment failure will be physical (introducing an existing vocabulary around radiation sickness) and financial, for which we already have a vocabulary.

I struggle with the notion that a ‘common language’ is necessary here.  Or, more precisely, a new common language.  My argument is that we already have all the terminologies we need within the different disciplines represented in the firm. Ultimately, to provide “common criteria to allow decision-makers to compare dissimilar risks,” everything has to be reduced to a comparable basis; to a number – cost or loss – or to a probability or confidence interval (however calculated) or both.  Therefore, that number is ultimately financial and the languages are finance and statistical inference or probability. There are already languages for all of these. Why attempt to develop a new one when perfectly acceptable ones already exist?
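By way of illustration, here is a minimal sketch in Python – the risk names, probabilities and loss figures are all invented – of what that reduction to a comparable basis looks like: each risk, however dissimilar its origin, is expressed as a probability and a financial loss, from which an expected cost and a crude simulated confidence band follow.

```python
import random

# Hypothetical, invented figures: two dissimilar risks expressed on the
# common basis described above - a probability and a financial loss.
risks = {
    "plant outage (engineering)": {"p_per_year": 0.05, "loss": 2_000_000},
    "FX exposure (treasury)":     {"p_per_year": 0.30, "loss": 400_000},
}

for name, r in risks.items():
    expected_cost = r["p_per_year"] * r["loss"]
    print(f"{name}: expected annual cost = {expected_cost:,.0f}")

# A crude Monte Carlo run puts a confidence band around the combined figure.
random.seed(1)
trials = 100_000
totals = sorted(
    sum(r["loss"] for r in risks.values() if random.random() < r["p_per_year"])
    for _ in range(trials)
)
print(f"median combined annual loss:   {totals[trials // 2]:,}")
print(f"95th percentile combined loss: {totals[int(trials * 0.95)]:,}")
```

Nothing here requires a new vocabulary: probability, loss, expected cost and percentile are all terms that finance and statistics already own.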

Everyone brings personal and professional experience of risk to what they do. The role of the risk manager is not to alter, amend or correct that experience to fit terms. It is to provide a bridge between people’s existing experience and insights, and to provide the common criteria that allow decision-makers to compare dissimilar risks.

And, wherever practicable and useful, to quantify and ensure that the methods used for doing so, and for integrating the results, are analytically robust. Ultimately, the languages of risk are finance and physical harm and the operands are inference, causality and probability – statistics.

What we need is more realism about what we can and should be aiming to achieve in management of risk.  People who have no particular knowledge of epistemological debates or cultural research methodologies or corporate financial management or statistical inference cannot suddenly be expected to be expert therein because of a new job title or a course on an ISO standard.  Nor should risk managers expect others, who may be far better schooled in some of these areas, to conform to their restrictive interpretation of terms that bear on consideration of risk-taking and management of risk and uncertainty.  For example, regardless of how simple they may make life, risk maps using point-based estimates of risk cannot, technically, be meaningful. And yet they are used extensively in corporate risk practice.
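To make the point about point-based estimates concrete, here is a toy calculation (the two risks and their loss distributions are invented): both land on exactly the same cell of a probability × impact map, yet their tails differ materially, and the point estimate cannot tell them apart.

```python
import random

random.seed(7)
N = 200_000

def annual_losses(p, severity):
    # The event occurs with probability p; severity is drawn from the given law.
    return sorted(severity() if random.random() < p else 0.0 for _ in range(N))

# Two invented risks with the same point estimate: p = 0.10, mean severity = 1.0.
risk_a = annual_losses(0.10, lambda: 1.0)                      # fixed severity
risk_b = annual_losses(0.10, lambda: random.expovariate(1.0))  # fat-tailed severity

for name, sample in (("A", risk_a), ("B", risk_b)):
    mean = sum(sample) / N
    p99 = sample[int(0.99 * N)]
    print(f"risk {name}: expected loss {mean:.2f}, 99th percentile {p99:.2f}")
# Same 'probability x impact' cell on the map; very different 99th percentiles.
```

A single dot on a heat map collapses precisely the distributional information that distinguishes the two.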

We should start with establishing a common basis for thinking analytically about uncertainty and the effect of potential alternative futures on corporate performance. We should examine the assumptions about the future that underpin our expectations of that performance and how it will be achieved. We should examine what has gone wrong elsewhere for lessons as to how wrong we may be in our assumptions and how the environment can shift unexpectedly in terms of context, macroeconomic conditions, consumption preferences or technology (and how we may position ourselves to exploit such changes). We should scan the environment for emerging risks or for indicators of such shifts in operating conditions. And we should be very careful – sceptical, indeed – about apparently self-evident truths. They are better left to the Founding Fathers.


13 thoughts on “Separated by a common language?”

  1. Peter,
    We have not had a common risk terminology up to now, and you can see the state of the world. The world is full of evidence of firms that, lacking a common terminology, have failed so far. You might be correct if you could find someone who has implemented a common ERM terminology in his/her organization and still failed, but you do not have this evidence, so how do you conclude that a common terminology is not needed? Please show any evidence supporting your comment.

    • Alpaslan

      Enron had a common risk terminology set; indeed, they had more than one. They were an engineering firm and a financial risk management firm. They failed. All the financial institutions bailed out under TARP in the US, or by the UK government or other EU governments, had a common, regulatorily-developed language for risk; they all failed. Many firms that have not had common terminologies have failed. Abengoa, the Spanish firm we have discussed in another forum, has an ISO-compliant ERM approach and has lost 75% of its value.

      The issue is not whether or not firms have a common risk terminology. It is what they do to form their understandings of the risks they face and what they do with that knowledge. It is more to do with the boundary issues of the inferential utility of knowledge than with terminology. Re your point, the ‘post hoc, ergo propter hoc’ fallacy applies – after this, therefore because of this. There simply is no line of causation. But there never has been.

      Everybody brings a knowledge of risk to their daily work lives. As John Adams has pointed out, learning to walk or ride a bicycle involves risk (as I recently discovered at the cost of two broken ribs and a punctured lung – gravity still works). Anyone who has done one of these things has learned about managing risk at some level. We do not need externally validated glossaries to understand that things can go wrong. We do not usually need externally derived glossaries to tell us what might go wrong. All they do is add to the effort of thinking about risk and distort people’s naturalistic reactions to risk. The language argument is there for the benefit of promoting the role of the risk manager rather than improving the quality of the management of risk and uncertainty.

      And, in any case, most of the glossaries available are simplistic and, at one level or other, technically wrong.

      The failures in banking were not terminological. They arose from a complex set of factors combining with a set of very well understood ones. The solutions to the banking crisis are not to be found in raising capital levels (although reducing leverage clearly reduces risk), which also chokes off lending, resulting in a pro-cyclical contraction. Little or no serious thought has been given since the crisis to addressing the real issues of systemic inter-connectedness or capacity for institutional managed failure. The taxpayer is still on the hook.

      The issue of evidence cuts both ways. There is almost no evidence that imposing consistent terminology improves the efficacy of firms’ routines for managing risk. But in the vast majority of cases, use of terminology as it appears in everyday usage will suffice for non-technical purposes and reduces confusion and adverse reaction among participants in the debate (per Matthew’s point).

  2. People seem to me to be more comfortable with words like uncertainty, probability, and value than with risk, risk appetite, risk velocity, and other ‘risk’ jargon. Plain language won over and over in my 2010 survey of terms related to ‘risk appetite’.

    You are quite right that the mathematical/scientific tradition has established a nearly common language that is better than anything that ISO is likely to invent for Guide 73 or ISO 31000 in future editions.

    Even Grant Purdy, an ISO 31000 stalwart, has published an article saying that the word ‘risk’ is such a problem that he is tempted to stop using it and move on to something that better identifies the intended scope of the standard, in a society where ‘risk’ already has a meaning that everyone knows concerns bad things that might happen.

    Anyone wanting to act on your points about ‘common language’ can do so whenever they write a document about ‘risk’. If the document has to use the word ‘risk’ for its topic to be recognized, simply use it once in the title and then switch to plain English terms like uncertain, probability, money, and value for the body of the document.

    I have carried out some tests of clarity by taking paragraphs originally written in ‘risk’ jargon and rewriting them with plainer, more specific words like uncertain and probability. The improvement in clarity is striking. Not only can other people understand it better but it seems to me much harder to write something mistaken if you avoid the ‘risk’ jargon. Despite the big difference in clarity, I doubt if many people would even notice that ‘risk’ was largely missing.

    • . . . And you achieve greater clarity. Risk and uncertainty are not the same thing.

      As the Nimiipuu chief Rolling Thunder said: “the earth and I are of one mind”.

      It raises the question: who really keeps it simple – the observer who acknowledges complexity, or the one who denies its existence even when it is present?

      • Peter, with reference to “Risk and uncertainty are not the same thing” I should just clarify that in my re-write comparisons I didn’t just substitute the word ‘uncertainty’ for ‘risk’, which would be a mistake and doesn’t help much at all. Removing the ‘risk’ jargon takes a bit more thought than that. It’s still very much worth the effort, and of course the easiest way of all is just not to use the ‘risk’ jargon at all.

  3. I’ve seen some discussions online and in print (though I can’t for the life of me track down where) that seem to imply that Knight’s distinction between “uncertainty” and “risk” isn’t really relevant today. Here’s what I could dredge up a while back:

    http://www.linkedin.com/groups/Frank-Knights-distinction-between-Risk-4283266.S.95065768?qid=4e3173bb-a1c0-431f-8e73-3506b3be9d46&trk=group_most_popular-0-b-cmr

    http://www.bogleheads.org/forum/viewtopic.php?t=87919

    How much truth is there to such assertions? If Knight’s definitions are a dead end, do the “modern” definitions of risk and uncertainty bring us all a little closer together conceptually?

    • The perspective that seems to do it for me is the ‘uncertainty about frequencies’ perspective. I don’t find Knight’s concepts at all useful and agree with Doug Hubbard that they’ve been a bit of a diversion.

      What is clear from work in the last few decades on the Reference Class problem and on skill scores for probabilistic forecasting is that it helps to appreciate just how much of probability is for us to choose.

      If the outcome of a particular future situation is uncertain, then we have a variety of ways to apply probability thinking to that. We can choose to see the situation as an example of a broadly defined situation type that occurs many times and for which we have a lot of data. If we do that then we will have great certainty about the frequency of alternative outcomes, but realise that this may not be using much of the information we have about the particular situation for which we want a probability. Alternatively, we can choose a reference class that is much narrower, taking into account more facts about the situation. This will leave us with much less experience and much less certainty about the relative frequencies of alternative outcomes.
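      A toy numerical illustration of that trade-off (the counts below are invented): a broad reference class gives a tight frequency estimate that may not fit the particular case, while a narrow class fits the case better but leaves a wide interval.

```python
import math

def freq_with_interval(events, n, z=1.96):
    """Observed frequency with a normal-approximation 95% interval."""
    p = events / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Invented counts: the same future event viewed through two reference classes.
reference_classes = {
    "broad (all projects, n=2000)":    (300, 2000),
    "narrow (similar projects, n=12)": (4, 12),
}

for name, (events, n) in reference_classes.items():
    p, lo, hi = freq_with_interval(events, n)
    print(f"{name}: frequency {p:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
# The broad class pins the frequency down tightly but may not describe this
# case; the narrow class describes it better but leaves much more uncertainty.
```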

      The end result of these choices is a series of probabilistic forecasts that are better or worse depending on our skill (data, choices, reasoning). The more skill we have, the faster we can make money, especially when betting against others with less skill.

      So, if anyone is reading this and thinking “what is this mumbo jumbo that Matthew is spouting?” let me just say that understanding this –> good probabilities –> more money

      My book, A pocket guide to risk mathematics, starts off with these ideas and takes it from there.

  4. Peter, all your thoughts are true, but only within the perspective of personal risk management. The firms you gave as examples did not have a common understanding of the terminology; having a written document does not show they have been using it. Similarly, you suggest we have dictionaries: why, then, do people not just use dictionaries to understand each other? As I wrote on other topics, the meaning behind the words must be understood AND acted upon accordingly by all parties before one can say they have a common terminology.

  5. Alpaslan

    Two points I would make in response: (a) the personal and the commercial or professional are not separable. People do not have two judgment sets or routines; they may weigh factors differently in different settings but do not change their fundamental bases for reasoning or inferential competence simply because they move between risk environments; (b) you set the bar very high here, and I think unrealistically high. Now you require (i) that there be a glossary, (ii) that people are informed about the glossary, presumably in some structured way, (iii) that people understand the glossary presumably implying also something about its quality and utility, (iv) that people access and (v) apply the terminology and (vi) do so correctly.

    That just seems to me to be a lot of additional assumptions that are not justified in my experience. Also, where there are existing and, indeed, potentially conflicting professional terminology sets already present in the firm – say between finance and engineering – you set up the need for a debate about terms that does not move forward understanding and creates antagonism – exactly what you would hope to avoid.

    The problem is almost greater when there is external terminology through regulation. I am sure you will have experienced the debates on meanings of terms between credit, market and operational risk personnel in your institution. Personally, I believe the answer is to allow alternative terms and uses to flourish and to ensure that the issues are resolved analytically rather than semantically. It creates less conflict and unhelpful debate and focuses attention on analysis and action.

    Let people bring their own baggage to the table (to mix a metaphor); just make them focus on the analytically useful aspects once they do.

  6. Eric

    After reviewing the comments on the threads you have identified, I am more than a little bemused. Having studied economics and international relations – very different but complementary fields – I am always a little amused when people cite economists selectively or partially; it is a huge field and impossible to know it all (especially when not an academic). But the political scientist in me is always more than a trifle annoyed at the presumed moral superiority of some economists, and that is what comes through in the threads.

    While I hesitate to criticise without a deeper knowledge of his work than his posts on the thread, I do not find Hubbard’s discussion of the issues either insightful or accurate. His assertion that Knight deviated from other professions’ uses of terms is irrelevant. Prior to Knight, there was no clear distinction between risk and uncertainty in economics. In his masterwork published 7 years later, Keynes uses the terms without defining them; he does not refer to Knight.

    The way in which actuaries use the terms risk and uncertainty and what they mean by them are interesting and relevant, but hardly conclusive. More importantly, actuaries fudge the epistemological issue: if there is irreducible uncertainty, then risk cannot be priced. The reference to physics is misplaced. Heisenberg was stating something else entirely; his uncertainty principle bears no relation to problems of induction, but to physical observation.

    The reality is that there is simply no escaping the underlying epistemological challenge of induction: you cannot know the future. Previously assumed conditions can change without warning. Taleb uses the example of the previously unknown Cygnus atratus; I prefer Bertrand Russell’s chicken, fed for 999 days and expecting its feed on the final day in the usual way; instead, the farmer appears with an axe.

    Hubbard states: “so, even in the sloppy real world, uncertainties and risks can be quantified to make a set of a priori probabilities.” Well, yes, they can. But here Knight is instructive in his differentiation between objective and subjective uncertainty.

    Some of his comments about Knight do not match what I have read of Knight; again, I think he is failing to use Knight’s classification, relying instead on the one popularly ascribed to him.

    The comment of his I find most surprising is the following: “I can always assess my own uncertainty about something.” Well, no, you cannot. And here Russell is pretty clear. The future is uncertain; that is all there is to it. I think he simplifies Bayes on this point also, although my knowledge of Bayes’ actual writings is scant.

    I also think his discussion of human judgment of error is enormously overconfident. Not because he is wrong but because he is confusing experimental results about judgment under uncertainty with decision-making in a real environment. But, surprisingly, I agree with almost all his conclusions.

    The problem is typified by his quote from George Box: “all models are wrong but some are useful”. What Box went on to say is that the important question is how wrong they need to be before they stop being useful. And that is the point: how heavily we rely on models. And that has a behavioural element: if people come to rely on models, will their reliance outlast the models’ utility?

    Like the discussion in the other thread, his arguments favour modelling of risk in financial markets; he assumes they are generally applicable across all settings of risk and uncertainty. I disagree. Bertrand Russell’s chicken still dies.

    On the other thread, there is a suggestion that Knight was overthrown by von Neumann and Morgenstern. That is certainly not how I recall their writing. On the contrary, their utility axioms were highly stylised and made a raft of restrictive assumptions, subsequently reworked by Arrow, who went on to collaborate with Debreu to apply them to a Walrasian auctioneer; general equilibrium theory was born. But I fail to see how their work overturned Knight; they were talking about somewhat different things. That said, risk (as we use it today) applies Rowe’s definition of exposure path and consequence value.

    My only other observation (I could go on for hours) is that the quote by Merton overlooks the very important insights of his father, Robert Merton Snr. I find the logic of his best known work, ‘Unanticipated consequences of purposive social action’, captures the problems perfectly.

    Knight’s observations are not a dead end. On the contrary, they are just as relevant now as they ever were. And reintroducing uncertainty as the central proposition and problem of risk in a corporate setting is not only desirable but, in my mind, essential. The on-going presumption that we can model the future reliably is hubristic and dangerous. As we have seen. As we have always known in international relations. But that does not mean throwing away models; it just means George Box was right also.

    I hope that clarifies.

    • Peter – From what I’m gathering, you seem to espouse a frequentist view of probability, although rather reluctantly. From a subjectivist standpoint, the “farmer with the axe on the 1000th day” might be better viewed as simple randomness – but only as long as no evidence was ever available to the chicken!

      If other chickens that had been there longer suddenly disappeared, if strange noises could be heard each time, any chicken with a little sense would begin to alter its evaluation of the situation, and perhaps attempt to find out more regarding the length of stay for its “disappeared” comrades, thereby reducing its subjective uncertainty.

      It would probably never get an exact answer (in this metaphor, even sensible chickens can’t count), but it might learn enough to realize that it should probably try to find another source for room and board.

  7. Matthew, Eric

    The reference class issue (as well as data choice) hits the nail squarely on the head. Although, Matthew, I need to split a hair here (I need all the hairs I can get).

    You state: “The more skill we have, the faster we can make money, especially when betting against others with less skill.” I would amend as follows: “The more skill we have, the more quickly we can [expect to] make money, especially when betting against others with less skill.”

    After all, Bertrand Russell’s chicken still dies; the future remains uncertain. Estimation is not determination.
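    To put a number on that ‘[expect to]’, a quick simulation (the edge, stakes and horizon are all invented): a bettor with a genuine edge has a positive expectation on every bet, yet over a finite run an appreciable fraction of paths still ends at a loss.

```python
import random

random.seed(42)
# Invented parameters: a bettor with a real edge - wins 55% of even-money bets.
p_win, n_bets, stake, n_paths = 0.55, 100, 1.0, 10_000

losing = 0
for _ in range(n_paths):
    wealth = sum(stake if random.random() < p_win else -stake for _ in range(n_bets))
    if wealth < 0:
        losing += 1

print(f"expected profit over {n_bets} bets: {(2 * p_win - 1) * n_bets * stake:+.0f}")
print(f"fraction of paths ending at a loss: {losing / n_paths:.1%}")
# Positive expectation, realised only on average: skill raises the expected
# speed of gain; it does not guarantee the outcome over any finite horizon.
```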

    • Hi Peter, technically you are correct about ‘[expect to]’ because this is a result based on mathematical expectations. I don’t really doubt that, on average or however else one might express it, those expectations tend to be justified!

      I have recently carried out two pilot studies in conjunction with the University of Southampton in which students bet with imaginary money on difficult general knowledge questions. The finding, both times, is that betting using expected values based on the students’ subjective certainties outperforms (on average) purely statistical strategies based on their accumulating experience of the questions. This is true even though those subjective certainties are poorly calibrated in most cases. The explanation is probably something to do with using more information to produce probabilities with a higher overall skill, even though they have poor calibration. We are trying to design a study that explores this more rigorously.
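      The calibration-versus-skill point can be made concrete with a Brier score (the mean squared error of the forecast probabilities; lower is better). The sketch below uses invented forecasters and probabilities, not our study data: an overconfident but informative forecaster beats a perfectly calibrated but uninformative one.

```python
import random

random.seed(3)
n = 50_000
outcomes, base_rate, ideal, overconfident = [], [], [], []

for _ in range(n):
    # Half the questions are 'easy' (true probability 0.8), half 'hard' (0.2).
    true_p = 0.8 if random.random() < 0.5 else 0.2
    outcomes.append(1 if random.random() < true_p else 0)
    base_rate.append(0.5)                                  # calibrated, uninformative
    ideal.append(true_p)                                   # the best achievable
    overconfident.append(0.95 if true_p == 0.8 else 0.05)  # informative, miscalibrated

def brier(forecasts):
    return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / n

for name, f in (("base rate 0.5 (well calibrated)", base_rate),
                ("true probabilities", ideal),
                ("overconfident but informative", overconfident)):
    print(f"{name}: Brier score {brier(f):.3f}")
# Expected scores ~0.250, ~0.160, ~0.183: the miscalibrated forecaster who
# uses more information still beats the perfectly calibrated base rate.
```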

      Peter, do you think that regulators of banks and insurance companies understand these issues about probabilistic forecasting skill and have applied them in their regulations and reviews? Do they focus only on calibration of probabilities and, if so, is that because they don’t understand yet or because they do understand but only care about calibration for some reason?
