In a recent (29 July) article in the FT titled “Banks 20 years behind in risk management”, the author cited a survey by Corven, a consultancy, that indicated that “the largest banks and insurers are at least two decades behind their peers in the aviation industry in managing risk.”
The article continued:
“Respondents described 62 per cent of “major risk incidents” as attributable to culture, leadership or behaviour but 91 per cent reported that the response to such incidents had been to change processes and systems. Meanwhile, 93 per cent of financial institutions have no way of measuring culture or behaviour, according to the survey.”
The article itself, as much as the research, is revealing.
First, the title. In a press release the following day discussing the research, Corven actually states:
“Emerging themes suggest that the largest banks and insurers are many years behind their peers in the oil and gas and aviation industry in measuring and changing behaviours in response to major operational risk.” (emphasis added)
The sub-editors at the FT clearly believed “20 years” and “two decades” would grab the reader more effectively than the rather anodyne “many years” and, of course, they were right. Wisely, the reference to the oil and gas industry was also omitted from the article.
It is particularly telling that a survey of only “25 senior risk officers” should attract the attention of the FT at all. The sample is very small and few reliable inferences can be drawn from it. The statement in the FT article that “four percent claimed they were proactive” could equally be rewritten as “one respondent claimed his or her institution was proactive.”
Turning to the findings, “62 percent of ‘major risk incidents’ attributable to culture, leadership or behaviour” means people, pure and simple (and especially in the BIS operational risk categorizations). That 91 percent of respondents changed “processes or systems” is unsurprising; the alternative would be to change people.
However, the truly staggering bit comes next. To repeat: “meanwhile, 93 per cent of financial institutions have no way of measuring culture or behaviour, according to the survey.” The truly interesting piece here is that the other 7 percent (and how many organizations is 7 percent of 25? 1.75, by my calculation) believe they have a way of ‘measuring culture or behaviour’. Given that there is no meaningful way of measuring risk culture (and never will be) and that risk behaviour seldom reduces to objective metrics, it is the understanding of the 7 percent that must be called into question.
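The arithmetic is worth making explicit. A trivial calculation (assuming, as the article states, a sample of 25 respondents) shows how little weight these headline percentages can bear:

```python
# Survey of 25 senior risk officers (as reported by the FT):
# the headline percentages map back to tiny respondent counts.
RESPONDENTS = 25

def headcount(pct: float, total: int = RESPONDENTS) -> float:
    """How many respondents a survey percentage actually represents."""
    return pct * total / 100

print(headcount(4))   # 1.0   -> "proactive" means exactly one respondent
print(headcount(7))   # 1.75  -> "can measure culture" is not even two people
print(headcount(93))  # 23.25 -> "no way of measuring culture or behaviour"
```

At best, every headline percentage in this survey represents the opinion of one or two people.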
The expectation that complex, emergent human phenomena are measurable and that, if they were, the metrics produced would be objective and operable is a comforting delusion, but a delusion nonetheless. But it seems to sell newspapers (and placate regulators).
Of course, the real story is that most banks really are well behind the curve in both their operating systems and their risk and control systems. Our research on systemic risk published in 2010 (available here) highlighted this problem clearly. In some areas, ‘20 years behind present functionality’ may not be an exaggeration. To the institutions themselves must be added central bankers and regulators (payments processors and exchanges are considerably more advanced).
There is no doubt that banks’ and insurers’ systems investment programmes have lagged the exigencies of their business activity, resulting in excessively complex systems architectures, abundant reliance on ‘legacy systems’ and contorted interfaces between systems that are as unreliable as they are unnecessary. The root cause of these problems, however, has not been lack of expenditure; it has been spending on the wrong things. The problem is endemic. The material question is: why?
The answer is as unpopular as it is controversial. At a CSFI event at which I outlined our research and our findings on systemic risk, I was rounded upon by a well-known City figure (among others) for daring to suggest that the answer was right in front of us (or would be if we were standing on the south side of North Colonnade in Canary Wharf, just to the east of the DLR track).
Of course, it is not merely the Financial Services Authority that is to blame; in many ways, it has simply been a conduit. The problem lies in the pretence of efficacy of on-going (dare I say ‘perpetual’?) regulatory change driven by zealous legislators and regulators in Brussels and Whitehall; Brussels, mostly. A similar, though uncoordinated, flow has been evident in Washington and in other financial capitals.
The perpetual torrent of regulation has been a reaction to the on-going string of financial crises and institutional failures that have bedevilled the industry (and always will). The belief has been that these failures are the result of insufficient regulation; the implication is that more regulation is better and will fix the problem(s). This profound assumption has proceeded virtually unchallenged, except by industry bodies and the institutions themselves, who are (rightly) presumed to be self-interested.
The effect of the perpetual torrent of regulation has been to deny financial institutions the opportunity to plan over a reasonable horizon their systems strategies and to execute those strategies uninterrupted. Instead, institutions have had constantly to adjust and revise their systems development and replacement plans to accommodate the changes necessary to permit compliance. This has been especially true in the all-important area of risk management.
This problem has been compounded by constant merger activity and the notorious challenges of integrating systems post-merger. Ironically, it is often mis-handled systems rationalisation that ultimately nullifies the presumed benefits of a planned merger.
For those firms operating across borders, there has been the additional problem of multiple flavours of each regulatory initiative. While, in some instances, this offers these firms the potential for regulatory arbitrage, it complicates the systems picture and results in country-by-country patches to meet the specifics of each jurisdiction’s compliance requirements. For internationally operating firms (i.e. big ones), these divergent regulatory approaches limit the benefits of a unified systems approach and substantially complicate core system replacement.
Another pervasive problem is excessive prescription of control activities in countries’ regulatory approaches. Rather than focusing on what firms must achieve, regulations also frequently prescribe how. The degree of detail prescribed by regulators around the world has two mutually-reinforcing effects: (i) it makes system design and configuration a potentially limiting factor for compliance and (ii) it crowds out firms’ own initiative to develop and implement efficient regulatory responses. Both of these militate against intelligent systems design as a predictable source of competitive advantage. This, in turn, reduces firms’ appetite for and investment in systems replacement, reinforcing the need to divert resources to maintain the compliance currency of legacy systems. The result is predictable.
However, at industry level, the biggest problem, all-but-ignored by regulators, is inconsistent data standards and unreliable data provenance in firms’ securities and customer data. The absence of consistent global standards for securities and customer data bedevils all attempts to improve data quality and, as a result, the quality and reliability of firms’ reporting to regulators. It makes forming a consistent and reliable view of systemic risk functionally impossible. Regulatory fiat, à la Solvency II, will not solve the problem (although it may force the industry to put in place work-arounds, thus creating more complexity).
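To make the identifier problem concrete: the same instrument routinely sits in different internal systems under different identifier schemes, so a naive cross-system reconciliation finds nothing. A minimal sketch (the identifiers, holdings and record layouts below are entirely hypothetical):

```python
# Hypothetical: the same bond held in two internal systems under
# different identifier schemes -- a classic source of reconciliation noise.
trading_book = {
    "US912828U816": 1_000_000,   # position keyed by ISIN
}
risk_system = {
    "912828U81": 1_000_000,      # same position keyed by CUSIP
}

# A naive join by key finds no overlap, though the position is identical.
matched = trading_book.keys() & risk_system.keys()
print(matched)  # set() -- zero matches without a cross-reference

# A shared standard, or a maintained mapping table, is the only fix.
isin_to_cusip = {"US912828U816": "912828U81"}
reconciled = {
    isin: risk_system.get(isin_to_cusip.get(isin))
    for isin in trading_book
}
print(reconciled)  # {'US912828U816': 1000000}
```

Multiply that mapping table by every identifier scheme, every counterparty naming convention and every legacy system in a large firm, and the scale of the data-provenance problem becomes apparent.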
At the regulatory level, national and global regulators have been delinquent in developing and defining a standardized reporting structure for risk exposures across classes. The absence of coherent industry risk data architectures holds back both regulatory efficacy and firms’ investment in their core information technologies.
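For illustration only, a standardized exposure record might look something like the sketch below; the field names and values are hypothetical and do not correspond to any regulator's actual schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class ExposureRecord:
    """Illustrative (hypothetical) standardized risk-exposure report line."""
    reporting_entity: str   # e.g. the reporting firm's LEI
    counterparty: str       # e.g. the counterparty's LEI
    asset_class: str        # e.g. 'rates', 'credit', 'equity', 'fx'
    instrument_id: str      # one agreed identifier scheme, e.g. ISIN
    notional: float
    currency: str
    as_of: str              # ISO 8601 reporting date

record = ExposureRecord(
    reporting_entity="HYPOTHETICAL-LEI-1",
    counterparty="HYPOTHETICAL-LEI-2",
    asset_class="credit",
    instrument_id="US912828U816",
    notional=5_000_000.0,
    currency="USD",
    as_of="2012-08-01",
)
print(asdict(record)["asset_class"])  # credit
```

With a shared record of this kind, a regulator could aggregate exposures across firms and asset classes, which is precisely the systemic-risk view the previous paragraph argues is currently impossible.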
Driven by uncertainty over regulatory direction and initiatives, the result has been a systematic and industry-wide predilection for IT spend to focus on short-term, compliance-driven upgrades to existing systems rather than longer-term strategic systems replacement and enhancement. Firms have then had to stomach criticism from regulators for the very systems management practices their perpetual torrent of regulatory change has fostered.
Recent IT failures at institutions in the UK and further afield are, beyond doubt, the responsibility of the firms themselves. But European and national regulators need to be far more conscious both of their impact on firms’ systems investment decisions and of the utility of defining the data architecture they need to perform effective supervision at firm and system level.
Excessive regulatory prescription is strangling firms’ initiative and turning them into compliance factories. The result weakens rather than strengthens risk management and encourages firms to retain and patch dated systems. Perhaps a regulatory rethink is in order.