A prevailing assumption, among firms and regulators alike, is that misconduct problems can be discovered only after they occur: a ‘detect and correct’ mindset. But we’re beginning to see the emergence of a ‘predict and prevent’ approach to managing conduct risk in organizations.
In her landmark study into the causes of the 1986 space shuttle Challenger disaster, sociologist Diane Vaughan expected to find that NASA managers and engineers had violated established rules and processes. Instead, she found that the disastrous launch decision was arrived at through “conformity to cultural beliefs, organizational rules and norms, and NASA’s bureaucratic, political and technical culture.” She coined the phrase “the normalization of deviance” to describe what had happened at NASA.
By “normalization” Vaughan meant that decision making processes and action choices that deviated from formal established rules were reinterpreted by work groups in a manner that allowed for those choices and actions to sit comfortably within the bounds of established performance norms. That is, “they redefined evidence that deviated from an acceptable standard so that it became the standard,” she wrote.
Conduct & Culture: Drivers of Financial Crime
Vaughan emphasized the “social construction of risk,” and this theme was heard clearly on 23 July in a panel discussion on “Conduct & Culture: Drivers of Financial Crime” hosted by Regulation Asia as part of its Fraud & Financial Crime Asia 2021 conference.
Panelists included Kate Miller, Head of Conduct & Behavioural Analysis at Standard Chartered Bank; Jason Lee, Managing Director for Legal and Regulatory matters at Singapore sovereign wealth fund, Temasek; and Wendy Ennis, Head of Financial Crime Compliance at Mox Bank. The discussion was moderated by London School of Economics professor of organizational psychology, Tom Reader, who studies how norms and behaviors relating to risk management emerge within organizations and contribute to outcomes.
Reader’s current work investigates how organizations can proactively detect and respond to signals of impending and serious failure. In a paper recently published by the Bank of England, he discusses the use of ‘unobtrusive data’ in the course of studying culture within organizations, and the conduct it promotes.
Opening the panel discussion, Reader observed that regulators, boards and management often focus on risk governance “inputs” – like formal value statements, training protocols, etc. – and then examine subsequent performance outcomes – or “outputs” – to see that the inputs performed as expected. Missing entirely is a study of organizational “throughputs,” or data that reflects a firm’s culture-in-action. It is here that we may discover a normalization of deviance.
Since the 2007-2009 Global Financial Crisis, Mox Bank’s Wendy Ennis observed, investigations by regulators and the media into misconduct scandals at financial firms, and subsequent regulatory actions, have placed an emphasis on the personal accountability of senior leaders for setting a firm’s culture and the behavioral predilections such cultures promote. While this is important and perhaps necessary, Ennis argued that we need to dig deeper, by enquiring into values and beliefs as they operate at the individual and collective level.
Formal systems and processes for risk governance, Ennis noted, will fail to produce desired outcomes if not supported by an appropriate culture. While tone from the top is important in this regard, she allowed, “tone from the middle [is] at least as important.” Firms must therefore maintain an atmosphere of “psychological safety” that encourages employees to speak up when they believe things have gone amiss. That is, firms need to ensure that their employees will call out deviance, rather than permit its normalization. Assessing cultural proclivities in this direction, Ennis argued, should be a management priority.
But Reader observed that culture is often viewed as being too “woolly” to submit to formal metrics – a view he clearly challenged. While Kate Miller at Standard Chartered agreed this argument is heard regularly, she argued that it does not withstand scrutiny. Culture, Miller maintained, is both felt and seen day-to-day, and can therefore be observed and studied. Because it is constantly evolving, culture must be studied in this real-time and lived-in context. A mix of behavioral and data science has begun to make this possible, she noted, and banks like hers are beginning to trial the use of tools that leverage these approaches.
Temasek’s Jason Lee pointed out that a concern for culture measurement is not confined to the banking sector. “Banks do not have a monopoly on bad behavior,” Lee noted, pointing to legal and regulatory action in other industries in the wake of countless corruption scandals. In fact, he argued, probing culture for signs of potential misconduct may be more important in industries not subject to the same degree of regulatory scrutiny as that which prevails in the financial sector.
‘Misconduct by design’
Reader observed that people often do things they know to be wrong, yet which somehow did not seem wrong to them at the time they acted. This raises the question of ‘bad apples’ versus ‘bad barrels.’
Miller noted that humans are predominantly social beings and so, understood in that context, it is the barrel that matters. She remarked that, since 2008, we have focused on willful and intentional misconduct that tends to be individual in nature. But what we continue to see day in and day out, Miller argued, is that the workplace environment may erect structural drivers of misconduct. She called this “misconduct by design.”
A key difference in the behavior that is seen in banking as compared to other industries, Reader suggested, is that there is perhaps a greater readiness to take risk in finance. Responding to this, Temasek’s Lee argued that there is nothing new in noting that people may at times face tension between meeting financial goals and doing what is required ethically. How people navigate this tension is driven by firm culture, and this is true in banking and elsewhere.
Building on that view, Ennis argued that misconduct is often not malevolent in nature but, rather, may reflect “a slip in standards.” Covid has heightened the risk of such slippage, she said. The use of technology tools in this context is therefore increasingly important, as is prioritizing discussion of such issues frankly and directly with workplace teams. Reader referred to this as “listening to the mood music of the organization,” and suggested that behavioral science had much to offer in this context.
Managing from the front-foot
Reader commented that it is much easier to pick up on errors of commission rather than errors of omission, pointing specifically to the normalization of deviance in this connection. So how are we to make tell-tale signs of trouble concrete and actionable – before problems arise?
Temasek’s Lee argued that if culture is defined as the norms and values of an organization, such may be fairly seen as intangible in nature. But if conduct is viewed as reflective of such norms and values, then such conduct may be taken as a highly tangible manifestation of organizational culture. That is, we may be able to work backward from observable behavior to the underlying values and norms that promote it.
We focus on the manifestations of culture, Lee continued, arguing that this is perhaps but the tip of the proverbial iceberg. With advancements in AI and machine learning, he argued, we can now get ‘beneath the surface’ and identify things that need proactive investigation and perhaps behavioral science interventions that shape “choice architecture” in ways that are helpful.
Ennis agreed, arguing that the standard tick-the-box checklist approach to risk governance is inadequate. However helpful, such checklists address only what is visible above the waterline. It is more important to delve deeper to learn how tone from the top is landing with staff, subjectively and qualitatively, “in real life.”
Here, Reader noted, firms may make productive use of the “mounds of data” that they already possess, with a view to identifying cultural nuances at work in real-time. Miller built on this argument, calling the use of data in this context “the next generation of risk analysis.” She pointed here to Reader’s work on the use of unobtrusive sources of data.
Advancements in AI create new capabilities for organizations to use their existing data differently, and to derive latent value from it. Reader added that this may help to overcome a key limitation of traditional staff surveys: as culture worsens, people are less and less willing to speak up and, thus, survey responses become less and less reliable as a means of gauging culture. Going forward, Lee argued, measuring culture will involve multiple indicators and approaches to present a more holistic view of the firm.
Key areas of future focus will therefore be: (1) making better use of data with AI and machine learning tools that remove the anecdotal bias of mere “gossip”; (2) bringing key stakeholders into the effort, including investors, government, and the community that an organization seeks to serve; and (3) paying closer attention to staff turnover – particularly in control functions – which may in and of itself serve as an early indicator of pending problems.
In sum, while regulators and firms have in the past viewed misconduct risk as a problem that can only be addressed after harm is done, this ‘detect and correct’ mindset is being displaced rapidly by a ‘predict and prevent’ mindset. Studying culture so as to anticipate trouble proactively positions management to operate from the front-foot: a capability that is increasingly expected.
Stephen Scott is a risk management expert and founding CEO of US-based RegTech firm Starling. To read more of his work, download Starling’s latest annual Compendium, which summarizes trends in the supervision of culture and conduct risk among banking industry regulators and standard setters.