In a recent Bureau van Dijk webinar, industry experts examined how data, technology and AI can help mitigate the risk of criminals benefitting from the Covid-19 cash pump.
Given the plethora of government and financial institution initiatives around the world aimed at quickly doling out billions of dollars and pounds, rushed through by administrations under fire, it may be no surprise that some of that money has ended up in the wrong hands.
Addressing the heightened financial crime risks resulting from Covid-19 in a recent webinar, Bureau van Dijk senior director, compliance solutions, Bill Hauserman said the intention to offer financial support to salaried staff and small businesses was necessary, but highlighted that the US system in particular was unprepared.
“A lot of that money never got to the people that really needed it,” he said. “We’re definitely in a very dangerous area where we could see a trillion dollars disappear and we will still have the same people hurting.”
In Europe, there are also fears that organised criminal networks are exploiting the Covid-19 situation. In the same webinar, Transcendent Group partner Holger Pauco highlighted instances where criminals worked fast to phish companies’ identification information, then used this data to apply for stimulus money to be deposited into different accounts that they controlled.
“What we are seeing is that they [the criminals] are very inventive when it comes to this and sometimes they’re exploiting the online world far faster than the compliance industry,” said Pauco, a specialist in governance, risk and compliance.
Samantha Sheen, founder and managing director of Ex Ante Advisory Limited, pointed to fragmented KYC parameters across Europe that could lead to inconsistencies and gaps which criminals can exploit.
She highlighted a recent letter the UK Financial Conduct Authority (FCA) issued to the CEOs of regulated financial institutions in the country, offering guidance on how they should navigate the challenges brought about by the Covid-19 pandemic.
The letter was interpreted by some as a loosening or relaxation of non-face-to-face KYC requirements. But, as Sheen highlighted, the letter also stated that any risk must be mitigated when completing identity checks on customers.
“The Law Society here in the UK has given very specific instructions about what to do in terms of using those alternatives if people are doing KYC from home, what they need to be able to access to do that proper ID. It’s quite an interesting and varied landscape out there.”
Too much data
To address heightened financial crime risks, organisations have access to more data today than ever before, from both internal and external sources, including ‘unstructured data’ such as adverse media.
“I think one of the most important steps is to utilise all the available internal data for your overview on clients and investigations,” Pauco said, highlighting that many banks today feed all the data they ingest into large data lakes. He also stressed there was still a need to have access to a number of external data sources to fill in gaps.
However, Pauco raised concerns that there is now potentially too much data for existing systems to consume. There is a real need to evolve the IT landscape – especially in the area of investigation and KYC actions – to deal with the large volumes of data and information and avoid information overload, he said.
According to Hauserman, similar inefficiencies that were around a decade ago still persist today. “We are spending way too much time on the data discovery and not enough time on detecting risks, finding the invisible risks. That’s really what Moody’s and Bureau van Dijk have been trying to stress,” he said.
Sheen hopes to see more regulatory clarification in particular on adverse media checks and what data can be relied upon when conducting customer due diligence and KYC checks.
Referring to amendments proposed under the Fifth Money Laundering Directive (5AMLD), she said the guidance on what information can be deemed credible and reliable is not particularly clear.
“It doesn’t make it very clear what those considerations need to be and I think the real challenge is that many criminal cases that I’ve seen involving financial crime have these little breadcrumb trails that start from negative media reporting,” Sheen said.
The AI dilemma
According to Hauserman, there is now too much information to consume by traditional means, requiring the use of technologies such as artificial intelligence (AI) and robotic process automation (RPA) to identify the most important data points. For instance, cross-shareholding information today is so complex and deeply obscured that it would take an enormous number of man hours to unravel, if it could be unravelled at all, he said.
“What the AI has been doing is finding those relationships which were completely invisible before, we’re unmasking these. Those relationships aren’t even visible today, unless you’re using sophisticated technology,” Hauserman said. While there may not be anything illegal in the relationship structures themselves, the sheer complexity requires AI to disentangle the information and offer clarity.
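The unravelling Hauserman describes can be pictured as tracing every chain of holdings through a shareholding graph. The sketch below is a toy illustration only, assuming invented company names and percentages; it is not Bureau van Dijk's actual method, which involves far larger data and machine learning rather than simple traversal:

```python
# Toy sketch: summing the effective stake one entity holds in a target
# company via every chain of intermediate cross-shareholdings.
# All names and percentages are invented for illustration.

def effective_ownership(direct, owner, target, stake=1.0, seen=None):
    """direct maps (holder, held) -> fraction held.
    Returns owner's total indirect stake in target across all chains."""
    seen = set() if seen is None else seen
    total = 0.0
    for (holder, held), frac in direct.items():
        if holder != owner or held in seen:
            continue  # skip other holders and already-visited loops
        if held == target:
            total += stake * frac          # chain ends at the target
        else:                              # follow the chain one layer down
            total += effective_ownership(direct, held, target,
                                         stake * frac, seen | {held})
    return total

# A layered structure that hides the real beneficiary behind two holdcos:
holdings = {
    ("TrustCo", "HoldCoA"): 0.60,
    ("TrustCo", "HoldCoB"): 0.40,
    ("HoldCoA", "Target"):  0.50,
    ("HoldCoB", "Target"):  0.25,
}
# 60% x 50% + 40% x 25% = 40% effective ownership
print(round(effective_ownership(holdings, "TrustCo", "Target"), 6))
```

With a handful of entities this is trivial; the point of Hauserman's remark is that real structures span thousands of entities across jurisdictions, where only automated analysis at scale makes such relationships visible.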
But Hauserman cautions that AI is not something you simply unwrap and then your process is instantly better. “Actually, it’s the opposite. It’s like if you’ve got a one year old and you’re trying to make that one year old into a PhD, you’ve got probably 20 years plus of work to do in teaching that one year old. Artificial intelligence is exactly the same way.”
Sheen noted that the conversation around AI in Europe is moving towards moral considerations of its use. “In fact the European data protection supervisor is also coming into the debate, they want to make sure if you’re going to use AI for predictive modelling, that you don’t unfairly bias people, and your access to the personal information is proportionate,” she said.
According to Sheen, firms that are implementing AI or machine learning as part of their financial crime prevention programmes have had some difficulty integrating the roles of the compliance staff and machines. In one example, a company that engaged a KYC bot ended up spending more time in one-on-one conversations trying to convince people to trust the bot than they spent actually performing KYC processes.
“The firm totally underestimated the amount of time they would need to persuade the compliance people to rely on it [the bot],” she said. “So, I think in some respects using this lockdown time is a great period to start showing people why it works and why it’s really useful.”
The full webinar can be viewed here.
This article was produced in conjunction with the Regulation Asia editorial team.