The report provides insights about the risk management frameworks and overarching supervisory principles banks use when adopting artificial intelligence.
The Hong Kong Institute for Monetary and Financial Research (HKIMR), the research arm of the Hong Kong Academy of Finance (AoF), has published a new report assessing the current state of artificial intelligence (AI) adoption in the Hong Kong banking industry.
The findings are based on an industry-wide survey conducted by the HKMA (Hong Kong Monetary Authority) in August 2019, which found that over 80 percent of participating banks viewed AI adoption as a way of reducing operating costs, improving efficiency and strengthening risk management.
According to the survey, banks in Hong Kong most commonly use AI applications in middle office functions such as AML (anti-money laundering), cybersecurity and KYC (know-your-customer) due diligence. However, they are broadening adoption into key functional areas including front-line businesses, risk management, back office operations and customer services.
Some 80 percent of banks also said they plan to increase investment in AI over the next five years, though they highlighted adoption challenges related to data quality, data protection, model explainability, a shortage of talent, and regulatory issues.
In particular, the report highlights the need to identify potential weaknesses in banks’ cyber defence systems through regular testing, and to assess AI applications for resilience against more sophisticated cyber attacks.
“Strengthening the cybersecurity of the most important and vulnerable operations of banks and enhancing the security features of cloud computing will become increasingly important,” the report says.
In addition, it says robust data governance frameworks can mitigate the risk of data breaches or data flaws, while enhanced model risk management frameworks incorporating big data analytics and machine learning techniques can address risks related to model design and validation.
“The successful implementation of an AI governance framework requires the alignment of objectives with business goals, as well as good communication with major stakeholders,” the report says. “As there is no one-size-fits-all solution, the management needs to remain flexible and pragmatic in the execution of governance policies.”
While more extensive use of AI by banks may help regulators “develop new thinking” when monitoring and assessing risks to financial stability, it also presents new challenges and complexities.
The HKMA, the report notes, has adopted a supervisory approach based on two principles – technology-neutrality and risk-based supervision – as it seeks to foster the development of new AI applications in banking while ensuring the prudent management of technology risks.
> ALSO READ: HKMA Emphasises Consumer Protection in BDAI Use (6 Nov 2019)
The report also points to some of the challenges faced by regulators in supervising AI adoption by banks. “Regulators need to equip themselves with knowledge of AI and data analytics to cope with the increased complexity of AI models used by banks,” it says.
Banks are currently using AI to streamline and automate compliance procedures, particularly in the areas of regulatory reporting and fraud detection. Regulators, meanwhile, are using the technology to enhance their supervisory capacity, including through new initiatives to automate data collection from banks and to explore the feasibility of introducing machine-readable regulations.
Policymakers can foster the proper use of AI by providing a favourable environment and a transparent supervisory framework, the report says, adding that initiatives to strengthen public-private cooperation can also be useful in promoting knowledge exchange, experience sharing and talent development.
The full report is available here.