Peiying Chua explains how financial institutions can ensure their implementation of AI does not run afoul of lawmakers and regulators.
From combating Covid-19 to the development and trialling of driverless vehicles, artificial intelligence (AI) has drastically transformed entire industries like healthcare, mobility and e-commerce.
The financial services sector has also been swept up in this disruption. AI is currently used to varying extents across a range of functions, from risk management, customer onboarding and asset management to trading, insurance and robo-advisory.
The current state of AI in finance
AI is no longer a pipe dream, but a necessity for financial institutions to compete and stay relevant in a hyperconnected and digitised world.
However, the journey has not been entirely smooth. Certain elements of machine-learning deployments have raised legal and regulatory questions (just see the debates surrounding the recently announced Metaverse), and there is fervent debate over how closely these deployments comply with regulatory requirements and how liability should be assigned.
AI is a game-changing technology that presents unique ethical challenges. As a result, AI regulation has barely kept pace with innovation.
Common pitfalls when implementing AI in finance
Looking at innovative technology like AI through tried and tested regulatory lenses is not straightforward. There are pitfalls that financial institutions should be wary of, and tough questions they must be prepared to answer in the case of any hiccups:
- Resilience: What do banks do if a third-party AI provider fails? Can they easily switch providers in the event of a black swan? More data means a greater risk of leaks or breaches – are institutions prepared?
- Ethical deployment of AI: Can you assume that something as complex as AI will “respect” human autonomy and human rights and abide by basic ethical principles, such as prevention of harm, fairness and accountability, while avoiding bias and protecting vulnerable communities?
- Accountability: Who shoulders the responsibility if something goes awry?
- Transparency: How transparent do you have to be with your customers?
AI regulatory landscapes: Singapore vs the EU
At present, Singapore does not have any plans to introduce AI-specific regulation. The approach taken so far has eschewed a prescriptive model, with the introduction of guidelines and non-binding frameworks for market players to adopt as appropriate.
In this regard, deploying AI within the financial sector will require meticulous consideration of existing regulations, especially given the rigorous scrutiny that the Monetary Authority of Singapore (MAS), the central bank and financial regulator, applies to the wider sector, and the unknowns surrounding AI.
There are frameworks in place around governance, controls, risk management, outsourcing, data protection and cybersecurity, against which any potential AI deployment will need to be evaluated for compliance. It is, however, worth considering whether more can be done to strengthen Singapore’s status as a global financial and technological hub.
On the flip side, the EU blazes a trail when it comes to regulation of AI. Its flagship General Data Protection Regulation (GDPR) regulates personal data that fuels AI and is considered a gold standard for data protection and privacy law.
Last year, the European Commission published its legislative proposal for regulating AI, aiming to ensure consistency of rules across the bloc’s member states. The proposal is nation- and industry-agnostic, and would subject high-risk AI systems to measures including regular risk assessments, harm-mitigation measures such as log maintenance, the use of high-quality training datasets, and detailed technical instructions that users of such systems must follow.
This level of detail does not mean that the legislation is perfect, but it is certainly a significant statement of intent and is likely to influence other policymakers around the world.
With more financial institutions looking to rapidly scale their use of AI and machine learning in the year ahead, regulators are looking for board-level engagement and robust governance to enable regulated firms to deal with the challenges that will inevitably arise when deploying these technologies.
Enforcing AI regulation
Financial regulators in various jurisdictions have introduced regimes to enhance individual accountability within the financial services industry. These regimes may also help assign accountability for the use of AI in financial services.
In Singapore, the MAS’s Guidelines on Individual Accountability and Conduct (IAC Guidelines), which apply to regulated financial institutions, aim to promote senior managers’ individual accountability, strengthen oversight of material risk personnel, and reinforce standards of proper conduct among all employees.
This includes specifying responsibility and accountability for AI systems. The IAC Guidelines are likely to be used as a tool for ensuring that firms evaluate AI-related risks and assign responsibility appropriately. Firms leveraging AI need to consider who is ultimately responsible for the technology, be it its deployment, operation or results.
The European Commission plans to consult member-state regulators on the feasibility of developing regulatory guidance on AI applications in finance that would work hand in glove with the bloc’s legislative proposal. This would build on existing reports detailing the impact of data and analytics on the ethical use of AI.
To address the specific risks that AI poses, policymakers worldwide are formulating programmes to promote governance and risk management for AI deployment within financial services, primarily through guidance rather than enforcement.
The MAS has a set of principles promoting fairness, ethics, accountability and transparency (known as the FEAT Principles) in the use of AI and data analytics in finance. These principles guide firms offering financial products and services on the responsible use of AI and data analytics, bolster internal governance around data management, and enhance public confidence in these technologies.
The European Commission is also working on the Digital Operational Resilience Act (DORA), which would require technology service providers to financial institutions to conduct risk assessments, establish incident-reporting processes and allow for information sharing, and would impose strict content requirements on their contracts with financial institutions.
Under DORA, these service providers, deemed critical to the financial sector, would be monitored by a supervisory authority to ensure that they have implemented comprehensive, sound, and effective rules, procedures, and mechanisms to manage the technological risks they may pose to financial institutions.
Navigating new data protection frameworks
Data is essential to AI development, and data protection obligations around AI extend to financial institutions, especially in scenarios with the potential for consumer bias, such as using AI to assess loan eligibility or gauge risk appetite.
While the Singapore Personal Data Protection Act 2012 (PDPA) does not single out AI, it lays out a framework governing the collection, use and disclosure of personal data by private-sector organisations in Singapore, with the aims of safeguarding personal data, buttressing public trust in the digital economy and sparking data innovation.
The PDPA requires an organisation processing an individual’s personal data to have a ground for doing so, such as the individual’s consent, unless an exception under the PDPA applies (for example, where processing is in the national interest or the data is publicly available). Recent amendments to the PDPA also introduced a “business improvement” exception, which may be helpful where AI is used for internal business purposes.
This brings the PDPA closer to the EU’s GDPR, which has been heralded as a watershed piece of regulation for data governance. The GDPR requires organisations to be transparent about the data they collect and how they intend to use it, justify that use, empower individuals to control how their data is used, keep such data secure, and undertake data protection impact assessments whenever processing is likely to pose a high risk to individuals.
There are hefty penalties for flouting these rules, and regulators in the bloc’s member states help keep companies in line. The GDPR is such a standard-bearer that it has inspired similar models in China, India and South Africa.
Minimising risk, maximising opportunities
Where does this leave financial institutions in the two jurisdictions? While recognising what AI can do for their business and customers, organisations need to be progressive and ethical in anticipating the ramifications of these technologies.
Firms must take pre-emptive measures to comply with existing and future AI regulations, and compliance must be considered at every stage of technology adoption. Embedding compliance into the design of AI tools themselves should be a key priority for financial institutions seeking to stay ahead. Once a tool is adopted, ongoing monitoring, open lines of communication with customers and transparency with the authorities will help ensure that institutions do not fall afoul of the law.
Financial regulators generally adopt a technology-neutral approach to enforcing laws and regulations, although they are taking steps to develop regulatory frameworks governing the use of AI. Firms will therefore need to map modern technology against existing laws and regulations, and consider evolving regulatory frameworks at home and abroad, especially if they are seeking to expand and do business beyond their borders.
They need to evaluate how new offerings fit within the regulatory frameworks of different jurisdictions, such as Singapore or the EU, and constantly track developments and changes to those frameworks. Ensuring that legal and regulatory compliance is in good shape from the very beginning can be a game changer for financial institutions looking to fully leverage AI.
—
This article was contributed by Peiying Chua, Financial Regulation Partner; Yong En Koh, Associate; and Sheryl Khoo, Paralegal, at Linklaters in Singapore.
