The financial world has seen ever-growing use of technology over the past decade, with artificial intelligence (AI) emerging as one of the most disruptive innovations in investment management. The expansion of data sources and the availability of AI tools to harness big data are prompting the industry to get up to speed and join the revolution. According to International Data Corporation (IDC), banking was the top technology spender among all industries in Asia Pacific in 2022. Absolute AI spending totaled more than USD 2 billion, with a projected five-year compound annual growth rate (CAGR) of 25% from 2020 to 2025.
Despite the hype, AI in investment management is still in a formative stage. According to a 2021 report by the Hong Kong Institute for Monetary and Financial Research (HKIMR), almost half of Asia-Pacific asset management firms surveyed had no AI or big data applications in production, and a third were in the early phase of AI adoption, having deployed AI or big data in limited use cases. Given this nascency, CFA Institute examines the ethical aspects of AI implementation and provides firms and professionals with a decision framework to guide future developments responsibly.
Opportunities abound in APAC
Opportunities abound from AI adoption in investment management. In the CFA Institute 2022 Investor Trust Study, 340 institutional investors in Asia-Pacific indicated the business activities to which they expect AI and/or big data to be routinely applied over the next three to five years. The top three applications were portfolio management (63%), risk management (58%), and sales and marketing (40%).
In the same survey, 86% of respondents expressed greater interest in investing in a fund that relies primarily on AI and big data tools than in a fund that relies primarily on human judgment to make investment decisions. Compared with the global average of 81%, this result implies that institutional investors in Asia-Pacific place relatively high trust in AI.
Central to trust, however, is the ability of investors to interpret the outcomes of an AI-driven investment approach. When considering governance issues, institutional investors in APAC are primarily concerned with transparency of algorithms, protection of intellectual property rights, and operational risks.
In recent months, the rapid proliferation of AI with little oversight has generated alarm even among supporters. In March, a group of prominent figures in the technology sector, including Elon Musk, signed an open letter calling on the world’s leading AI labs to exercise caution. The letter emphasised that even the creators of AI systems cannot perfectly understand, predict, or reliably control their behavior. As such, it urged a measured approach grounded in ethical considerations and rigorous supervision.
We summarise the ethical considerations below:
1) Data integrity
Data must be checked and cleansed so that they are fit for purpose in an AI programme. Firms should adhere to data privacy laws and protections in the sourcing and use of data; compliance considerations are especially important where developers use unstructured and alternative data. The recent Samsung episode illustrates how AI can threaten the data privacy and security of individuals: in three separate incidents, engineers at Samsung reportedly shared sensitive and confidential data with ChatGPT.
Information bias is another key consideration in this context. In machine learning (ML), categorical data refers to information classified into specific profiles, from which a programme is trained to identify relationships or make predictions. Despite their prima facie objectivity, such classifications can produce biased outcomes when training datasets are incomplete or sampling approaches are inappropriate.
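To see how an inappropriate sampling approach alone can bias an outcome, consider a minimal sketch in Python. The labels, sample sizes, and the naive majority-class "model" are all hypothetical and chosen purely for illustration:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical population: two firm profiles in equal proportion.
population = ["growth"] * 500 + ["value"] * 500

# A biased sampling approach over-represents one profile (180 vs. 20)
# even though the underlying population is balanced.
biased_sample = (random.sample(population[:500], 180)
                 + random.sample(population[500:], 20))

# A naive model trained on this sample "learns" the dominant label.
majority_label = Counter(biased_sample).most_common(1)[0][0]
print(majority_label)  # the over-sampled profile dominates

# Such a model scores 90% accuracy on its own biased sample, yet only
# about 50% on the balanced population -- a biased outcome despite the
# prima facie objectivity of the classification scheme.
```

The defect here is not in the algorithm but in the data fed to it, which is why data integrity checks belong at the start of any AI workflow.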
An AI application needs to be accurate and perform as intended, with good out-of-sample performance when applied to new data. In our “Handbook of AI and Big Data Applications in Investments”, we outline several challenges unique to applying ML in finance. Financial markets are among the most complex human-made systems in the world, with low signal-to-noise ratios.
Another challenge is the small amount of available data: security-level data points number at best around 1,200, compared with the billions or trillions of data points available in some other domains. Lastly, financial markets are challenging for ML applications because they are non-stationary, i.e., they adapt and change over time, unlike, say, the rules governing biology and health, which are relatively static.
Beyond bias or accuracy, generative models have been shown to produce fabricated information, a phenomenon commonly referred to as “hallucination”. Proper design, training, and ethical guidelines are needed to mitigate risks and prevent harm to individuals and organisations.
Interpretable and explainable AI are concepts that are becoming central to the development of AI tools and algorithms. Explainable AI encompasses tools that explain how a certain feature in an AI programme contributed to an outcome or the sensitivity of that feature to the outcome, thus improving transparency and interpretability. Interpretability is most challenging in machine learning algorithms where features are learned in decision layers that are hidden, such as neural networks (the so-called black box problem).
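One common family of explainability tools measures the sensitivity of a model's output to each input feature by shuffling that feature and observing how much the predictions move. The sketch below illustrates this permutation-style check on a hypothetical two-factor scoring model; the feature names, weights, and data are invented for the example and do not represent any particular vendor tool:

```python
import random

random.seed(1)

def model(momentum, value_score):
    # Hypothetical "trained" model: momentum carries most of the weight.
    return 0.9 * momentum + 0.1 * value_score

# Hypothetical evaluation data: 1,000 pairs of factor exposures.
data = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(1000)]
baseline = [model(m, v) for m, v in data]

def sensitivity(feature_index):
    # Shuffle one feature while holding the other fixed, then measure the
    # mean absolute shift in model output. A large shift means the model
    # leans heavily on that feature.
    shuffled = random.sample([row[feature_index] for row in data], len(data))
    outputs = []
    for row, s in zip(data, shuffled):
        args = list(row)
        args[feature_index] = s
        outputs.append(model(*args))
    return sum(abs(a - b) for a, b in zip(outputs, baseline)) / len(data)

print(f"momentum sensitivity: {sensitivity(0):.2f}")
print(f"value sensitivity:    {sensitivity(1):.2f}")
```

The momentum feature shows a far larger sensitivity, matching its dominant weight. With a neural network the weights are hidden, but the same shuffle-and-compare attribution still applies, which is why techniques of this kind are a practical answer to the black box problem.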
Professionals need to evaluate potential trade-offs between model accuracy and interpretability. More complex models can deliver superior performance, but understanding how such a model arrived at its outcomes may be resource intensive.
Sufficient human oversight, governance, and accountability mechanisms must be in place to ensure accurate and appropriate outcomes from the AI programme. This involves ongoing assessment of the programme’s benefits against its costs and risks. Regular reporting and communication with clients are also crucial so that clients understand and trust the workings of the AI application.
An ethical decision framework for AI adoption
Having considered the ethical principles surrounding the use of AI in investment management, CFA Institute suggests a decision-making framework to guide the ethical design, development, and deployment of AI tools.
- The first step is to define the problem, in terms of the issue at hand, target audience, and desired outcomes. A fundamental ethical consideration at this stage is to ensure that the application serves the best interests of the client.
- The next steps are to obtain the input data; build, train, and test the model; and deploy the model while monitoring its performance. Beyond ethical considerations, firms must put in place a broader framework that encompasses leadership accountability, risk management, and collective ownership of IT deployment.
- Finally, firms should ensure the relevant business units possess the necessary knowledge, skills, and competency in AI and data science, as these are sufficiently distinct from the core investment expertise of staff. To develop such expertise, firms should weigh “buy or build” considerations.
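The staged workflow above can be pictured as a series of gated checkpoints, where deployment proceeds only if every stage has passed its ethical and operational review. The following sketch uses hypothetical stage names and checks; it is an illustration of the framework's shape, not an implementation prescribed by CFA Institute:

```python
from dataclasses import dataclass, field

@dataclass
class AIDeploymentChecklist:
    """Tracks pass/fail checkpoints at each stage of the framework."""
    checks: dict = field(default_factory=dict)

    def define_problem(self, serves_client_interest: bool) -> None:
        # Stage 1: the application must serve the client's best interests.
        self.checks["define_problem"] = serves_client_interest

    def prepare_data(self, privacy_compliant: bool, bias_reviewed: bool) -> None:
        # Stage 2: data sourcing respects privacy law; bias is reviewed.
        self.checks["prepare_data"] = privacy_compliant and bias_reviewed

    def build_and_test(self, out_of_sample_validated: bool) -> None:
        # Stage 3: the model is validated on data it was not trained on.
        self.checks["build_and_test"] = out_of_sample_validated

    def deploy(self, monitoring_in_place: bool, accountable_owner: bool) -> None:
        # Stage 4: performance is monitored and accountability is assigned.
        self.checks["deploy"] = monitoring_in_place and accountable_owner

    def approved(self) -> bool:
        # Deployment is approved only if every stage passed its checkpoint.
        stages = ["define_problem", "prepare_data", "build_and_test", "deploy"]
        return all(self.checks.get(s, False) for s in stages)

checklist = AIDeploymentChecklist()
checklist.define_problem(serves_client_interest=True)
checklist.prepare_data(privacy_compliant=True, bias_reviewed=True)
checklist.build_and_test(out_of_sample_validated=True)
checklist.deploy(monitoring_in_place=True, accountable_owner=True)
print(checklist.approved())  # True
```

The gating design choice reflects the framework's intent: a failure at any single stage, such as unreviewed bias in the data, blocks deployment rather than being averaged away by strengths elsewhere.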
For a full exhibit of ethical considerations and key questions professionals should evaluate along each step of the workflow, see “Ethics and Artificial Intelligence in Investment Management: A Framework for Professionals”.
A holistic ecosystem is needed
To ensure client interests are best served, investment professionals bear the onus of providing ethical leadership and ensuring such considerations are embedded in AI-driven solutions. In addition to an ethical decision framework, an organisation’s senior leadership must establish a culture conducive to client-centric AI innovation and collaboration, a robust risk management and governance system, and capacity-building infrastructure in focused areas.
Taken together, these elements provide the most supportive ecosystem for AI to be ethically and successfully employed in the investment context.
By Siva Ramachandran, Director of Capital Markets Policy, India, CFA Institute and Phoebe Chan, Capital Markets Policy Specialist, CFA Institute.