Navigating Hong Kong PCPD’s Model AI Framework

Albert Yuen and Jasmine Yung at Linklaters discuss the Model AI Framework and its implications for the financial services sector.

Artificial intelligence (AI) is revolutionising the financial services industry, as financial services institutions (FSIs) increasingly use AI to automate processes, tailor and develop new financial products and services, increase cost efficiencies and improve productivity. We are already seeing FSIs deploying (or planning to deploy) AI in diverse functions, from handling customers’ queries via AI-powered chatbots (or AI tools that help customer service staff handle customer queries) to assisting with credit assessment, anti-money laundering (AML) compliance, wealth management, fraud detection, cheque processing and cybersecurity management.

With the rapid development of AI, regulators across sectors and jurisdictions are grappling with how to approach AI regulation (including whether current laws are adequate, need review and refinement, or should be supplemented by new AI-specific laws), as well as how best to guide organisations that are considering or implementing AI in their businesses as they develop AI frameworks and compliance regimes.

Against this background, Hong Kong’s privacy regulator, the Office of the Privacy Commissioner for Personal Data (PCPD), published the “Artificial Intelligence: Model Personal Data Protection Framework” (Model AI Framework), a comprehensive, practical best practice guide for organisations (including FSIs) that procure and use AI solutions (including generative AI and predictive AI) to comply with Hong Kong’s data privacy law, the Personal Data (Privacy) Ordinance (PDPO). We assess what the Model AI Framework is and its key implications for FSIs below.

Who does the Model AI Framework affect?

The Model AI Framework targets organisations (including FSIs) operating in Hong Kong SAR that procure, implement or use AI solutions from third party suppliers in their business and process personal data in operating or customising AI systems. Organisations developing their own in-house AI systems using personal data remain subject to the existing Guidance on the Ethical Development and Use of Artificial Intelligence published in August 2021 (2021 AI Guidance).

Both the Model AI Framework and the 2021 AI Guidance are intended to help organisations align with data stewardship values and ethical principles for AI[1]. FSIs that develop their own in-house AI solutions but also procure and use AI solutions from third parties will need to comply with both the 2021 AI Guidance and the Model AI Framework, albeit the Model AI Framework builds on the 2021 AI Guidance, so the additional compliance burden is incremental.

Why is such a Model AI Framework considered necessary?

The PCPD acknowledged that AI technology has advanced rapidly and its application has become increasingly prevalent, so more detailed guidance was needed to help organisations address the challenges AI poses to personal data privacy. There was also recognition that the 2021 AI Guidance, which was aimed at organisations developing their own AI solutions in-house, did not reflect prevailing industry practice.

Instead, the majority of organisations (including FSIs) procure and implement (including customise) third party AI solutions, services and systems, so the Model AI Framework sets clear expectations and practical guidance around accountability and responsibility when implementing and using AI solutions that involve personal data in those contexts.

This is in line with the trend among regulators in jurisdictions such as the EU, mainland China, Singapore and Japan of giving more practical guidance and recommendations on how organisations should use AI solutions, and on how existing data privacy frameworks apply to this newly adopted technology.

Is the Model AI Framework legally binding?

The Model AI Framework is a best practice guidance and recommendations document to help organisations set up adequate AI governance and compliance frameworks when using and implementing AI solutions concerning personal data. As such, it is voluntary and not legally binding on organisations.

However, the PDPO is legally binding on organisations operating in Hong Kong that collect, use, store and transfer personal data. In various areas, the Model AI Framework highlights the Hong Kong privacy regulator’s views and expectations on how organisations (such as FSIs) can address PDPO issues, so there is value for FSIs in reviewing how the Model AI Framework matches their existing AI governance and compliance frameworks.

Key summary of the Model AI Framework

Organisations (including FSIs) are recommended to adopt measures in the following four areas when they procure, implement and use AI solutions, to mitigate the risks arising from the use of AI systems:

  1. AI strategy and governance
  • Establish an AI governance strategy – FSIs should put in place an AI governance strategy which drives the ethical and responsible procurement, implementation and use of AI within the organisation. The strategy should be reviewed and updated based on stakeholders’ feedback and communicated to all stakeholders.
  • Establish diverse AI governance – FSIs should establish an AI governance committee with sufficient resources, diverse expertise and sufficient authority (e.g. led by C-suite personnel such as the CEO, CIO, CTO or CPO) to steer the AI strategy implementation with a clear division of roles and responsibilities.
  • Provide adequate AI training – FSIs should provide AI training to relevant personnel to ensure that AI-related policies are properly applied. The PCPD recommends that the data privacy protection training under a Privacy Management Programme (PMP) could be expanded to also cover the procurement, implementation and use of AI systems and personal data.
  • Develop and consider AI solution procurement policies and diligence – FSIs should consider a list of governance issues in procuring AI solutions, including key privacy and ethical obligations to be conveyed to potential AI suppliers, international technical and governance standards that AI suppliers should follow, and evaluation of AI suppliers’ competence during due diligence.
  2. Risk assessment and human oversight
  • Conduct risk assessment and mitigate risks – FSIs must have a risk assessment system to identify, analyse and evaluate the AI risks throughout the lifecycle of the AI system and adopt mitigating measures proportionate to the risks identified to reduce risks. Risks which cannot be eliminated should be communicated to individuals.
  • Assess privacy risks and broader ethical impact – Risk assessments should be conducted during procurement or when significant updates are made to AI systems. Outcomes should be documented and reviewed in line with AI governance policies. FSIs should consider data privacy risks in the full data collection, use and handling cycle, and consider the broader ethical impact of the use of AI systems on rights, freedom and interests of individuals, the organisation and the wider community.
  • Adopt human oversight proportionate to the risks identified – For AI systems which affect individuals’ legal rights, including AI systems which assess the creditworthiness of individuals or provide recommendations to individuals, FSIs should consider adopting human oversight[2] that is proportionate to the AI risk profile identified (see the illustrative sketch following this list).
  • Obtain information from AI suppliers – FSIs should understand from AI suppliers whether and how human reviewers have been involved in training and developing AI models to reduce adverse risks to individuals, and request that AI suppliers provide explanations of AI output to enable effective human oversight when using the AI system.
  3. Data preparation for customisation and use of AI
  • Assess data governance practices – FSIs should consider the data governance practices of AI suppliers and the sources of the training data to ensure the legality, integrity, robustness and fairness of the AI solution and the output it generates.
  • Comply with the PDPO when preparing datasets – FSIs must comply with the PDPO when preparing datasets for customising and using AI solutions procured from AI suppliers, including minimising the personal data collected and used, ensuring consents have been obtained from individuals, ensuring the reliability and fairness of the data used and documenting the data handling processes to ensure data quality and security.
  • Validate and test AI systems – FSIs must ensure that AI vendors/developers validate, test and secure AI systems before and after deployment. FSIs which implement AI solutions with open-source elements should observe industry best practice in maintaining code and managing security risks.
  • Secure AI systems – Depending on the level of risk, FSIs should take security measures to ensure the AI system is robust, and should manage and continuously monitor AI systems, e.g. by establishing mechanisms to ensure the traceability and auditability of AI systems’ outputs.
  • Incident response plan – FSIs are also advised to prepare an incident response plan which outlines the processes of identifying, containing, investigating and recovering from AI incidents.
  4. Communication and engagement with stakeholders
  • Be transparent about AI uses – Apart from complying with the PDPO in preparing customer-facing disclosures, FSIs are encouraged to be transparent about their use of AI systems and to disclose the results of their risk assessments when communicating with staff, customers and regulators.
  • Handle data access and correction requests – FSIs must handle individuals’ data access and correction requests in relation to personal data processed in AI systems according to the PDPO (including collaborating with AI suppliers).
  • Provide feedback channels to individuals – Where possible, FSIs should provide channels for individuals to provide feedback, seek explanations, request human intervention or opt out from the use of AI.
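To illustrate how risk-proportionate human oversight might be operationalised in practice, below is a minimal Python sketch. It is our own illustration, not part of the Model AI Framework: the risk tier names and the oversight_mode helper are hypothetical labels mapping an assessed risk tier to the three oversight modes described in footnote [2].

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical risk tiers; an FSI's actual taxonomy would come
    from its own risk assessment framework."""
    LOW = "low"            # minimal or low risk, e.g. internal document search
    MODERATE = "moderate"  # non-negligible risk, but full review impracticable
    HIGH = "high"          # affects legal rights, e.g. credit assessment

def oversight_mode(tier: RiskTier) -> str:
    """Map an assessed risk tier to the oversight approach in footnote [2]."""
    return {
        RiskTier.LOW: "human-out-of-the-loop",   # AI may operate autonomously
        RiskTier.MODERATE: "human-in-command",   # humans supervise and can intervene
        RiskTier.HIGH: "human-in-the-loop",      # a human retains decision control
    }[tier]

# Example: a credit-scoring use case affecting individuals' legal rights
print(oversight_mode(RiskTier.HIGH))  # -> "human-in-the-loop"
```

In practice, the mapping and the thresholds between tiers would be set by the AI governance committee and revisited as risk assessments are updated.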

Key challenges and implications for FSIs

While the Model AI Framework serves as a comprehensive guideline (or reminder) for FSIs to take into account in formulating new or uplifting existing AI governance frameworks and compliance practices, there are some key challenges and implications for FSIs.

  • Compliance with the Model AI Framework – Some FSIs, especially multijurisdictional ones, may find it challenging to comply fully with the complex (and at times very detailed, even prescriptive) approach of the Model AI Framework. While the PCPD acknowledges in the guidance that organisations should incorporate the recommended AI governance frameworks within governance frameworks they have already established, even this approach requires time, effort and significant investment given the broad scope potentially impacted (to review, adapt and uplift), including areas such as technology, data privacy governance, procurement and third party vendor management frameworks. Because the Model AI Framework makes recommendations and provides guidance beyond the mandatory data privacy issues related to procuring AI solutions, some FSIs may approach adoption of its recommendations based on the extent to which they relate to personal data and are strictly required under the PDPO.
  • Adopting AI frameworks that maximise flexibility – Given the AI regulatory and compliance landscape is rapidly evolving not only in Hong Kong but regionally and globally, large multinational organisations want to adopt governance frameworks for using AI solutions that maximise flexibility and can quickly adapt to local AI nuances. There are also concerns that adherence to prescriptive, detailed rules and guidance might limit an FSI’s ability to innovate or quickly adopt AI solutions to meet market and customer changes. The challenge is to identify the common AI frameworks and key themes and, where required, incrementally adapt existing AI frameworks and practices as needed. Given the PCPD consulted broadly on the Model AI Framework and claims to have followed international best practice approaches and guidance, FSIs will hopefully find that many of the recommendations are already in place or contemplated. However, FSIs will need to apply an element of judgment in assessing their compliance with the Model AI Framework: while it provides a range of examples and practical processes that could be adopted, it does not prioritise them or indicate which examples are best suited to higher-risk AI solutions.
  • Assistance from third party AI suppliers – With the recognition that AI solutions are increasingly procured and customised from third party AI solution service providers, there are now recommendations to seek further information (including through due diligence) and assistance from those providers, including on the explainability of their AI systems and datasets. While some of the larger, global AI solution service providers are already adapting to their enterprise customers’ concerns and requests, guidelines such as the Model AI Framework will likely mean that FSIs seek further information and assistance from third party AI service providers to support their compliance. As FSIs from time to time procure from more niche, specialised fintech AI providers, there may be challenges and variance in obtaining such information and assistance across different providers.
  • Increasing regulatory scrutiny of AI-related solutions – Notwithstanding that the Model AI Framework is non-binding, voluntary guidance, it comes at a time of increased regulatory scrutiny of compliance issues in implementing and using AI solutions. FSIs should be mindful that in early 2024 the PCPD completed compliance checks on 28 organisations regarding the development or use of AI across a number of sectors, including financial services and insurance, and issued a list of recommended measures consistent with the Model AI Framework’s recommended best practices. With the publication of the Model AI Framework, there is a risk of more frequent similar PCPD investigations. It remains to be seen how the PCPD will monitor or enforce compliance with these guidelines, and whether non-compliance with the recommendations by FSIs that use personal data in implemented AI solutions will give rise to a presumption against such FSIs in any data privacy investigation or compliance check initiated by the Hong Kong privacy commissioner. It is hoped that the Model AI Framework will be applied in a way that is flexible enough to enable FSIs to continue to innovate and pragmatically use AI in their operations.

Next steps – what should FSIs do?

FSIs can undertake a number of steps in response to the Model AI Framework, including:

  • Reviewing existing data governance, accountability, ethics, procurement and risk management frameworks and related policies – The Model AI Framework encourages organisations (including FSIs) to implement the governance frameworks within existing organisational structures. FSIs should undertake a governance mapping exercise to assess key compliance gaps as well as review (and if needed, uplift) existing artefacts, policies and procedures for AI solutions procurement and use in order to achieve compliance.
  • Third party AI supplier assistance – Where assistance is required from third party AI developers, FSIs should review their contracts to ensure that these requirements are reflected. Additional processes and questions may need to be built into FSIs’ third party vendor procurement processes for AI solutions.
  • AI solutions mapping and AI and privacy risk assessments – As an initial step, FSIs should understand the extent, locations and nature of AI use in their operations, including whether AI is procured via third party vendors or developed in-house. This requires an AI solutions inventory and a mapping of key risks against such usage through privacy and AI risk assessments, in particular for large projects involving AI solutions and personal data (a minimal sketch of such an inventory record follows this list).
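As a starting point for the inventory and mapping exercise, the Python sketch below shows one hypothetical shape such a record could take. The field names, the example supplier and the record values are all our own illustrative assumptions; an FSI would align the fields with its own vendor-management and privacy risk assessment templates.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI solutions inventory (illustrative only)."""
    name: str
    business_function: str        # e.g. "credit assessment", "customer chatbot"
    sourced_from: str             # "in-house" or the third party supplier's name
    processes_personal_data: bool
    risk_tier: str                # output of the FSI's own risk assessment
    oversight_mode: str           # e.g. "human-in-the-loop"
    last_assessed: str            # date of the most recent risk assessment

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="RetailCreditScorer",
        business_function="credit assessment",
        sourced_from="ExampleVendor Ltd",  # hypothetical third party supplier
        processes_personal_data=True,
        risk_tier="high",
        oversight_mode="human-in-the-loop",
        last_assessed="2024-06-30",
    ),
]

# Surface high-risk systems that process personal data for priority review
for rec in inventory:
    if rec.processes_personal_data and rec.risk_tier == "high":
        print(f"Priority review: {rec.name} ({rec.business_function})")
```

Even a simple structured inventory like this makes it easier to evidence, for each AI system, where personal data is processed, who supplied the system and when it was last risk-assessed.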

As FSIs navigate an increasing patchwork of regulatory rules in Hong Kong and other jurisdictions, it is increasingly crucial for them to monitor these updates closely and maintain an AI governance framework that allows them to accommodate new regulatory requirements within a standardised and versatile structure.

By Albert Yuen, Counsel & Head of Hong Kong Technology, Media & Telecoms; and Jasmine Yung, TMT Associate – both at Linklaters in Hong Kong.

 

[1] The three data stewardship values are being respectful, beneficial and fair to stakeholders. The seven ethical principles are (1) accountability, (2) human oversight, (3) transparency and interpretability, (4) data privacy, (5) fairness, (6) beneficial AI, and (7) reliability, robustness and security.

[2] FSIs are advised to exercise appropriate human oversight in the form of a “human-in-the-loop” approach for high-risk AI systems, a “human-out-of-the-loop” approach for AI systems with minimal or low risks, and a “human-in-command” approach for AI systems where risks are not negligible but a “human-in-the-loop” approach is not cost-effective or practicable.
