Philip Keller outlines key considerations for banks when establishing a protocol to ensure data can only be shared in a safe, reliable, timely, and complete manner.
Advancements in technology have enabled banks to employ data in ways that benefit both the customer and the bank. These advances have also created multiple challenges, forcing banks to shift their risk management focus towards areas such as data ownership, privacy, and confidentiality. Yet the combination of legislation, regulations, and ethics is often difficult for banks to consider in aggregate.
Despite the challenges, bank executives need to hit the accelerator to keep up with the data-driven digital transformation the industry is heading towards. The best and safest way to achieve this is to establish a data sharing protocol that empowers new partnerships and platforms capable of delivering benefits to stakeholders.
Be warned: defining data is a challenging task for those seeking a common understanding that applies across all situations and jurisdictions. For example, Hong Kong's Cap. 486 (the Personal Data (Privacy) Ordinance) defines data as "any representation of information (including an expression of opinion) in any document, and includes a personal identifier".
In contrast, the EU’s proposed European Data Act defines data as “any representation of acts, facts or information and any compilation of such acts, facts or information, including in the form of sound, visual or audiovisual recordings”. Meanwhile in Switzerland, legislation established for personal data via DSG, SR 235.1 defines data as “all information relating to a specific or identifiable person”, without specifying what may be out of scope, leaving one to assume it covers any type of data.
Even within a single jurisdiction, the difficulty of understanding what constitutes data, and the lack of precision in the defining terminology, can create problems; for banks operating across multiple jurisdictions, those problems compound. To address cross-jurisdictional differences in how data is defined, a principles-based approach to data sharing can be helpful.
Establish a data sharing protocol
A data sharing protocol is a codified set of steps for enabling, maintaining, and discontinuing data sharing with third parties. Requests that do not follow it properly should lead to an "Error 418" scenario. (The 418 response code, "I'm a teapot", originated as an April Fools' specification from the tech community, RFC 2324, in which a server refuses to brew coffee because it is, permanently, a teapot.)
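To illustrate the idea, the sketch below shows a hypothetical gateway check that refuses a data sharing request with status 418 when the protocol's required fields are missing. The field names and function are illustrative assumptions, not part of any specific standard; the `IM_A_TEAPOT` constant is available in Python's standard library from version 3.9.

```python
from http import HTTPStatus  # HTTPStatus.IM_A_TEAPOT exists since Python 3.9

# Hypothetical fields a sharing request must carry before any data is released.
REQUIRED_FIELDS = {"classification", "recipient", "purpose", "sla_tier"}

def authorise_share(request: dict) -> int:
    """Return 200 if the request follows the sharing protocol, else 418."""
    if not REQUIRED_FIELDS.issubset(request):
        # Protocol not followed: refuse to serve the data at all.
        return HTTPStatus.IM_A_TEAPOT  # 418
    return HTTPStatus.OK  # 200

# A request missing most required fields is refused.
print(int(authorise_share({"classification": "confidential"})))  # 418
```

The point of the joke code is serious: a request that does not satisfy the protocol should fail loudly and completely, rather than degrade into partial or unsafe sharing.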
The foundational goal of a data sharing protocol is to promote data awareness across an organisation and ensure only necessary data is being shared – in a safe, reliable, timely, and complete manner.
First, a firm must identify the criticality of the platform or repository used for housing data, followed by an assessment of the corresponding service level support. Knowing what type of Service Level Agreement (SLA) a platform is subject to is important information. The documented standards captured in the SLA help to set expectations for the performed services and quality standards of deliverables under various scenarios. Regulators such as the HKMA specify further requirements for technology outsourcing depending on the criticality of the services provided and the need to safeguard them against unexpected disruptions.
While every organisation will have its own policies specifying the data classification levels assigned to different types of data, these can differ by jurisdiction, regulatory definition, and scope. Firms should identify such discrepancies and determine how any misalignment maps back to the corporate classification levels. This will help synchronise a firm's policies with the level of care mandated by a regulator. Still, firms should internally debate whether the regulatory expectation is truly a comfortable minimum, considering data privacy and any other relevant legislation.
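One way to make such a mapping explicit is to maintain a lookup from jurisdiction-specific regulatory categories to the firm's internal classification levels. The sketch below is purely illustrative: the jurisdiction codes, category names, and level names are assumptions, and a real mapping would be far larger and maintained under governance. Note the defensive default: an unmapped category falls back to the most restrictive level, so it is never under-protected.

```python
# Hypothetical mapping from (jurisdiction, regulatory category) to an
# internal classification level; all names here are illustrative only.
REGULATORY_TO_INTERNAL = {
    ("HK", "personal data"): "Confidential",
    ("EU", "personal data"): "Confidential",
    ("CH", "sensitive personal data"): "Strictly Confidential",
}

def internal_level(jurisdiction: str, regulatory_category: str) -> str:
    """Map a regulatory category to the firm's internal classification."""
    # Default to the most restrictive level when no mapping exists,
    # so unknown categories are never under-classified.
    return REGULATORY_TO_INTERNAL.get(
        (jurisdiction, regulatory_category), "Strictly Confidential"
    )
```

Making the mapping a reviewable artefact, rather than tribal knowledge, also supports the internal debate the article recommends over whether the regulatory minimum is sufficient.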
Specify third-party data
The constraints around the permitted use of external and third-party data (e.g. generated metadata, or newly created data) associated with specific sets of data from origination platforms need to be established, to avoid the risk of potential licence infringements. The reverse, i.e. data associated with destination platforms (where data will flow to), needs an equal level of care. Specifying the permitted use cases for data associated with any third party is important to protect the data. In addition, the type of data ownership and the intellectual property rights of each party within a target platform need to be established.
A minimum for a maximum
A good business case will assist with identifying the necessary data. It is important to understand the role the data has to play, and how it will help achieve the strategic goal of a proposition. Ideally, the data points should be closely aligned with the underlying business priorities and speak to specific business performance metrics. A proof of concept can help test the initial hypothesis against realistic expectations. A good way to do this is with a playbook of scenarios that stress test the business case assumptions.
Make it secure
Encryption helps to keep data safe, but the SHAttered attack (shattered.io) offers a reality check, demonstrating how the widely used SHA-1 hash function has been broken in practice. This highlights that data protection is a continuous process, involving considerations ranging from encryption standards to the choice between software and hardware encryption methods, among other security matters. Ultimately, the limits of securing data will also depend on the protection capabilities third parties can offer, or enable in time.
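The practical takeaway can be shown in a few lines. SHA-1 is still available in standard libraries for legacy interoperability, but since practical collisions exist, integrity checks on shared data should use a current function such as SHA-256. This is a minimal sketch using Python's standard `hashlib`; the message content is an illustrative placeholder.

```python
import hashlib

# Illustrative payload standing in for a data extract shared with a third party.
msg = b"customer-data-extract"

legacy = hashlib.sha1(msg).hexdigest()    # 160-bit digest; collisions are practical
modern = hashlib.sha256(msg).hexdigest()  # 256-bit digest; current good practice

# Hex digest lengths differ: 40 characters for SHA-1, 64 for SHA-256.
print(len(legacy), len(modern))  # 40 64
```

The broader lesson the article draws still applies: whichever algorithm is chosen today must be treated as reviewable, since "secure" is a moving target.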
Purging the data
Beyond contractual provisions, clarity about the process for removing any data stored with a third party is essential. A key outcome measure will need to be defined among the contracting parties, under which they agree and accept the conditions for data to be deemed truly removed.
Firms should specify a minimum set of principles and governance models a party is expected to follow, aligned with the organisation's data classifications. Contractual provisions should be templatised and made known to internal stakeholders. Such artefacts help in two ways: first, they can serve as an internal checklist in the contract drafting process; and second, they can help manage stakeholder expectations on non-negotiable provisions within contracts.
The sharing of data should be subject to an internal approval process appropriate to a bank's structure, considering the responsibilities of different functions, committees, and business unit leaders. Governance should cover the various lifecycle stages, i.e. onboarding, relationship maintenance, and departure. In any case, a clear chain of command with responsibility and accountability is necessary to establish healthy corporate collaboration dedicated to successful outcomes, where the balance of risk and reward is considered in depth.
Philip Keller is a Wealth Strategy and Transformative Projects Expert, as well as a contributor and advisor to Regulation Asia.