06:36 AM 8th May 2025 GMT+00:00
Project Aurora’s Vision for the Future of Financial Crime Prevention
Beju Shah discusses Project Aurora, which explores new ways of combatting money laundering through a data-driven approach.
Reporting by Bradley Maclean

Article Summary
Project Aurora, led by the BIS Innovation Hub, aims to combat money laundering through a data-driven approach, leveraging AI, machine learning, and privacy-enhancing technologies, with a focus on collaboration among stakeholders to address regulatory and privacy challenges.
Stakeholders have expressed positive feedback on the initiative, highlighting the need for information sharing and adaptation to evolving threats, while acknowledging significant hurdles such as regulatory constraints and the complexities of synthetic identities in financial crime detection.
The project will emphasise ethical AI use, transparency, and continuous stakeholder engagement, aiming to create a collaborative ecosystem that enhances compliance and drives innovation in combating financial crime across jurisdictions.
This summary has been produced by RegAI.
Regulation Asia sat down with Beju Shah, Head of Nordics for BIS Innovation Hub, to discuss new ways of combatting money laundering through a data-driven approach that applies AI, machine learning, privacy-enhancing technologies and network analysis.
This interview was conducted for the “AML Tech Barometer 2025” report, published by NICE Actimize and Regulation Asia to explore AML and fraud trends based on survey and interview data collected from 172 practitioners in Asia Pacific.
—
Since the completion of Phase 1 of Project Aurora, what feedback have you received from stakeholders and the industry?
Beju Shah: The feedback has been overwhelmingly positive. Since we published our work in 2023, many stakeholders have expressed curiosity and a desire to rethink their approaches to financial crime. There’s a clear recognition that we need to share information and collaborate on analysis.
Stakeholders have highlighted the importance of adapting to the evolving threat landscape, particularly in light of increasing cyber threats and sophisticated fraud schemes. Many have also noted that the timing of our initiative aligns well with the growing urgency for reform in combatting financial crime.
However, everyone acknowledges the challenges ahead. There is a consensus that while the willingness to collaborate exists, practical implementation will require overcoming significant hurdles, including regulatory constraints, data privacy concerns and the need for standardised frameworks across jurisdictions.
This feedback underscores the importance of engaging with stakeholders as we transition into Phase 2 of Project Aurora, where we aim to address these challenges and further refine our collaborative strategies.
Given the evolving threat landscape and the connection between fraud and privacy, how do you view the intersection of these two issues?
Beju Shah: Protecting privacy is essential, but context plays a critical role. We must avoid scenarios where anonymity enables fraudulent activities. The key is to find a balance between leveraging the necessary data to combat fraud and maintaining robust privacy protections.
While privacy-enhancing technologies hold great promise, experiences have shown that they can be misused to legitimise synthetic identities, allowing them to be layered into datasets undetected. Frameworks like GDPR are instrumental in clarifying data usage and fostering trust with users, ensuring we can effectively address fraud without compromising individual privacy rights.
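To make the idea of privacy-enhancing technologies concrete, the toy sketch below shows two institutions comparing customer lists via salted hashes rather than raw identifiers. This is purely illustrative and is not Project Aurora's method: real deployments use far stronger techniques such as private set intersection or homomorphic encryption, and salted hashing of low-entropy identifiers remains vulnerable to dictionary attacks. The shared salt and email identifiers here are hypothetical.

```python
import hashlib

# Illustrative only: in practice the salt would be negotiated securely,
# and a proper private set intersection protocol would be used instead.
SHARED_SALT = b"demo-salt"

def pseudonymise(identifiers):
    """Replace raw identifiers with salted SHA-256 digests, so two parties
    can compare customer lists without exchanging the identifiers themselves."""
    return {hashlib.sha256(SHARED_SALT + i.encode()).hexdigest() for i in identifiers}

bank_a = pseudonymise({"alice@example.com", "bob@example.com"})
bank_b = pseudonymise({"bob@example.com", "carol@example.com"})

# The intersection reveals how many customers overlap, not who they are.
overlap = bank_a & bank_b
print(len(overlap))  # → 1
```

The point of the sketch is the design trade-off the interview describes: enough shared signal to detect common exposure, without either party disclosing its underlying customer data.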
With the complexities of privacy, how do synthetic identities factor into this discussion?
Beju Shah: Synthetic identities complicate matters significantly. While we want to ensure privacy for legitimate users, the challenge lies in identifying and scrutinising synthetic IDs, which can easily blend into legitimate datasets. These identities are often created using a combination of real and fabricated information, making them difficult to detect. This presents a dual challenge: we must protect the privacy of genuine users while also implementing robust mechanisms to flag and investigate suspicious activities linked to synthetic identities.
Moreover, as synthetic identities become more sophisticated, they pose greater risks to financial systems and consumer trust. We need to further explore and develop advanced analytics and machine learning models that can differentiate between legitimate user behaviour and the patterns exhibited by synthetic identities. This will require collaborative efforts across sectors to share insights and data, ultimately enhancing our ability to combat fraud.
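One pattern analytics can exploit, as a simplified illustration of the detection problem described above, is that synthetic identities often recombine fragments of real records, so their attributes recur across otherwise unrelated profiles. The sketch below scores profiles by how many of their attributes are shared with other profiles; all profile data, attribute names and thresholds are invented for the example, and this is not the project's actual model.

```python
from collections import defaultdict

def shared_attribute_scores(profiles):
    """Score each profile by how many of its attribute values (phone, address,
    national ID) also appear in other profiles -- a common trait of synthetic
    identities, which are assembled from fragments of real records.

    profiles: dict of profile_id -> dict of attribute name -> value.
    Returns: dict of profile_id -> count of attribute values shared with others.
    """
    owners = defaultdict(set)  # (attribute, value) -> profiles using it
    for pid, attrs in profiles.items():
        for key, value in attrs.items():
            owners[(key, value)].add(pid)

    return {
        pid: sum(1 for kv in attrs.items() if len(owners[kv]) > 1)
        for pid, attrs in profiles.items()
    }

profiles = {
    "A": {"phone": "555-0100", "address": "1 Elm St", "nid": "X1"},
    "B": {"phone": "555-0100", "address": "9 Oak Ave", "nid": "X2"},  # reuses A's phone
    "C": {"phone": "555-0199", "address": "1 Elm St", "nid": "X1"},   # reuses A's address and ID
}
print(shared_attribute_scores(profiles))  # → {'A': 3, 'B': 1, 'C': 2}
```

In a real system such a score would be one feature among many in a machine learning model, alongside behavioural and network signals, rather than a standalone rule.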
In Phase Two of Project Aurora, we will focus on this intersection of synthetic identities and privacy, exploring innovative solutions that could ensure privacy protections while improving our detection capabilities. By doing so, we aim to explore how to create a more secure environment for all users without compromising their fundamental rights.
The ethical use of AI is an increasingly urgent topic, particularly given the fragmented and diverse nature of the global landscape. How does this concept integrate into your framework and ongoing projects, and how do you define ethical use in such a complex industry?
Beju Shah: The ethical use of AI is absolutely critical, especially as we navigate the complexities of a fragmented global landscape. Phase Two of Project Aurora will emphasise foundational elements like explainability and transparency in AI systems.
It’s essential that models are not only effective, but also understandable and trustworthy to users, particularly if they are eventually adopted across different regions and cultures.
Ethical AI is not merely a technical challenge; it’s a societal imperative. We must consider the broader implications of our technologies, ensuring that they uphold fairness and accountability. This involves actively engaging with diverse stakeholders to understand their perspectives and needs, thereby creating frameworks that reflect ethical principles applicable in various contexts.
Only by doing so can we foster trust and ensure that our AI applications contribute positively to society while mitigating unintended consequences.
In engaging with stakeholders, including big tech, how do you plan to incorporate their perspectives into your initiatives?
Beju Shah: We have organised a series of workshops that will bring together banks, market infrastructures and public sector agencies. These sessions are designed to foster open dialogue and collaboration, allowing stakeholders to share their insights, challenges and best practices. By creating a space for these discussions, we can ensure that our initiatives are informed by a wide range of perspectives.
After these workshops, we will seek input from private sector technology experts. This step is essential because technology is pivotal in shaping how we approach compliance and combat financial crime. We want to explore innovative solutions that can enhance our capabilities—whether it’s through advanced analytics, machine learning or other emerging technologies.
Furthermore, we will continue to prioritise ongoing collaboration beyond these initial meetings. Establishing a continuous feedback loop will allow us to adapt our strategies as the landscape evolves. By integrating the perspectives from both the financial and tech sectors, we can develop holistic solutions that not only address compliance requirements, but also drive broader industry transformation.
Our goal is to create an ecosystem where technology and finance work hand in hand to improve security and trust while fostering innovation.
What progress do you see in national and cross-border initiatives?
Beju Shah: We’re witnessing a growing recognition of the importance of cross-border collaboration, which is encouraging. Many countries are making notable strides with national fraud portals and AML solutions, marking a crucial step forward. However, developing a cohesive cross-border strategy presents significant challenges.
To truly enhance our efforts, we need to foster deeper cooperation among supervisors and central banks. This involves looking at regulatory frameworks and more effectively sharing best practices and data. While the progress at the national level is promising, international cooperation, such as through the BIS, builds on this foundation to develop strategies that tackle the complexities of global financial crime. Achieving this will require ongoing dialogue, trust-building and a commitment to collaborative solutions across jurisdictions.
Given the different jurisdictions – the Nordics versus others – are you seeing commonalities in high-value use cases?
Beju Shah: Yes, money muling and the cyber threat landscape are consistent themes. Trade-based money laundering (TBML) is also a major concern. Money muling is tough because the intent isn’t clear from basic data; detecting it requires transaction purpose and other identifiers.
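The network-analysis angle mentioned earlier can be sketched for the muling case: mule accounts typically show fan-in from many distinct senders followed by rapid pass-through of almost all funds. The toy detector below flags that flow shape; the account names, thresholds and amounts are all invented for illustration, and real detection would combine this with the transaction-purpose and identity signals the interview mentions.

```python
from collections import defaultdict

def flag_possible_mules(transfers, min_senders=3, passthrough=0.9):
    """Flag accounts with mule-like flow: funds received from many distinct
    senders and almost entirely forwarded on. Thresholds are illustrative.

    transfers: list of (sender, receiver, amount) tuples.
    Returns: sorted list of flagged account IDs.
    """
    inflow = defaultdict(float)
    outflow = defaultdict(float)
    senders = defaultdict(set)
    for src, dst, amount in transfers:
        inflow[dst] += amount
        outflow[src] += amount
        senders[dst].add(src)

    return sorted(
        acct for acct in inflow
        if len(senders[acct]) >= min_senders
        and outflow[acct] >= passthrough * inflow[acct]
    )

transfers = [
    ("v1", "mule", 200.0), ("v2", "mule", 150.0), ("v3", "mule", 250.0),
    ("mule", "offshore", 580.0),   # ~97% of inflow forwarded onward
    ("v1", "shop", 40.0),          # ordinary merchant payment, not flagged
]
print(flag_possible_mules(transfers))  # → ['mule']
```

The "offshore" account is not flagged despite receiving the funds, because it has only one sender; it is the fan-in-plus-pass-through combination that marks the mule node in the network.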
Our Raven project ties into this too. It focuses on building cyber resilience through public-private partnerships, and it is not separate from the fight against financial crime. These worlds are converging; as I see it, cybercrime is financial crime.
How do you perceive the current state of proof of concepts (PoCs) in this space?
Beju Shah: Proof of concept fatigue is indeed a significant concern. While PoCs can be a valuable tool for testing ideas and technologies, they must be approached with a clear purpose and defined objectives. Conducting PoCs without a strategic focus can lead to stagnation and wasted resources.
It’s crucial to implement regular check-ins throughout the PoC process to assess progress and ensure that each initiative is steering us toward actionable solutions. By setting specific goals and measuring outcomes, we can maximise the effectiveness of our PoCs and ensure that they contribute meaningfully to our overall strategy.
This proactive approach will help us avoid fatigue and foster innovation in a more targeted and impactful manner.
—