Fintech / Regtech
02:19 AM 16th February 2026 GMT+00:00
What Singapore’s AML Leaders Are Learning About AI – the Hard Way
Analysis by Bradley Maclean
A closed-door industry roundtable highlights how MAS scrutiny, auditor oversight and data constraints are turning AI adoption from a compliance solution into a more exacting test of execution and accountability.
For much of the past decade, the architecture of anti-money laundering regulation has been broadly settled. The principles are familiar, the language of risk-based compliance is well established, and the expectation that financial institutions maintain robust control frameworks is largely uncontested.
What has changed is how those frameworks are being tested.
In Singapore, a series of high-profile financial crime cases, followed by recent Financial Action Task Force (FATF) onsite inspections, has sharpened supervisory focus on whether AML systems work in practice – consistently, defensibly and under pressure. Increasingly, controls are assessed less by their design than by their outcomes, and less by policy intent than by demonstrable effectiveness.
It is in this environment that artificial intelligence has moved from strategic ambition to operational necessity. Yet discussions at a recent private roundtable hosted by Regulation Asia and Concentrix suggest that AI is not simplifying the compliance challenge. In many cases, it is making it more demanding.
The constraint, participants said, is no longer technological capability. It is governance, data quality and the ability to withstand sustained audit and supervisory scrutiny.
Capacity under strain
Across banks, fintechs and digital asset firms, AML workloads are expanding faster than human capacity can scale.
Continuous KYC, deeper customer risk assessments, higher alert volumes and tighter escalation timelines are colliding with rising expectations around investigative quality and documentation. In stable, mature businesses, institutions can still attempt to align headcount with activity. In more volatile or fast‑growing environments, that approach becomes impractical.
This challenge is particularly acute in digital asset firms. “In traditional financial institutions, you can normally predict volumes and resource accordingly,” said a compliance head at a major cryptocurrency exchange. “With crypto, volumes are not predictable. You can’t just have people on the bench. The only reliable way of managing this is to leverage AI.”
Firms are deploying AI across a widening range of compliance functions – from tracking regulatory communications and triaging transaction monitoring alerts, to supporting sanctions investigations and assisting with the drafting of suspicious transaction reports (STRs). One participant described an internal tool capable of extracting data from multiple systems and auto-populating STRs.
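How such a tool works under the hood was not disclosed, but the general pattern is straightforward to sketch. The following is a minimal illustration, not the participant's actual system: the data stores, field names and draft structure are all assumptions, and the drafted narrative is explicitly left for human review.

```python
from dataclasses import dataclass, field

# Minimal sketch of an STR pre-population pipeline. The source systems,
# field names and draft structure here are hypothetical; a real tool of
# the kind described would depend on the institution's own data model.

@dataclass
class STRDraft:
    customer_id: str
    kyc_summary: str
    transactions: list
    narrative: str                          # drafted by the model, never auto-filed
    status: str = "PENDING_HUMAN_REVIEW"    # an analyst must review and file

def build_str_draft(customer_id: str, kyc_store: dict, txn_store: dict,
                    summarise) -> STRDraft:
    """Pull data from multiple systems and assemble a draft STR."""
    kyc = kyc_store.get(customer_id, {})
    txns = txn_store.get(customer_id, [])
    # The model only drafts the narrative; the filing decision stays human.
    narrative = summarise(kyc, txns)
    return STRDraft(
        customer_id=customer_id,
        kyc_summary=kyc.get("risk_summary", ""),
        transactions=txns,
        narrative=narrative,
    )
```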
The aim, however, is not end-to-end automation. It is targeted relief in the most resource-intensive parts of the workflow.
Automation, with limits
Despite growing reliance on AI, supervisory expectations in Singapore remain clear on one point: accountability cannot be automated.
Participants consistently cited MAS’s insistence on a “human in the loop” approach for any process involving judgment. While risk-based automation may be acceptable for clear false positives, final decisions must remain human-led. “The most important thing MAS is really looking out for is that the final decision is made by a human,” said a senior fintech compliance officer.
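In practice, that guidance maps to a simple gating pattern. The sketch below is illustrative only, assuming a model that scores each alert with a false-positive probability; the threshold and queue names are hypothetical, not MAS-prescribed.

```python
# Sketch of a "human in the loop" triage gate. The 0.99 threshold and
# queue names are illustrative assumptions, not supervisory requirements.

AUTO_CLOSE_THRESHOLD = 0.99  # only clear false positives are auto-closed

def route_alert(alert_id: str, fp_probability: float) -> str:
    if fp_probability >= AUTO_CLOSE_THRESHOLD:
        # Risk-based automation: closures are logged and sampled for QA,
        # so the threshold's effectiveness can be tested and audited.
        return "AUTO_CLOSED_FALSE_POSITIVE"
    # Everything else, including every escalation decision, is human-led.
    return "HUMAN_REVIEW_QUEUE"
```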
This requirement extends beyond decision-making to the development and operation of models themselves. Compliance teams are increasingly required to actively constrain and challenge AI outputs – through prompt design, conservative framing and review processes – to ensure alignment with regulatory expectations. Several participants noted that, left unchecked, AI systems tend toward overly agreeable conclusions.
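One concrete form this takes is in the prompt itself. The framing below is a hypothetical example of the conservative design participants described; the wording is an assumption, not a regulatory or vendor template.

```python
# Illustrative prompt framing intended to counter a model's tendency
# toward agreeable conclusions. The wording and structure are assumptions.

REVIEW_PROMPT = """You are assisting an AML analyst. Do not default to
concluding the activity is benign. For the case below:
1. List indicators consistent with suspicious activity.
2. List indicators inconsistent with it.
3. State what evidence is missing before any conclusion could be drawn.
Flag uncertainty explicitly; the final judgment rests with the analyst.

Case facts:
{case_facts}
"""

def build_review_prompt(case_facts: str) -> str:
    return REVIEW_PROMPT.format(case_facts=case_facts)
```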
The practical implication is that AI accelerates processes, but concentrates responsibility. Institutions deploying it are being forced to define ownership, escalation paths and challenge mechanisms with greater precision than before.
Auditors as gatekeepers
While MAS sets supervisory expectations, participants were clear that external auditors now play a decisive role in shaping how far and how fast AI can be deployed. “When the auditor provides a written report to MAS, that’s basically it,” said a leader at a fintech serving startup clients. “At that point, you no longer control the narrative.”
For newer digital players, this dynamic is particularly challenging. Early-stage firms, with limited operating history and still-evolving controls, can look on paper much like shell entities. Translating those realities into assurance frameworks that auditors are comfortable endorsing can be difficult.
For firms deploying AI, the challenge is different but no less significant. Complex machine learning models can appear opaque, requiring extensive documentation for testing, validation, governance, and exception handling.
“It’s a very difficult journey,” said a participant responsible for delivering AI solutions at a large bank. Successful deployment, he noted, requires not just a model, but a clearly defined operating framework around it – one that demonstrates end-to-end accountability.
As a result, many institutions are now designing AML operating models with auditability in mind from the outset, rather than treating assurance as a downstream exercise.
Data quality sets the ceiling
If governance defines the rules of AI deployment, data quality sets its ceiling. “A lot of organisations are realising that the AI you implement is ultimately a function of the data you feed it,” said Kush Mukherjee, SEA Practice Lead for Financial Crime & Compliance Services at Concentrix. Where data is skewed or lineage is weak, he said, ongoing validation becomes essential.
These constraints help explain why AI has delivered stronger results in fraud detection than in money laundering. Fraud benefits from clearer ground truth – victims confirm outcomes, allowing data to be labelled and learned from. Money laundering rarely does.
“With money laundering, you never really know,” one participant observed. “You have your STR explaining your suspicions, but most of the time you don’t get confirmation, so you could very well be training your model on a wrong assumption.”
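The asymmetry can be made concrete. In fraud, labels come from confirmed outcomes; in AML, the closest available proxy is often whether an STR was filed, which records suspicion rather than fact. A schematic comparison, with hypothetical field names:

```python
# Sketch of the labelling gap described above. "An STR was filed" is a
# proxy label based on suspicion; treating it as ground truth risks
# training a model on unconfirmed assumptions. Field names are hypothetical.

def fraud_label(case: dict) -> int:
    # Fraud: victims or chargebacks confirm the outcome.
    return 1 if case["victim_confirmed_fraud"] else 0

def aml_proxy_label(case: dict) -> int:
    # AML: filing an STR records suspicion only. Law enforcement feedback
    # rarely arrives, so this label may be wrong in either direction.
    return 1 if case["str_filed"] else 0
```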
The limitation is particularly evident in sanctions compliance. Rapidly evolving regimes – such as those imposed following the Russia-Ukraine conflict – can outpace machine learning models that rely on historical data. In such cases, rules-based systems and human expertise may outperform AI in the early stages.
Several firms said AI is proving more effective in the post-alert phase of sanctions work, helping analysts map ownership structures and contextualise risk rather than generating the initial alert itself.
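Ownership mapping of this kind typically reduces to graph traversal. The sketch below aggregates indirect stakes multiplicatively along ownership chains, one common convention; actual sanctions tests, such as OFAC's 50 percent rule, apply their own aggregation logic, and the graph format here is an assumption.

```python
# Minimal sketch of aggregating indirect ownership through a corporate
# structure, the kind of post-alert mapping participants described.
# Assumed graph format: owner -> list of (entity, fractional stake).
# Assumes an acyclic structure; real structures need cycle handling.

def indirect_ownership(graph: dict, owner: str) -> dict:
    """Return each entity's aggregate stake held by `owner`, multiplying
    stakes along ownership chains and summing across parallel paths."""
    totals: dict = {}
    stack = [(owner, 1.0)]
    while stack:
        node, stake = stack.pop()
        for entity, pct in graph.get(node, []):
            effective = stake * pct
            totals[entity] = totals.get(entity, 0.0) + effective
            stack.append((entity, effective))
    return totals

# Example: A owns 60% of B; B owns 50% of C -> A indirectly holds 30% of C.
graph = {"A": [("B", 0.6)], "B": [("C", 0.5)]}
print(indirect_ownership(graph, "A"))  # {'B': 0.6, 'C': 0.3}
```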
No global shortcut
The discussion also highlighted the limits of global standardisation. Even when institutions deploy common platforms across jurisdictions, AI models require local calibration to reflect local language, typologies, national risk assessments, and supervisory expectations. Regulatory attitudes vary widely, complicating efforts to apply uniform controls.
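Architecturally, local calibration often means a shared model with per-jurisdiction overrides. The configuration sketch below is illustrative; the keys, thresholds and typology-pack names are assumptions, not any platform's actual schema.

```python
# Sketch of local calibration over a shared platform: one set of global
# defaults, with per-jurisdiction overrides for language, typologies and
# thresholds. All keys and values here are illustrative assumptions.

GLOBAL_DEFAULTS = {
    "alert_threshold": 0.80,
    "languages": ["en"],
    "typology_pack": "global_baseline",
}

LOCAL_OVERRIDES = {
    # e.g. align with the local national risk assessment and languages
    "SG": {"typology_pack": "sg_nra_pack", "alert_threshold": 0.75},
    "HK": {"languages": ["en", "zh"], "alert_threshold": 0.78},
}

def calibrated_config(jurisdiction: str) -> dict:
    cfg = dict(GLOBAL_DEFAULTS)
    cfg.update(LOCAL_OVERRIDES.get(jurisdiction, {}))
    return cfg
```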
“Not every regulator is open-minded,” said a senior executive at a payments services firm. “Some take a ‘my way or the highway’ approach, which forces difficult trade-offs.”
Julien Dumery, Senior Director at Concentrix, commented that agility remains a central challenge. “You have to make the models agile,” he said. “Managing that agility without losing governance is where institutions are struggling.”
Execution becomes the test
Viewed through a Singapore lens, the industry’s AI dilemma becomes clearer. The central challenge facing financial institutions is no longer whether AML frameworks exist, or even whether advanced technology has been deployed. It is whether operating models can deliver consistent, defensible outcomes under sustained supervisory and audit scrutiny.
AI is increasingly indispensable to that task. It is also increasingly unforgiving of weak data, unclear accountability and under-designed processes.
Singapore may be at the leading edge of this shift, but the pressures shaping its AML landscape – capacity strain, outcome‑based supervision and technology‑mediated risk – are no longer unique.
The debate over whether AI belongs in AML is effectively over. The defining question now is whether institutions can deploy it in a way that stands up to that scrutiny.
In practice, it is governance, data discipline and clearly defined human accountability – not the sophistication of models – that will determine which firms succeed and which fall short.
--
Disclosure: This roundtable and article were produced by Regulation Asia in collaboration with Concentrix, a global technology and services company with more than 36 years of banking experience and deep financial crime compliance expertise across KYC, AML, screening, fraud and disputes.