Agentic AI is here: Are we ready to govern it?
While agentic AI has exciting potential, organisations must govern its use responsibly.
29 October 2025
By Nimish Panchmatia, Chief Data and Transformation Officer, DBS Bank
The financial industry is on the brink of a profound shift. Over the past few years, AI has evolved from rule-based automation to advanced models capable of simulating human reasoning. Now, with the rise of agentic AI - systems which can independently initiate and execute actions - we are entering uncharted territory.
Agentic AI has huge potential across the finance industry. From intelligent chat agents and fraud detection to autonomous advisory tools and credit decisioning, there are many opportunities to create efficiencies and personalise customer experiences.
However, the integration of agentic AI also raises fundamental questions around oversight and trust. As AI begins to exhibit human-like autonomy, we must consider how banks can use these systems responsibly, and how human oversight should come into play.
What does human-supervised governance look like?
True AI autonomy doesn't mean the absence of control; it means applying better, more strategic control. Human-supervised governance sits at the heart of all our AI deployments, ensuring we retain strategic control and oversight. This includes defined escalation paths, audit trails, and fallback mechanisms so that decisions remain explainable, accountable, and aligned with intent. In essence, we aren't delegating responsibility to AI; we're reinforcing the responsibility of the people and systems around it so that it can work most effectively within its scope.
The industry is in a unique window of opportunity. Both McKinsey and Gartner predict that agentic AI will bring change of unusual variety and velocity, substantially reshaping how work is done. However, it has yet to be embedded in every decision or interaction. This gives us the space to embed the right principles before scale overtakes strategy.
Banks need to transform fast to keep pace with technological change, but we are acutely aware that the decisions we make today about what that transformation looks like will define the next decade for financial services. That's why we need to approach rapid transformation with a tried-and-tested framework that ensures development is both effective and ethical.
Embedding responsible AI
At DBS, responsible AI isn't a standalone initiative, it's embedded in how we operate. Over more than a decade of digital transformation, we've built an organisational mindset around trust, transparency, and governance.
The power of AI hinges on the data it uses, which serves as a starting point for good governance. We anchor our AI in our in-house platform called ADA (Advancing DBS with AI). This ensures our AI models are trained on high-quality, well-governed data, a critical first step toward ethical outcomes.
We have also developed, and adhere to, our overarching PURE framework, which governs every AI use case from conception to deployment: all use cases must be considered in light of whether they are Purposeful, Unsurprising, Respectful, and Explainable. Complementing this is our Responsible Data Use framework and a cross-functional Responsible AI Committee, which oversees ethical considerations and risk assessments. Every approach is considered, overseen and evaluated by our people to ensure it stands up to scrutiny and serves its intended purpose.
Across traditional AI, GenAI and now agentic AI, we need to bring our people along on the journey, ensuring they have the skills to be the 'human' who supervises each stage of growth and helps drive it responsibly across the bank. We therefore have over 10,000 staff on tailored learning roadmaps to prime them for the developments in AI and data that are driving the evolution of banking.
Delivering real-world impact
While we continue to explore how agentic AI can be applied within the bank, our overall data and AI approach continues to deliver real-world impact. For example, we use AI and behavioural science to reduce scam losses by detecting high-risk transactions and triggering prompts that encourage customers to pause and reconsider. The AI system monitors transaction patterns such as sudden changes in behaviour and then delivers a carefully designed nudge to interrupt potentially coerced actions. In this case, even though each individual ‘break’ is triggered automatically by AI, the system itself is designed and continuously refined by human experts, including behavioural scientists, fraud analysts, and risk teams.
While sensitive domains like fraud detection and risk assessment are often among the last to embrace AI due to compliance and risk concerns, the humans in the loop here are crucial. They define the thresholds that signal risky transactions, curate the messaging and timing of the nudges, and review outcomes to fine-tune the prompts and avoid customer friction.
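To make the human-in-the-loop division of labour concrete, the pattern can be sketched as a simple rule whose thresholds are owned by people rather than the model. This is a minimal illustrative sketch only: the signal names, threshold values, and logic are hypothetical assumptions for illustration, not DBS's actual detection system.

```python
# Illustrative sketch: an AI-triggered nudge gated by human-owned thresholds.
# All names and values here are hypothetical, not DBS's actual logic.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    avg_recent_amount: float  # customer's typical recent transfer size
    new_payee: bool           # first-ever transfer to this recipient?

# Threshold set and periodically reviewed by fraud analysts, behavioural
# scientists and risk teams -- the humans in the loop.
AMOUNT_SPIKE_RATIO = 5.0  # hypothetical: 5x typical spend looks anomalous

def should_nudge(txn: Transaction) -> bool:
    """Return True if the transaction warrants a 'pause and reconsider' nudge."""
    spike = (txn.avg_recent_amount > 0
             and txn.amount / txn.avg_recent_amount >= AMOUNT_SPIKE_RATIO)
    # Nudge only when an amount spike coincides with an unfamiliar payee.
    return spike and txn.new_payee

print(should_nudge(Transaction(amount=5000, avg_recent_amount=200, new_payee=True)))   # True
print(should_nudge(Transaction(amount=250, avg_recent_amount=200, new_payee=True)))    # False
```

The key design point is that the AI triggers each nudge automatically, but every parameter it acts on is defined, audited, and tuned by people.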
Internally, we are implementing tools that reduce toil for our customer service workforce, while improving security through better access to information. Our CSO Assistant GenAI tool acts as a conversational interface to help security teams retrieve relevant information from policies and guidelines. It doesn't just keyword-match - it interprets intent and navigates through documentation to provide relevant and actionable responses. It operates using role-based access, tailoring its responses based on the user's function and permissions.
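The role-based access idea can be illustrated with a small sketch: retrieval results are filtered by the permissions attached to the user's role before anything is returned. The roles, document tags, and matching step below are hypothetical placeholders, not the CSO Assistant's actual implementation.

```python
# Illustrative sketch of role-based access filtering in a retrieval assistant.
# Roles, tags and documents are hypothetical examples.
ROLE_PERMISSIONS = {
    "security_analyst": {"public", "internal", "security"},
    "contact_centre":   {"public", "internal"},
}

DOCUMENTS = [
    {"title": "Password policy",        "tag": "internal"},
    {"title": "Incident response plan", "tag": "security"},
]

def retrieve(query: str, role: str) -> list[str]:
    """Return matching titles the role may see; unknown roles get public only.

    A real system would rank by semantic relevance rather than substring match.
    """
    allowed = ROLE_PERMISSIONS.get(role, {"public"})
    return [d["title"] for d in DOCUMENTS
            if d["tag"] in allowed and query.lower() in d["title"].lower()]

print(retrieve("policy", "contact_centre"))    # ['Password policy']
print(retrieve("incident", "contact_centre"))  # [] -- tag not permitted
```

Filtering before the response is generated, rather than after, is what keeps answers tailored to a user's function and permissions.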
I believe that responsible AI is more than an ethical obligation, it's a strategic advantage. In financial services, trust is paramount, especially as we enter this era of rapid transformation. Embedding governance, transparency, and human collaboration into AI systems is how we earn and keep that trust. As an industry, we must move forward with clarity and purpose. Let's not wait for regulations to catch up - let's lead responsibly.
This article was originally published on Finextra.