Agentic AI refers to artificial intelligence systems that can independently plan, make decisions and take action to achieve specific goals. Unlike generative AI, which must be guided with prompts to create a given outcome, agents use reasoning and external tools to proactively complete tasks.
Organisations in financial services are understandably cautious about agentic AI, given the constraints that must be built around it to avoid widespread risk. But compliant agentic AI is possible, and we’re going to explore how you can maximise its benefits while adhering to strict regulatory requirements.
Key Takeaways for Agentic AI in financial services:
- Agentic AI systems are autonomous, decision-making networks that can handle complex tasks
- Financial institutions can implement agentic workflows to drive operational efficiency, a better customer experience and intelligent automation
- However, regulatory compliance and risk management are two big considerations before replacing traditional generative AI with agentic technology
How does Agentic AI work?
Agentic AI often works like a chain, with multiple ‘agents’ at work. Each agent has its own purpose in the chain and a set of potential capabilities, from which it chooses the action best suited to achieving its goal.
In financial services, these chains can become a weakness if they pass information without context. It’s like asking a financial advisor to make investments automatically on your behalf while giving them your income but not your spending. They’d have no idea how much surplus you have to invest each month!
Therefore, in the financial sector, Agentic AI works better when agents base their decisions not on a single, isolated input (like the last agent’s decision) but on the full picture.
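To make that concrete, here’s a minimal Python sketch of the idea. The class and field names are illustrative rather than taken from any particular framework: each agent receives a shared context object carrying the full picture, instead of only the previous agent’s output.

```python
from dataclasses import dataclass


@dataclass
class CustomerContext:
    """The shared 'full picture' passed to every agent in the chain."""
    monthly_income: float
    monthly_spending: float
    verified: bool  # has the underlying data been verified?


class InvestmentAgent:
    def decide(self, ctx: CustomerContext) -> str:
        # Reason over the whole context, not just the previous agent's output.
        if not ctx.verified:
            return "escalate: unverified data"
        surplus = ctx.monthly_income - ctx.monthly_spending
        if surplus <= 0:
            return "hold: no investable surplus this month"
        return f"invest: up to £{surplus:.2f} available"


print(InvestmentAgent().decide(CustomerContext(3200.0, 2750.0, verified=True)))
# -> invest: up to £450.00 available
```

Because the agent can see both income and spending, the ‘surplus’ reasoning from the advisor analogy above is always possible, which is exactly what a chain of isolated handoffs loses.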
Let’s imagine how Agentic AI could change the pension consolidation process. It’s a notorious compliance headache, largely due to data fragmentation. Fortunately, Agentic AI simplifies the process by orchestrating the whole search-and-transfer workflow.
Take a look at the difference it can make:
| Pensions consolidation stage | Traditional workflow | Agentic AI workflow |
| --- | --- | --- |
| Identity verification | A team member manually checks the customer’s National Insurance number against your own legacy databases. If the customer has “lost” pots with other providers, your staff often have to coach the customer on how to find old policy numbers or contact details for former employers. | This agent uses verified identity data to autonomously query the Pensions Dashboards ecosystem or common industry ‘find-my-pension’ APIs. It can reason through name changes (like marriage) or address mismatches to find matches that a simple database search might miss. |
| Letters of Authority (LoA) | This can be the biggest bottleneck. Your team must draft, send, and wait for a signed LoA from the customer. Once received, you manually post or email this to the other pension providers to prove you have permission to request data. You are now at the mercy of the other provider’s backlog. A staff member often has to call or email these providers repeatedly to follow up on the transfer value and benefit specification. | Once a pot is found, this agent doesn’t wait for a human to send a letter. It triggers a secure digital Request for Information (RFI) via an API. If the other provider is slow to respond, the agent is programmed to nudge their system at set intervals without any human intervention. |
| Data analysis | Once the data eventually arrives (often as a scan of a paper letter), an admin must manually type those figures into a spreadsheet or your internal system to compare the old pot against the new one and ensure suitability under Consumer Duty. | As data returns (even in unstructured formats), this agent uses Natural Language Processing to extract the key Benefit Specifications. It automatically reveals red flags, like a safeguarded benefit that might make consolidation unsuitable, and maps them against your firm’s risk framework. |
| Review and approval | You, as the manager, then have to review this manual spreadsheet, hoping no typos were made during the data entry phase, before approving the transfer. | This agent reviews the work of the other three agents. If the transfer is straightforward and meets all suitability criteria, it prepares the final transfer pack for the customer to sign digitally. It only alerts the Admin Manager if it detects a complex legal conflict or a high-value ‘protected benefit’ that requires a human expert’s judgment. |
This example shows the power of Agentic AI: it solves problems without human intervention, in a fraction of the time. Agents go beyond following simple logic-based rules and can take in feedback to adjust their behaviour, making them adaptable to various contexts.
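As a rough illustration of how such a chain might be orchestrated, here’s a hedged Python sketch. The four stage functions are hypothetical stand-ins for the agents in the table above, not a real integration with the Pensions Dashboards ecosystem or any provider API.

```python
from typing import Callable

# Hypothetical stage functions standing in for the four agents in the table
# above; each one reads from and enriches a shared case record.


def identity_agent(case: dict) -> dict:
    case["pots_found"] = ["OldCo Pension 1998"]  # e.g. via a find-my-pension API
    return case


def rfi_agent(case: dict) -> dict:
    case["benefit_spec"] = {"transfer_value": 41250.0, "safeguarded": False}
    return case


def analysis_agent(case: dict) -> dict:
    spec = case["benefit_spec"]
    case["red_flags"] = ["safeguarded benefit"] if spec["safeguarded"] else []
    return case


def review_agent(case: dict) -> dict:
    # Only escalate to a human when the earlier agents surfaced a conflict.
    if case["red_flags"]:
        case["outcome"] = "escalate to Admin Manager"
    else:
        case["outcome"] = "prepare digital transfer pack"
    return case


PIPELINE: list[Callable[[dict], dict]] = [identity_agent, rfi_agent,
                                          analysis_agent, review_agent]

case = {"customer": "Jane Doe"}
for stage in PIPELINE:
    case = stage(case)
print(case["outcome"])  # -> prepare digital transfer pack
```

Note how the human only enters the loop when the review stage finds a red flag; everything else runs straight through.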
What’s the difference between Agentic AI and Generative AI?
The fundamental difference is that Generative AI is designed to create content and identify patterns, whereas Agentic AI is designed to independently reason, plan, and execute complex workflows.
While Generative AI might summarise a suspicious communication for a compliance officer, Agentic AI acts as a proactive ‘force multiplier’ that can autonomously route that risk for review or track content across massive datasets to surface what matters most. This shift moves the technology from a creative assistant that responds to prompts into a goal-oriented worker capable of managing end-to-end tasks with transparency.
What are the benefits and considerations for Agentic AI in financial services?
Agentic AI brings an autonomous shift in the financial services industry, but this must be carefully balanced against risk frameworks and guardrails. Even the FCA is focused on reviewing the long-term impact of AI, in order to offer clear recommendations going forward.
Benefits
The key benefits of Agentic AI in finance are:
Adaptability
The outcome, or path of work, is not set. Agents can be flexible based on environmental conditions or the specific goal, which means they can work around problems without the logic breaking.
During a lending affordability check, agents might receive a thin credit file that doesn’t meet their criteria. Instead of automatically rejecting the applicant, an agent can request Open Banking access to analyse the last two years of transaction data, verifying income and spending against the criteria.
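A minimal sketch of that adaptive branch, assuming a hypothetical `request_open_banking` callback that returns categorised transactions (the names and thresholds here are invented for illustration):

```python
def affordability_check(credit_file: dict, request_open_banking) -> str:
    """Adaptive affordability check: a thin credit file triggers a fallback
    to transaction data instead of an automatic rejection."""
    if credit_file.get("history_months", 0) >= 36:
        return "assess on credit file alone"
    # Thin file: adapt the path rather than reject the applicant.
    transactions = request_open_banking(months=24)
    income = sum(t["amount"] for t in transactions if t["amount"] > 0)
    spending = -sum(t["amount"] for t in transactions if t["amount"] < 0)
    return "refer for approval" if income > spending else "decline"


# Stubbed, consent-based data fetch purely for illustration.
def demo_fetch(months: int) -> list[dict]:
    return [{"amount": 2500.0}, {"amount": -1900.0}] * months


print(affordability_check({"history_months": 6}, demo_fetch))
# -> refer for approval
```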
Dynamic paths
Instead of following rigidly coded paths, an AI agent can itself select the correct agents for the task. This gives the system the ability to grow more efficient over time, and even remove redundant capabilities if required.
For example, agents that have been tasked with hitting the £20,000 ISA limit might transfer £1,666 into an ISA in May, near the start of the tax year. But if an agent realises it’s March and there is still £5,400 of ISA allowance remaining, it could transfer the entire amount if the individual’s finances allow for it.
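Here’s what that allowance logic might look like in code. The figures follow the example above (a £20,000 annual allowance works out to roughly £1,666 a month); the function itself is a hypothetical sketch, not a production contribution strategy.

```python
ISA_ALLOWANCE = 20_000.0


def isa_contribution(used: float, months_left: int, surplus: float) -> float:
    """Dynamic-path sketch: drip-feed early in the tax year, then sweep
    the remaining allowance as the April deadline approaches."""
    remaining = ISA_ALLOWANCE - used
    if months_left <= 1:
        # March: last chance, so move the whole remainder if affordable.
        return min(remaining, surplus)
    # Otherwise spread the remainder evenly over the months left.
    return min(remaining / months_left, surplus)


print(isa_contribution(used=0.0, months_left=12, surplus=3_000.0))      # ≈ 1666.67 (May)
print(isa_contribution(used=14_600.0, months_left=1, surplus=6_000.0))  # 5400.0 (March)
```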
Automation
Instead of your system crashing or pointlessly retrying tasks during a peak-time surge, agentic AI simply pauses non-essential work. It choreographs its own schedule, deferring background maintenance until things quieten down, so you never waste processing power on a ‘fail-and-retry’ loop.
If your servers are at 99% capacity while trying to process urgent mortgage applications and also performing non-urgent data cleansing, the system will prioritise the mortgages to prevent a lag.
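A toy version of that prioritisation in Python, using a simple priority queue; the load threshold and task names are invented for illustration.

```python
import heapq

URGENT, DEFERRABLE = 0, 1  # lower number = higher priority


def schedule(tasks: list[tuple[str, int]], cpu_load: float) -> None:
    """Toy scheduler: under heavy load, urgent work runs and deferrable
    work is paused rather than failing and retrying."""
    queue = [(priority, name) for name, priority in tasks]
    heapq.heapify(queue)
    deferred = []
    while queue:
        priority, name = heapq.heappop(queue)
        if cpu_load > 0.95 and priority == DEFERRABLE:
            deferred.append(name)  # pause non-essential work
        else:
            print(f"running: {name}")
    print(f"deferred until quiet hours: {deferred}")


schedule([("mortgage application #4411", URGENT),
          ("nightly data cleansing", DEFERRABLE)], cpu_load=0.99)
```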
Considerations
Financial services is a very cautious industry, and agentic AI introduces significant risks like hallucination and the potential for a ‘black box’ effect, where the logic behind a financial outcome becomes difficult for a human manager to audit in real time.
Moneyhub Solutions Architect, John D., holds a degree in Artificial Intelligence and Robotics. He has been working with different AI iterations since he first joined Moneyhub in 2020. He walks us through some key considerations for users of Agentic AI in financial services:
- Preventing hallucination
- Black box explainability
Preventing hallucination
“Outside of the financial industry, AI is praised for its ability to create new content based on patterns and probabilities. Its randomness and creativity are a feature, not a problem”, says John. “But in the financial services sector, there is zero tolerance for failure, so we need utmost certainty that a given outcome is the correct one.”
Imagine an AI agent seeing a raw text string like ‘Zettle Sunny Days’ and guessing it might be a holiday expense. If it’s wrong, it has hallucinated a luxury spending habit and may unfairly reject a loan application as a result.
Firms must consider how to prevent hallucination with certainty to avoid the reputational and regulatory risks of giving unsuitable advice, taking the wrong actions and basing life-changing decisions on guesses.
John suggests grounding your decisions in verified data, an approach we’ve had success with in our Categorisation and Enrichment Engine. This means ensuring that the raw data agents base their decisions on is completely accurate and verified. He adds:
“Agents don’t have to guess spending patterns and income frequencies based on raw text strings; instead, they reason based on enriched, verified metadata”.
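The contrast John describes might look something like this in code. The enriched record below is invented for this example (it isn’t Moneyhub’s actual schema), but it shows why verified metadata removes the need to guess.

```python
# Ambiguous raw string versus an enriched, verified record. The field
# names below are purely illustrative.
raw_transaction = "Zettle Sunny Days"  # holiday? nursery? café?

enriched_transaction = {
    "raw": "Zettle Sunny Days",
    "merchant": "Sunny Days Nursery Ltd",
    "category": "childcare",
    "verified": True,
}


def assess(txn) -> str:
    if isinstance(txn, str):
        # Nothing to reason over: any category here is a guess.
        return "guess (hallucination risk): possible holiday spending"
    if txn["verified"]:
        return f"grounded decision: {txn['category']} expense"
    return "escalate: metadata present but unverified"


print(assess(raw_transaction))       # guess (hallucination risk): ...
print(assess(enriched_transaction))  # grounded decision: childcare expense
```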
Black box explainability
“In a typical automated system, you can look at the code and see exactly why a decision was made. It’s a transparent ‘if-this-then-that’ logic,” John notes. “But the ‘black box’ nature of some AI models means they can reach a conclusion without a human being able to see the working out. In a highly regulated environment, ‘the computer said so’ simply isn’t a valid answer.”
Imagine a pensions agent autonomously deciding to defer a customer’s pot consolidation. Without explainability, a manager can’t tell if the agent pivoted because it detected a complex safeguarded benefit or if it simply hit a technical glitch.
Firms must consider how to solve the black box problem to avoid the reputational and regulatory risks of failing audit requirements, losing the trust of their customers and being unable to justify automated decisions to the FCA.
John suggests using consent to build in extra layers of security, ensuring that you’re not just relying on AI to be nice, but that it’s cryptographically prevented from being naughty. Consider using code-based guardrails to enforce explainability parameters in order to maintain transparency throughout the system. John adds:
“We don’t just want the final result; we want the full audit trail. By forcing agents to show their work, managers can move from the manual processing grind to high-level oversight with total confidence”.
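One hedged way to ‘force agents to show their work’ is a guardrail wrapper that refuses any decision arriving without a rationale and writes both to an audit log. This is a minimal Python sketch of the pattern, not a description of Moneyhub’s implementation.

```python
import functools
import json
import time


def audited(agent_fn):
    """Guardrail sketch: block any decision that arrives without a
    rationale, and log decision plus reason to an audit trail."""
    @functools.wraps(agent_fn)
    def wrapper(*args, **kwargs):
        decision, reason = agent_fn(*args, **kwargs)
        if not reason:
            raise ValueError(f"{agent_fn.__name__}: decision blocked, no rationale")
        print(json.dumps({"ts": time.time(), "agent": agent_fn.__name__,
                          "decision": decision, "reason": reason}))
        return decision
    return wrapper


@audited
def consolidation_agent(pot: dict):
    # The agent must return its working alongside the decision.
    if pot.get("safeguarded"):
        return "defer", "safeguarded benefit detected; needs human expert"
    return "proceed", "all suitability criteria met"


consolidation_agent({"safeguarded": True})
```

Because the guardrail lives in code rather than in the model, the agent is structurally prevented from returning an unexplained decision, which is the point of John’s “cryptographically prevented from being naughty” framing.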
Want to learn more about the AI guardrails you could implement as a financial organisation?
Agentic AI presents a huge opportunity to the financial services industry, and firms that take full advantage of new workflows will see significant efficiency benefits. But in a risk-first industry, we’re all aware of the importance of controlled, calculated innovation.
If you’d like to learn more about architecting trust in the era of autonomous finance, John has authored a whitepaper dedicated to the security considerations, ethics and techniques you can implement. Moneyhub takes a ‘security by design and ethics by default’ approach, and will walk you through how to bulletproof your logic against the key risks as you roll out agentic AI.
It’s coming soon. But if you want early access, contact the team here.
FAQs
What’s the difference between traditional AI and agentic AI?
Traditional AI typically responds to specific prompts or follows fixed rules to generate content, whereas agentic AI can independently plan, use tools, and execute multi-step tasks to achieve a high-level goal. While standard AI acts as a digital assistant waiting for instructions, agentic AI functions more like a digital employee capable of self-correction and reasoning.
Is ChatGPT an agentic AI?
ChatGPT is primarily a generative AI, but it exhibits agentic properties when it uses features like Advanced Data Analysis or specialised GPTs to browse the web and run code. As the platform evolves with reasoning models and tool-use capabilities, it is moving further away from simple chat and closer to a fully agentic system.
How is agentic AI used in financial services?
Financial institutions use agentic AI to automate complex workflows like autonomous fraud response, where agents don’t just flag a transaction but independently freeze the account, notify the customer, and initiate a recovery ticket.
Written by a human:
John D is currently a Solutions Architect at Moneyhub, having previously served as the company’s Lead of the Machine Learning and Data Engineering team. John values the opportunity to use “tech for good,” and is particularly energised by projects that protect vulnerable users, such as his collaborative work exploring how financial data can be used to identify and support individuals affected by gambling harm.
Outside the world of Open Finance, John is a Director and organiser at Herofest, a major Live Action Role-Playing (LARP) event.