A fundamental shift is underway in financial services, moving past basic generative AI tools toward agentic AI.
While various forms of AI agents are already being deployed across industries, their use in the highly regulated financial services sector is now attracting increasing attention and investment.
This is not an incremental update; it is a different kind of capability—one that raises profound questions for financial institutions about how to manage governance, accountability, and scale.
What agentic AI actually means
Most of the AI conversation in financial services has centered on generative AI: tools that answer questions, surface content, summarize documents, draft communications. That layer has real value. But it is fundamentally passive—it responds when asked.
Agentic AI is different in that it can take sequences of actions across systems, adapt based on what it finds, and complete multi-step tasks without a human initiating each step. In a financial guidance context, that distinction matters.
A generative AI tool can answer a participant's question about their contribution rate. An agentic system can identify that a participant hasn't adjusted their contribution since their salary increased 18 months ago, recognize that they're within a rebalancing window, and surface a personalized prompt at the moment they log in—without waiting to be asked.
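The proactive pattern described here can be made concrete as a simple trigger check. This is an illustrative sketch only; the field names, thresholds, and `Participant` record are hypothetical, not any particular platform's API.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Participant:
    salary_changed_on: date        # date of last salary increase
    contribution_changed_on: date  # date contribution rate was last adjusted
    in_rebalancing_window: bool

def proactive_prompts(p: Participant, today: date) -> list[str]:
    """Return nudges to surface at login; an empty list means stay quiet."""
    prompts = []
    months_stale = (today - p.contribution_changed_on).days // 30
    # Trigger: salary changed more recently than the contribution rate,
    # and the rate has gone untouched for a year or more.
    if p.salary_changed_on > p.contribution_changed_on and months_stale >= 12:
        prompts.append("Your salary changed since you last set your contribution rate.")
    # Only mention the rebalancing window when there is already a reason to engage.
    if prompts and p.in_rebalancing_window:
        prompts.append("You are currently in a rebalancing window.")
    return prompts
```

The point of the sketch is the inversion of control: the system evaluates the participant's state on its own schedule and decides whether to speak, rather than waiting for a question.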
That is a different architecture for how guidance gets delivered and a different set of questions for the institutions considering it.
For us at Addition Wealth, this is the shift from reactive to proactive financial guidance.

Where AI stands in financial services today
NVIDIA's 2026 State of AI in Financial Services survey found that 65% of financial services firms are now actively using AI, up from 45% the prior year. Of those, 89% reported meaningful contributions to revenue or cost reduction. And 42% are currently using or evaluating agentic AI specifically, with 21% reporting full deployment.
At the same time, 62% of AI initiatives across sectors remain in pilot or development phases, with only 7% of financial institutions having scaled AI across their full enterprise.
A March 2026 survey of 650 enterprise technology leaders found that 78% of enterprises have agentic AI pilots underway—but only 14% have reached production scale.
The picture that emerges is one of genuine momentum alongside genuine caution. Both make sense. The technology is moving quickly, and the environments financial institutions operate in (regulated, trust-dependent, high-stakes) make thoughtfulness and precaution appropriate.
The opportunity in financial guidance
The case for agentic AI in financial guidance comes down to a structural gap that has always existed, but is now possible to address differently.
Research from The Financial Brand found that 80% of consumers expect their primary financial institution to help them improve their financial health, yet only 14% believe that is actually happening. That gap is not new, and the underlying reasons are familiar.
Human-delivered guidance does not scale linearly. The average financial advisor manages between 50 and 150 clients. In the workplace and retirement space, a single plan may serve hundreds of thousands of participants. The people with the largest balances or most active relationships have tended to get the most attention. Everyone else receives a more standardized experience—not out of neglect, but because the delivery model has structural limits.
What agentic AI introduces is the possibility of reaching more people with guidance that is actually responsive to their situation. KPMG's analysis of agentic AI in wealth management found that 70% of respondents anticipated using it to deliver personalized guidance that was previously available only to high-net-worth clients. EY's 2025 survey found that 78% of wealth management firms are already exploring agentic AI for deeper operational use.
From the participant side, there is openness too. Invesco's Defined Contribution Participant Pulse Survey found that 55% of respondents would trust an AI-powered tool to manage their retirement investments—more than they would trust a family member. The condition they named was transparency: they want to understand how decisions are being made.
Building responsibly with AI
One of the most important things to understand clearly is what agentic AI can and cannot do in a financial guidance context.
AI systems can identify patterns, surface relevant information, personalize timing and content, and flag moments that warrant a conversation. What they cannot do (and should not be positioned to do) is replace the judgment and accountability of a licensed financial professional.
Regulatory frameworks are being updated to reflect this. The SEC has set AI rules and compliance as a 2026 examination priority, with particular focus on conflicts of interest in investor-facing AI applications. Hogan Lovells' analysis of the regulatory landscape notes that the path forward is governance design—building systems where human oversight is genuine.
The distinction matters in practice because a financial recommendation generated by an AI system is still the responsibility of the advisor or institution that deploys it. The AI surfaces something, but the professional is accountable. Institutions working through agentic AI implementation are finding that the most productive design question is not "how much can the AI decide?" but "where does the AI's role end and the advisor's begin?"
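That boundary can be expressed as an explicit routing rule: the system drafts, but anything resembling a recommendation is held for a licensed professional's sign-off. A minimal sketch, with hypothetical names and a deliberately coarse recommendation test:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    text: str
    is_recommendation: bool  # does it touch money movement or allocation?
    status: str = "draft"

def route(s: Suggestion) -> str:
    """Informational nudges go out directly; recommendations are queued
    for advisor review, keeping the human accountable for the advice."""
    if s.is_recommendation:
        s.status = "pending_advisor_review"
    else:
        s.status = "delivered"
    return s.status
```

The design choice worth noting is that the gate is structural, not a prompt-level instruction: the system cannot deliver a recommendation because the delivery path for that class of output runs through a person.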
Northwestern Mutual's 2025 Planning and Progress Study offers a useful signal here: 47% of Americans said they would prefer to work with a financial advisor who understands and actively uses AI. Not AI instead of an advisor—AI in the hands of one. That framing reflects where participant expectations appear to be settling.
The technology is here. So is the complexity.
AI is not a future consideration for financial services. It is a present one, but it is also worth understanding why the pace has been measured.
The primary reason is rigorous compliance requirements and a culture of caution. But even beyond regulatory hurdles, implementation is genuinely difficult because of legacy infrastructure. Financial institutions carry decades of accumulated systems, and connecting agentic AI to those environments in a way that is reliable, auditable, and scalable takes significant engineering effort. There are also open questions around data governance, model explainability, and how to supervise AI behavior at scale, none of which has a simple, off-the-shelf solution.
And then there is the human side of implementation. AI systems that surface financial guidance are only as useful as the advisor ecosystem around them. PLANADVISER's 2026 analysis found that 94% of industry panelists agreed retirement platforms would move toward AI-generated, hyper-personalized content—but the same conversations consistently return to the importance of the human access point. Speed of deployment matters. So does the quality of what gets built.
The institutions further along tend to be moving with a sense of what the AI is for, where human judgment fits in, and how they will measure whether it is working.
A moment worth watching closely
Agentic AI in financial services is an evolving space, and the institutions working through it are doing so with stakes on both sides: the risk of moving too fast without adequate governance, and the cost of moving later rather than sooner.
What seems clear is that the institutions finding their footing are the ones treating this as a design problem as much as a technology one, thinking carefully about where AI belongs in the guidance relationship, where humans remain essential, and how to build something that participants can genuinely trust.
Those questions do not have universal answers. But they are increasingly worth having in earnest.
Addition Wealth partners with financial institutions to deliver personalized, AI-enabled financial guidance at scale. Learn more