
More human than human? Governing AI and machine identities in financial services

Thursday 26 June 2025 08:00 CET | Editor: Mirela Ciobanu | Voice of the industry

Henrique Teixeira, Senior Vice President of Strategy at Saviynt, shares more about how financial institutions can govern human and machine identities with confidence.

 

Artificial intelligence (AI) is not new, but its mainstream adoption reached a turning point in 2022 with the arrival of OpenAI’s ChatGPT and other generative AI applications. At the same time, identity-based attacks surpassed all other breach vectors, compounding the challenge of a brand-new AI attack surface. Soon after, AI applications evolved to perform more agentic functions, becoming part of a non-human workforce that is reshaping the landscape of commerce and financial services, and the nature of identity security.

KuppingerCole analysts predict that the agentic AI market will grow from USD 5.1 billion to USD 47.1 billion by 2030. Both cybersecurity and identity leaders at financial institutions must be ready to understand the risks and reap the benefits of this ecosystem of non-human identities (NHIs), which includes AI agents but extends well beyond them.

 

The rise of machine identity

NHI is a broad category that spans many sub-categories of identities, including workloads and devices; these are called machine identities. Machine identities, especially workloads, include AI agents, robotic process automation (RPA) bots, and other non-human entities. They now make up the majority of critical actors in digital systems, outnumbering their human counterparts by a factor of roughly 20 to 40.

In banking, automated credit scoring systems, algorithmic trading bots, and customer service AI agents are deeply embedded in operations. AI agents and other machine identities already empower and augment the work of humans. They manage core financial functions – from fraud detection and credit scoring to real-time payments and cloud infrastructure. AI agents are expected to displace many traditional applications altogether and to keep accelerating productivity.

However, these entities can make decisions, access sensitive data, and interact with internal and external systems without the traditional governance structures applied to human identities. Because of their volume and speed of growth, combined with their typically excessive permissions, NHIs require dedicated security and compliance controls.

 

Security risks in the machine era

Machine-to-machine traffic now dominates many digital ecosystems, including financial services. Machines communicate through APIs, manage workflows, and process data at a far greater scale than humans. They are also increasingly vulnerable to exploitation.

Common risks associated with NHIs include:

  • Poor lifecycle management, like improper offboarding of AI apps or machine accounts, leaves dormant accounts exploitable.

  • Secret leakage, where static API keys or tokens are long-lived, hardcoded, and stored insecurely or exposed through code repositories (see the sketch after this list).

  • Overprivileged identities, where bots or agents have more access than necessary.

  • Insufficient access controls, such as the absence of Attribute-Based Access Control (ABAC), Policy-Based Access Control (PBAC), and other fine-grained controls for Model Context Protocol (MCP) servers.

  • Unapproved critical actions, where current AI infrastructure lacks consent flows for high-impact NHI operations, which can lead to destructive outcomes such as submitting unreviewed changes.
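
To make the secret-leakage and over-privilege risks concrete, the minimal Python sketch below contrasts a hardcoded, long-lived API key with a short-lived credential requested at runtime. The VaultClient wrapper, the payments-bot role, and the 15-minute TTL are illustrative assumptions, not any specific vendor’s API.

```python
import datetime

# Anti-pattern: a static, long-lived API key hardcoded in source code.
# If this file lands in a code repository, the secret leaks with it.
PAYMENTS_API_KEY = "sk_live_hardcoded_key"  # the secret-leakage risk described above

# Hedged alternative: request a short-lived credential at runtime.
# VaultClient is a hypothetical stand-in for a secrets manager's SDK.
class VaultClient:
    def get_short_lived_token(self, role: str, ttl_seconds: int) -> dict:
        # A real implementation would call the vault's API and return a token
        # scoped to `role` that expires after `ttl_seconds`.
        expires_at = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(seconds=ttl_seconds)
        return {"token": "<issued-by-vault>", "expires_at": expires_at}

def call_payments_api(vault: VaultClient) -> None:
    # The credential never lives in code or config, and it becomes useless
    # shortly after the workload finishes its task.
    cred = vault.get_short_lived_token(role="payments-bot", ttl_seconds=900)
    print("Using token that expires at", cred["expires_at"].isoformat())

if __name__ == "__main__":
    call_payments_api(VaultClient())
```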

These aren’t theoretical concerns. Several breaches involving NHIs have been documented. These gaps are particularly dangerous in financial environments, where access to sensitive data, customer information, and payment systems must be strictly controlled. Financial institutions, already prime targets for cyber threats, face added pressure.

 

Why traditional identity models fall short

Human-centric identity controls depend on how humans behave and rely on policy enforcement rooted in personal accountability. These controls, however, are insufficient when:

  • Machines can’t use a phone for MFA - Multi-factor authentication (MFA) and biometric checks work well for human interactions, but machines can’t use a smartphone app, respond to an SMS, or have their faces scanned; they authenticate with mechanisms such as short-lived tokens instead (see the sketch after this list).

  • Not all machines are made the same - Traditional tools that monitor and govern RPA bots are designed for predictable, rule-based behaviour; they often fall short when applied to AI agents that operate in non-deterministic ways.

  • There is no source of truth, no ownership, no accountability - Unlike human identities, machine identities lack a central authoritative source like an HR system. This makes visibility the first major challenge in mitigating their risks. Organisations often don't even know where these identities exist. That invisibility is further complicated by a lack of clear ownership since machines and service accounts aren’t held accountable like human users.
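
As a point of contrast with human MFA, the sketch below shows one common way a workload authenticates: an OAuth 2.0 client credentials grant that exchanges a client ID and secret for a short-lived access token. The token endpoint URL, client identifier, and scope are placeholders assumed for illustration, and the client secret itself should come from a vault rather than being hardcoded.

```python
import requests  # third-party HTTP library: pip install requests

# Placeholders: substitute your identity provider's token endpoint and the
# workload's registered client credentials (ideally fetched from a vault).
TOKEN_URL = "https://idp.example.com/oauth2/token"
CLIENT_ID = "fraud-scoring-bot"
CLIENT_SECRET = "<fetched-from-vault>"

def get_machine_token() -> str:
    """Obtain a short-lived bearer token via the OAuth 2.0 client credentials grant."""
    response = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": "payments:read"},
        auth=(CLIENT_ID, CLIENT_SECRET),
        timeout=10,
    )
    response.raise_for_status()
    token = response.json()
    # Typical responses include an expiry, e.g. {"access_token": "...", "expires_in": 900}.
    return token["access_token"]
```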


A strategic response: operationalising ownership

To address these gaps, organisations need a new approach that treats machine identities as first-class citizens in the identity ecosystem. A compelling first step for cybersecurity and identity leaders in finance is to focus on the operationalisation of machine ownership:

  • Find AI with AI: Use intelligent discovery, such as identity security posture management (ISPM) tools, to identify machine identities across hybrid and cloud environments.

  • Protect AI with AI: Vault machine secrets and credentials, enforce privileged access controls, and automate session management.

  • Govern AI with AI: Assign human ownership over machine identities, and establish governance such as access reviews and separation-of-duties (SoD) policies (illustrated in the sketch below).
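
As a minimal illustration of the governance step, the Python sketch below walks a hypothetical machine-identity inventory and flags entries with no assigned human owner or with no activity in the last 90 days – the kind of finding an access review or ISPM tool would surface. The data structure and the 90-day threshold are illustrative assumptions, not a specific product’s schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative record for one machine identity; a real inventory would come
# from an ISPM or identity governance tool, not a hand-built list.
@dataclass
class MachineIdentity:
    name: str
    owner: Optional[str]           # accountable human owner, if any
    last_used: Optional[datetime]  # last authentication or API call

DORMANCY_THRESHOLD = timedelta(days=90)  # assumed review policy

def review_findings(inventory: list[MachineIdentity]) -> list[str]:
    """Flag unowned or dormant machine identities for the next access review."""
    now = datetime.now(timezone.utc)
    findings = []
    for identity in inventory:
        if identity.owner is None:
            findings.append(f"{identity.name}: no accountable human owner assigned")
        if identity.last_used is None or now - identity.last_used > DORMANCY_THRESHOLD:
            findings.append(f"{identity.name}: dormant, candidate for de-provisioning")
    return findings

if __name__ == "__main__":
    sample = [
        MachineIdentity("rpa-invoice-bot", owner="jane.doe", last_used=datetime.now(timezone.utc)),
        MachineIdentity("legacy-report-svc", owner=None, last_used=None),
    ]
    for finding in review_findings(sample):
        print(finding)
```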


A call for governance

Identity is foundational to security. As AI agents and machine identities multiply across digital environments, the same principles that govern human identity must apply to non-human ones. Just as people need verified identities to function in society, machines require defined, secure identities to operate safely within digital infrastructures.

Modern identity security solutions must support this evolution. That means managing both human and non-human identities with equal depth, starting with unified discovery, granular risk analysis, and clear visibility into lifecycle states. These capabilities help organisations identify risks early and respond with precision.

Automation is critical. Identity platforms should enable just-in-time provisioning and de-provisioning, with policy-driven alignment to compliance frameworks such as FFIEC. For financial institutions, this reduces manual effort while maintaining continuous compliance and audit readiness.
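
The sketch below illustrates the just-in-time idea in miniature, under assumed names: access is granted with an explicit expiry attached, and a scheduled clean-up revokes anything past its window. The in-memory grant store and role names are hypothetical; a production identity platform would enforce this through its own provisioning engine and policy mappings.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical in-memory grant store; a real platform would persist grants
# and drive revocation through its provisioning connectors.
active_grants: dict[str, datetime] = {}

def grant_access(identity: str, role: str, minutes: int) -> None:
    """Provision access just in time, with an explicit expiry."""
    expires_at = datetime.now(timezone.utc) + timedelta(minutes=minutes)
    active_grants[f"{identity}:{role}"] = expires_at
    print(f"Granted {role} to {identity} until {expires_at.isoformat()}")

def revoke_expired() -> None:
    """De-provision every grant whose window has closed (run on a schedule)."""
    now = datetime.now(timezone.utc)
    for key, expires_at in list(active_grants.items()):
        if expires_at <= now:
            del active_grants[key]
            print(f"Revoked {key} (expired)")

if __name__ == "__main__":
    grant_access("trading-bot-7", role="settlement-read", minutes=30)
    revoke_expired()  # nothing expired yet; a scheduler would call this periodically
```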

But this isn’t just about tools; it’s about governance. For regulated industries, this governance is essential for compliance with data protection and cybersecurity frameworks. The Financial Conduct Authority (FCA) and European Banking Authority (EBA) are increasingly focused on operational resilience, including the secure use of automation and AI. Identity is a control surface that connects these priorities—enabling visibility, accountability, and proactive control.

Finally, effective solutions must empower security teams to act quickly without disrupting business. Risk-based access reviews, emergency access workflows, and adaptive controls allow faster response to anomalies while preserving oversight and minimising friction in the user experience.

 

Conclusion

As we move toward a future where machines may become more human than human, the core question is no longer whether we can trust them, but how we should govern them. For banks, fintechs, and payment providers, the first step is visibility: identifying and understanding non-human identities today in order to embed machine identity into the broader security and governance framework.

The institutions that will lead in the years ahead are those that apply the same level of rigour, scrutiny, and policy enforcement to machine identities as they do to human ones. Trust begins with control, and control begins with visibility. Now is the time to act. Conversations about NHIs must start today.

 

About Henrique Teixeira

Henrique Teixeira is a seasoned leader with over 25 years of experience in identity and cybersecurity. As Senior Vice President of Strategy at Saviynt, he drives innovative, cloud-first solutions in identity governance and privileged access management to secure enterprises globally. Previously, Henrique was a VP analyst at Gartner and held impactful roles at Microsoft, IBM, Oracle, and more.

 

 

About Saviynt

Saviynt empowers enterprises to secure their digital transformation, safeguard critical assets, and meet regulatory compliance. With a vision to provide a secure and compliant future for all enterprises, Saviynt is recognised as an industry leader in identity security whose cutting-edge solutions protect the world’s leading brands, Fortune 500 companies, and government organisations. For more information, please visit www.saviynt.com.




Keywords: identity verification, non human identity, AI identity fraud, multi-factor authentication, artificial intelligence
Categories: Fraud & Financial Crime
Countries: World