Mirela Ciobanu
13 Aug 2025 / 5 Min Read
Identity expert Emma Lindley warns that the impact of Agentic AI, whether beneficial or harmful, will hinge on solving the identity and trust challenge before this technology becomes deeply integrated into our daily lives.
We are being overrun with robots, but unlike the previous 20 years, when all robots on the internet were considered ‘bad’, we are now entering an age where robots on the internet could also be ‘good’ and helpful. This upcoming wave of artificial intelligence is called Agentic AI, touted as capable of advanced reasoning and step-by-step planning that let it independently tackle intricate, multi-stage problems.
We’ve gone from asking, ‘Can AI answer my question?’ to ‘Can AI handle this whole thing for me?’ If you’re new to the topic and aren’t sure how ChatGPT differs from Agentic AI, I find this explainer useful.
The industry has started calling these systems ‘good bots’ to distinguish them from their malicious cousins. The idea is simple: instead of blocking bots at the gate, we might one day welcome them in, provided we can trust them.
Inside an enterprise, Agentic AI could coordinate tasks across departments, pull data from multiple systems, handle employee onboarding, detect fraud patterns, optimise supply chains, and even negotiate contracts. Outside, it might act as a personal concierge, shopping for the best deals, scheduling appointments, paying bills, or organising events.
The possibilities are enough to get big organisations very excited.
The pitch is seductive: imagine a world where you outsource the boring, repetitive, time-consuming admin of life to an AI agent that works 24/7. Your holidays are planned and booked, your finances are optimised, your fridge restocks itself, your appointments are made, and your reminders are handled without you lifting a finger.
If you’re a productivity junkie, it’s a dream come true. If you’re a company selling these services, it’s an opportunity potentially worth billions.
McKinsey has ‘Seizing the agentic AI advantage’; Visa says ‘AI commerce — commerce powered by an AI agent — is going to transform the way consumers around the world shop’; and PayPal invites you to ‘Power your agentic AI future with PayPal’.
Sounds amazing, where do I sign? After all, think of all the things I could do if someone else were looking after my life for me.
So, if these agents can book a holiday and open bank accounts on my behalf, I’ll need to grant them permissions of some kind: access to my Google Calendar, say, or my email account.
Hang on a minute… I have questions, and so, it turns out, do some other folks in this space. The President of Signal, Meredith Whittaker, has warned of the security (and identity) implications of Agentic AI. In the ‘Delegated Decisions, Amplified Risk’ session at the United Nations AI for Good Summit, she gave an example of the access that AI agents require: ‘To make a booking for a restaurant, it needs to have access to your browser to search for the restaurant, and it needs to have access to your contacts list, and your messages so it can message your friends’.
That’s a lot of access. And let’s imagine the agent is opening a bank account for you. What kind of access would you need to give away then?
Certainly, it makes you think.
One might wonder why we have released Agentic AI without thinking through how we might make it secure, but with capitalism as a driver and the threat of other nations getting there first, here we are…
In theory, this is where industry standards should help. The emerging Model Context Protocol and similar frameworks aim to give agents structured ways to interact with systems. But as it stands, these early standards don’t go far enough.
They tend to focus on functionality: how the agent communicates with APIs, how context is passed, and how permissions are requested. They do not fully address identity and security. Without those foundations, the risk is that your ‘good bot’ becomes someone else’s attack vector.
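To make the gap concrete, here is a toy sketch in Python. Every field name and the signing scheme are my own illustrative assumptions, not part of MCP or any real protocol; it simply contrasts a bare scope request with an identity-bound delegation grant that a relying party can verify:

```python
import hashlib
import hmac
import json
import time

# A bare permission request, as early agent protocols tend to model it:
# it says WHAT is wanted, but not who delegated it, or to which agent.
bare_request = {"scopes": ["calendar:write", "contacts:read"]}

SECRET = b"demo-shared-secret"  # stand-in for a real key or certificate

def sign_grant(unsigned_grant: dict, key: bytes) -> str:
    """Sign the grant's fields so tampering is detectable."""
    payload = json.dumps(unsigned_grant, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

# An identity-first delegation grant binds WHO delegated, to WHICH agent,
# for WHAT, and UNTIL WHEN -- and carries a signature over all of it.
grant = {
    "delegator": "user:emma",               # the human principal
    "agent": "agent:smith-7f3a",            # the specific agent instance
    "scopes": ["calendar:write"],           # least privilege, not blanket access
    "expires_at": int(time.time()) + 3600,  # short-lived by default
}
grant["signature"] = sign_grant(dict(grant), SECRET)

def verify_grant(grant: dict, key: bytes) -> bool:
    """A relying party checks both the signature and the expiry."""
    unsigned = {k: v for k, v in grant.items() if k != "signature"}
    expected = sign_grant(unsigned, key)
    return (hmac.compare_digest(expected, grant["signature"])
            and grant["expires_at"] > time.time())

print(verify_grant(grant, SECRET))  # prints True
```

The point isn’t the crypto; it’s the shape of the data. The grant names the delegator, the specific agent, the minimal scopes, and an expiry, so the ‘good bot’ can prove whose good bot it is, and for how long.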
What is clear is that identity and security have a critical role to play in Agentic AI.
The good news is that in the identity world, this isn’t a new problem; it’s called Delegated Authority. The bad news is that enterprises and other organisations are not very good at it (yet). Just ask anyone who has had to go through the Power of Attorney process for a relative with Alzheimer’s; they will know this all too well. We have to solve it, and Agentic AI is going to be a catalyst.
The best and brightest minds in identity and payments are starting to work on it. The clever folks over at The Identity Salon have already done some good thinking on how this can be extended to agents, so that you know your Agent Smith is yours, not someone else’s, or Agent Smith gone bad.
My good friend Dave Birch, Jelena Hoffert (Mastercard), and Kirsty Rutter (Lloyds) have also been doing work in this space and have coined the phrase KYA or Know Your Agent. And Jamie Smith is looking at things through the customer lens. If you are not following these folks yet, you should.
At its core, Delegated Authority, or KYA, needs a governance framework. This ties in with ethics: what can we delegate, and what should we delegate?
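As a toy illustration of that governance question (the action names and the policy lists are entirely hypothetical), a framework might default-deny unknown actions and keep certain actions human-only, no matter how valid the delegation itself is:

```python
# Hypothetical governance policy: "can we delegate?" is a question of
# valid credentials; "should we delegate?" is a question of policy.
DELEGABLE = {"book_restaurant", "reorder_groceries", "schedule_meeting"}
HUMAN_ONLY = {"open_bank_account", "sign_contract", "grant_power_of_attorney"}

def may_delegate(action: str) -> bool:
    if action in HUMAN_ONLY:
        return False            # policy says a human must do this themselves
    return action in DELEGABLE  # default-deny anything unlisted

print(may_delegate("book_restaurant"))    # prints True
print(may_delegate("open_bank_account"))  # prints False
print(may_delegate("launch_missiles"))    # prints False (unknown -> denied)
```

The design choice worth noting is the default-deny: an agent acting under delegation should only do what is explicitly permitted, not everything that isn’t explicitly forbidden.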
Plenty of questions remain.
Agentic AI is inevitable. Safe Agentic AI is not. On the one hand, we have strong economic incentives, great productivity potential, and high consumer appetite. On the other, the costs of getting this wrong, economically, societally, and politically, are very high indeed. If we do get it wrong, we could easily render a potentially very helpful technology useless, or worse, weaponise it against ourselves. Mitigating that risk is well within our grasp.
As individuals, we’ll need to get comfortable with the concept of KYA. As organisations, we’ll need to invest in identity-first security for agents. As an industry, we’ll need to create standards that treat identity and trust as first-class citizens, not afterthoughts.
Bottom line: The robots are here. They might be good. They might be bad. The difference will depend entirely on whether we solve the identity and trust problem before Agentic AI becomes woven into the fabric of our lives.
About author
Emma Lindley MBE is a globally recognised leader in fraud, AI, payments, and digital identity. With over 20 years’ experience spanning finance, ecommerce, and government, she has held senior roles at Visa and GBG and co-founded two successful businesses. Named among the UK’s 100 Most Influential Women in Tech, she advises governments and global enterprises on emerging technologies, growth strategies, and M&A, blending strategic vision with deep industry expertise.
The Paypers is the Netherlands-based leading independent source of news and intelligence for professionals in the global payments community.
The Paypers provides a wide range of news and analysis products aimed at keeping the ecommerce, fintech, and payment professionals informed about the latest developments in the industry.