If there was one thread running through the Sunday sessions I attended, it was trust. Not the abstract kind, but the practical, high-stakes version that keeps banks and payment networks up at night.
Chad Hamblin of Microsoft dropped the first jolt: phishing attempts are up 800% in the past 12 months. Fraud, he said, has shifted from a cottage industry of bad actors to a crowd-sourced economy of AI-armed opportunists.
And now comes a stranger frontier: agentic commerce. Samant Nagpal from Gusto noted that by 2030, we’ll see some 6 billion AI agents transacting on our behalf. That’s when the familiar question, “Are you a bot or a human?”, morphs into something trickier: “Are you a good agent or a bad one?”
Which leads to one of my favorite soundbites of the day: KYA – Know Your Agent.
For decades, we’ve built financial ecosystems around KYC – Know Your Customer. But as autonomous agents start negotiating payments, booking travel, or moving funds, institutions now need to verify not just who the customer is, but who (or what) is acting for them.
It’s an entirely new layer of trust infrastructure taking shape before our eyes.
Both Visa and Mastercard seem to see it coming:
- Visa recently unveiled its Trusted Agent Protocol to help merchants authenticate AI agents in commerce flows.
- Mastercard introduced its Agent Pay Framework, setting out early rules for registering and verifying AI agents so that transactions can be tokenized, auditable, and aligned with the human they represent.
These are the first brushstrokes of what could become a whole new compliance category – a future where banks and networks perform due diligence not just on customers and counterparties, but on the autonomous agents acting in their stead.
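To make the idea concrete, here is a minimal, hypothetical sketch of what "Know Your Agent" verification could look like in code. Everything here is an assumption for illustration: the credential format, the field names, and the use of a shared-secret HMAC (real schemes like Visa's Trusted Agent Protocol or Mastercard's Agent Pay would rely on PKI, tokenization, and network-side registries, not a toy shared key).

```python
import hmac
import hashlib
import json

# Hypothetical sketch: the principal (the human customer) issues a signed
# "agent credential" naming the agent and the actions it may take. A merchant
# or network can then verify both the signature and the requested action's
# scope before honoring a transaction from that agent.

def issue_credential(principal_key: bytes, agent_id: str, scope: list) -> dict:
    """Principal side: sign a claims object binding the agent to a scope."""
    claims = {"agent_id": agent_id, "scope": scope}
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(principal_key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_credential(principal_key: bytes, cred: dict, action: str) -> bool:
    """Verifier side: check the signature AND that the action is in scope."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(principal_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["sig"]) and action in cred["claims"]["scope"]

key = b"secret-shared-between-customer-and-network"  # stand-in for real key material
cred = issue_credential(key, "travel-agent-01", ["book_flight"])
print(verify_credential(key, cred, "book_flight"))    # in-scope action: True
print(verify_credential(key, cred, "transfer_funds")) # out-of-scope action: False
```

The point of the sketch is the shape of the check, not the crypto: the verifier isn't asking "is this the customer?" but "is this agent the one the customer authorized, doing something the customer allowed?" — which is exactly the new layer KYA adds on top of KYC.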
It’s fascinating to watch trust being re-architected in real time. As AI agents grow more capable – initiating transfers, negotiating rates, maybe even comparison-shopping across banks – the industry’s challenge won’t just be stopping fraud. It’ll be preserving confidence in an ecosystem where intent itself can be simulated.
Tomorrow’s trust, it seems, will depend less on whether a customer is who they say they are, and more on whether their agent is doing what they meant it to do.