
Fintech Layer Cake
Welcome to Fintech Layer Cake. A podcast where we slice big Financial Technology topics into bite-sized pieces for everybody to easily digest. Our goal is to make fintech a piece of cake for everyone. Fintech Layer Cake is powered by Lithic — the fastest and most flexible way to launch a card program.
Simon Taylor on Agentic Payments and Opportunities
In this episode of Fintech Layer Cake, host Reggie Young welcomes back Simon Taylor—fintech thought leader, writer of Fintech Brainfood, and Head of Strategy & Content at Sardine.
They explore the rapidly emerging world of agentic payments and what Visa’s new tokenization framework means for AI-driven commerce. Simon also unpacks real-world use cases, from “vibe shopping” to automated business purchasing, and what hurdles—like consumer trust and fraud—still stand in the way.
The conversation then shifts to Sardine’s groundbreaking Agentic Oversight Framework, which shows how AI agents can safely handle complex compliance tasks like KYC.
Whether you’re building AI tools, working in compliance, or just curious about the next frontier in payments—this is an essential listen.
Reggie Young:
Welcome back to Fintech Layer Cake, where we uncover secret recipes and practical insights from fintech leaders and experts. I'm your host, Reggie Young, Chief of Staff at Lithic. On today's episode, I chat with Simon Taylor, the author of the widely read Fintech Brainfood and head of strategy and content at Sardine.
In the past six months or so, the AI conversation has changed from talking about AI to talking about agents and agentic workflows. For folks who might not be familiar, an agent refers to an AI application that can actually engage with the digital world and take actions for you. So instead of just giving you text answers or images, it can go on websites and take actions for you, like booking a reservation. At Lithic, we've been thinking and preparing ourselves to support this agentic payment transition, so this is very top of mind for us.
Simon joined me to talk about two big AI agent topics. First, Visa announced a path for supporting AI agentic payments with a new tokenization framework. And second, Sardine released a cutting-edge white paper laying out how to use AI agents safely in AML and related workflows based on actual case studies they've done. Both of these are exciting and game-changing developments in fintech, so you don't want to miss this episode.
Fintech Layer Cake is powered by the card-issuing platform, Lithic. We provide financial infrastructure that enables teams to build better payments products for consumers and businesses. Nothing in this podcast should be construed as legal or financial advice.
Simon, welcome back on the podcast. You're one of the few multi-time guests I've had, so I'm excited to get you back on and chat some more. There are a lot of fun, exciting topics flying around in fintech right now, which is the reason I reached out to get you on, because I know you're a great person to chat about all of them with.
I think Sardine's put out an awesome white paper that I want to get to, which folks should absolutely read and be aware of. But before then, I'd love to jam on agentic payments. We're recording this Friday, May 2nd. This past week, there was a lot of news from the card networks. Mastercard on Tuesday put out a press release. Visa on Wednesday had a big product demo and press release around a new framework for agentic payments, specifically a new tokenization framework. In essence, I think their idea is you give an AI agent authorization to make a purchase based on like, hey, here's a scope of authorization.
The ecosystem creates a token for that agent that says, hey, this agent was allowed to spend in these sort of use cases under these circumstances. And that agent goes and makes purchases for you, passes that token. Instead of your actual card, it's like digital wallets, passing tokens, not the actual PAN numbers, all that kind of stuff. Then the merchant can run that token instead of your actual card number if it's within the agent's authorized scope. Again, it's like this funny, big overlaying consideration for agentic payments is like, what's the authorized scope of an agent? When do they go outside that scope? How do you handle that? It's kind of a new frontier that just tickles my lawyer brain, a lot of fun questions there.
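The scope question Reggie is describing can be sketched in a few lines. This is purely an illustrative Python sketch of the concept: the class, field names, and `authorize` function are my own stand-ins, not Visa's or Mastercard's actual token API.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentToken:
    """Stand-in for a network token issued to a specific AI agent."""
    agent_id: str
    allowed_categories: frozenset  # e.g. merchant category codes the agent may spend in
    max_amount_cents: int          # spend ceiling per purchase


def authorize(token: AgentToken, merchant_category: str, amount_cents: int) -> bool:
    """Approve a purchase only if it sits inside the token's authorized scope."""
    return (merchant_category in token.allowed_categories
            and amount_cents <= token.max_amount_cents)


# A token scoped to grocery purchases up to $200:
token = AgentToken("shopper-agent-1", frozenset({"grocery"}), 20_000)
in_scope = authorize(token, "grocery", 4_500)          # inside the agent's scope
out_of_scope = authorize(token, "electronics", 4_500)  # outside the agent's scope
```

The interesting part, as the conversation notes, is everything this sketch leaves out: who defines the scope, what happens on a boundary case, and who eats the liability when an agent steps outside it.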
So fun, exciting. It's great to see. There's been a lot of headlines about AI, a lot of potential for AI and payments. We've seen some application in agentic fraud and AML stuff, which we will get to in the Sardine white paper discussion. I find this really exciting. This is kind of the first really substantive progress news I've seen in the payments card side of things, so super excited to see this news today.
I think my first question is, is this going to work? Do you think the tokenized AI framework is going to gain purchase and be taken up?
Simon Taylor:
Well, let's see. The honest answer is I don't know. But generally, the pattern is e-commerce, we saw it come along, and the card networks were going to be displaced by PayPal, but they weren't. What ended up happening is people ended up putting their 16-digit card number and their expiry, typing it in directly. So the card networks were just fine. And then it turned out that wasn't very secure, and so they came up with this tokenization idea.
Then mobile commerce. Mobile commerce was absolutely going to kill the card networks because there were going to be all these new things like Apple Pay, but it ended up just reinforcing the card networks and kind of coming back around. And this is when mobile payments actually started to leak into digital wallets and Apple Pay, where you can now add your card to your Apple Pay wallet, to your Google Pay wallet. And when you do that, you tokenize the card itself.
Now the vast majority of traffic on Visa and Mastercard's networks is in fact tokens. It's not the classic magstripe stuff. It's a major upgrade from where we've been. There's network tokens. There's processor tokens. It's tokens all the way down. There's a Token Layer Cake piece that Chuck Yu over at VGS wrote on Brainfood a little while ago, and I thought it was a really good explanation of like, actually, there's network tokens and then there's all of these other ones. If you're a VGS or a Basis Theory, they live and breathe that stuff, and it gets quite technical and quite complex.
If you follow that pattern, then it's quite logical that- just as I turned a card into a token and was able to put it into an Apple wallet, can I turn a card into a token and put it into an agent? And when I put it in there, I know it came from that agent. I'm able to authenticate that agent and know which particular agent it was because it's got a unique token. And that unique token can have properties associated with it, to your point about what it was not allowed to do and allowed to do, a little bit like a virtual card.
And I think the fact that both card schemes announced this, the fact that it follows Stripe and PayPal and a few others who have SDKs and were sort of doing a kludgy thing with virtual cards, this really brings it into like, here are the tools you can build with. So we'll see if this gets any volume, because right now, OpenAI's Operator is hiding inside of a box, and there is no agentic commerce volume whatsoever. Or if there is, most people don't know it exists. And it's probably bad actors at this point. The only ones sophisticated enough to be buying clothes with agents are going to be fraudsters. So we're not doing that. The consumer-grade tool is not there. And the business-grade tool, frankly, is just not there. So will it work? It's a great question.
There's always that little part in the back of my mind that was like, do you remember IoT? Do you remember machine-to-machine commerce? There's a chance that when everybody's talking about a thing, eventually, you kind of go, ugh. But the other side of my brain goes, I remember this with mobile payments. I'm old now, man. I remember Nokia were going to have SIM cards that were going to do mobile payments. Mobile payments were just the thing that would never, ever, ever, ever happen. And then Apple Pay came on, and then they slowly started to happen.
If you wind the clock forward far enough ahead, yes, you can see that agentic payments are coming. The problem with AI is it just has this annoying habit of going, boom. Now the world has changed again. And now you can have everything in Ghibli art style, and you can generate any image you want. Your family are now all in anime. AI just does that stuff.
So if that happens with agentic commerce, it's just, pfft, OpenAI wakes up tomorrow, chooses violence, and boom, you get a personal shopper. Then it'd be kind of handy to have these tokens. There's a little bit of me that's like, people are testing this, though, because have you seen the amount of things that can go wrong with Model Context Protocol? Have you seen how easy it is to do prompt injection? Have you seen how much could go wrong if an agent just starts buying the wrong crap? What does the liability framework look like if your agent does a chargeback?
Reggie Young:
It's a lot of really fun, novel, thorny questions to think through. I like your analogy. Is it more like IoT, or is it more like mobile payments? That framework will evolve, but it's fairly unknown. Visa's proposal is a great one, a phenomenal starting place. It's also like, okay, we'll see what the market does. The card networks are just going to put out, here's how we propose things will run, but ultimately, it'll be kind of, are consumers comfortable? I think that trust layer is step one. Do you get consumers comfortable and businesses comfortable trusting agents?
Simon Taylor:
And once you've got the tokens, you can start playing with it. What I thought was interesting about Visa's announcement is they announced a few partners: OpenAI, Samsung, and a few others. I think Arcade was in there, the agent-building toolkit. It's definitely very well-baked; they've clearly been thinking about it. So this is not an amateur move. This is somebody that's been thinking about it for a while. Mastercard was a day earlier. It also did tokenization. It did not have a lot of partners.
Reggie Young:
I had a good chuckle at that. I had a good laugh at that for sure.
Simon Taylor:
Classic card networks, whatever-you-can-do type of stuff. But yeah, you're right that nobody knows, but it does feel like AI will do that thing where it just goes, it's suddenly here. And being able to test with network tokens is actually a really good first proving ground to start from.
Reggie Young:
No, it's interesting. When product-market fit hits with AI, it's fast, so you've got to be ready for it. I expect we'll have some uptake, and I think the tokenization framework is great. But you can also effectively have an agent that's built in a PCI-compliant way handle just standard PANs. You have to be very careful, right? There's a lot of thoughtful stuff to do there, but I'm kind of expecting long term to end up in a place where you have both models, where traditional cards are just handled by agents, because maybe not every acquirer is set up to accept agentic tokens. Details TBD as all this stuff shakes out.
I think that ties nicely to my next question, which is where do you think the value is for something like the agentic tokenization option, or just agentic payments generally? What are the use cases where there's actually going to be- a lot of the demos and the easy examples are like, I'm hungry, I'm going to order DoorDash and have ChatGPT find me the best Indian food that can deliver in the next 45 minutes. That's an interesting thought exercise, but I think there's a lot more under the surface. The under-the-surface question-mark stuff is what I'm excited about. So I'm curious where you think the value in agentic payments is, and which use cases it's most going to accrue to.
Simon Taylor:
I think it's vibe shopping. Actually, the decision paralysis of shopping is still a pain in the backside. I use o3 to make basic purchasing decisions quite often because it's going to think things through, and it's going to tell me why it made the choices. It'll give me a couple of candidates and narrow things down for me, and I can click buy or click right through to a couple of things. It's having that personal assistant who can just take that part of your brain and run away and think about it.
The example Visa gave, I actually thought was a good one, which is, you got a wedding coming up, you need a new outfit for it. So it looks up the location of it. It looks at what guests typically wear at such a thing. It takes your sizes that it knows about and orders you the stuff to arrive at your hotel when you get there. It's some destination-wedding-type thing. That's the sort of thing where it's just like, it's 45 minutes of work to do that stuff.
But I also think, just as a personal use case, just ordering the groceries every week. Just change it up a little bit. Rotate the snacks for the kids. Rotate the good vegetables. Prepare my menus. Just offload my admin. So I think on the consumer side, it's solve the decision paralysis and just go do the thing. It's where you've got to go decide and then buy, because the buy is typically the end of a complex process of thoughts, unless you've got your favorites that you go to. In which case, it's also just not having to click through the menu to get to my favorite. I just want my favorite again, and the rest just happens.
So I think that vibe shopping is genuinely the use case of like, there is still a lot of friction that we don't see. Paul Graham called it schlep blindness, this idea that you're so used to doing stupid things. I find this quite often that- I get frustrated at doing the same thing every day. And I think I'm just wired that way. These tiny micro frictions really bother me. The vast majority of humanity does not notice those micro frictions whatsoever. It's just like, that's how you achieve that task, and I will follow this route to achieve that task. And I'll try and do it slightly faster because I'm getting really efficient at it. And I'm not going to complain about it because I'm a normal human being, and I'm just getting on with stuff. Simon complains about this stuff because he's a cantankerous old man now. It's like, no, why can't we move the light switch to make it slightly more convenient for me? This is really irritating me at this point.
Reggie Young:
Once you've spent a lot of time thinking about fintech, like products and the product experience, it's really hard not to. I experienced this in our apartment in San Francisco. We have a little backyard. But to get to the backyard, you have to go through the stairs in the garage and walk out. I'm like, that's just so much unnecessary friction. We should just be able to go directly back out there. Once you see that stuff, you can't unsee it.
Simon Taylor:
But you see this with vibe coding now, right? Engineers still have to know what they want, what they want to achieve, and the best way to do it. There's still a massive gap between a great engineer that can use one of the modern coding tools, like Windsurf or Cursor, and me. I can make something happen in Cursor that might have been quite difficult, that wouldn't have compiled for me in the past, or I'd have been in Stack Overflow for days. That's a good compression, from my perspective. And I can use Lovable, and I can create believable little websites. It's quite nice for that. But if you're a professional, you still get this upgrade. I think that, for commerce, is hugely valuable, and for businesses as well. The idea of vibe as the interface is actually the breakthrough here.
Reggie Young:
I love it. Agentic payments are really about vibe purchases. I think that's right. Zooming out, agentic payments is a really good example of how fintech is about workflows, not payments. On the consumer side, it's like, I don't just want to buy this one thing. I want an agent to take care of this workflow of making the decisions. You hinted at that with the wedding planning stuff. I have a host of decisions, and I don't want to spend the two hours digging into what hotels and what flights and putting all that together when I can have an agent do that.
One of the really interesting aspects of Visa's demos, which folks should go watch the demo video they put out if folks haven't seen it, because they covered three or four interesting use cases. One of them is a business. I think it's a flower shop. It's like, I need to restock my flowers. It's like an agent that's set up to notice that your flowers are running low, and then just ping you with a button that's like, do you want me to order this for you? My thesis is, yeah, there's good stuff on the consumer side, but actually, I think a lot of the adoption will hit on the business side early.
Simon Taylor:
So much more, because there are so many things like, yes, inventory management, but there's a bunch of stuff you have to buy. Do you want me to not pay this this year because you're not really using it? So many of those things. And they all start to disappear.
Jason Bates, who was one of the founders of Starling and Monzo, and I used to work with at 11:FS, gave a couple of examples that I think are extremely relevant to this. And he's been talking about end-to-end journeys for a very long time as workflows. The classic example he uses is Uber. The first real end-to-end workflow that anybody put together was Uber, which is you don't search for a taxi number anymore. You're not looking for the number online. You're not calling the taxi. You're not explaining to dispatch where you are, where you're trying to get to, and then receiving a price. You're not then having to explain to the taxi driver how to find you.
Then when you get into the taxi, you go to your destination. And when you get out, you don't have to pay anybody. So that's what? Six steps they removed from the process, where you just press a button, a car shows up, you get in and you get out, and then the payment is taken care of behind the scenes.
People didn't realize that getting a taxi was a schlep. Nobody was thinking that this is a really painful experience. But if you see those schleps, you can remove them. How many of those are there in everyday shopping experience today? And what he talks about for customer service and finance was moving away from this idea of being a bad landlord, which is, here's a fee because you accidentally went over your overdraft. But we stacked the payments just to make sure you'd go over your overdraft, and now you got this fee, fee, fee. Would you like a fee with that? Here's some fees. I put a fee on my fee, and then turning into being the good waiter.
And what you just described was a good waiter, which is, oh, your wine's slightly too low there, let me just fill that for you. In fact, with the ideal one, you barely notice and your glass is never empty. That sort of invisible taking care of stuff in the background, that's where agents go. And the flower shop example is the classic good waiter, which is, magical stuff is just happening. And when I want a new thing to happen, all of that friction has gone, all of those tiny schleps have gone away.
Reggie Young:
I love it. What do you think the big problems that agentic payment is going to hit are? I kind of hinted at one of consumer trust. Consumers are going to have to trust that they can give an agent authorization to buy on their behalf, which I think that trust level is going to vary by use case and segment and all that. But I'm curious if there are other big problems you think that agentic payments are going to hit, just like product or psychological.
Simon Taylor:
Merchant trust. This thing that has shown up on my website: is it a good agent? Is it a bad agent?
Is it just here to do promo and refund abuse, or is it acting on behalf of a good customer? Is that a customer that I want to be building some brand presence with and some loyalty with? Do I want to be offering this customer a discount? Should I be retargeting that customer? How do I retarget them if all I can see is a token from a card network, or all I can see is clicking? This looks like computer-driven activity. It looks like a machine. It doesn't look like a human. It's moving too quickly, but I don't know which one it is or who it is. And I don't know if it's a good one or a bad one.
Then there's Jeff Weinstein at Stripe, who introduced this concept to me of, good agent, bad human. It's a perfectly good agent. It has been bought legitimately, but a human is using it for nefarious purposes. Or there is good human but compromised agent. That agent has been hacked in some way or is doing things on behalf of something else, or it's just a malware agent that the user has unknowingly installed and is double spending and then issuing a load of chargebacks under that person's name. How do you know that's not happening? What does consumer protection look like? The rabbit hole is endless on this stuff.
And I think this is what the card networks have always been great at, which is like, we'll just do a lot of lawyering, and then we'll issue a bunch of rules about what you're supposed to do. Everybody's going to hate it, especially merchants. There'll be lawsuits, and then eventually, everything will work out and we'll make loads of money.
Reggie Young:
A lot of small businesses like to gripe about interchange, but they don't realize the whole framework behind it and the benefits that come with it. And it's funny, you're framing- I actually think one of the big thorny problems is what I frame as the botting problem, which I think is another angle of the merchant trust problem: how does this merchant know that this is a legitimate AI agent with authorization and not just a bot that's trying to spam their site? I think this is the essence of Visa's tokenization framework. But still, even with tokens, how do you know it's not a compromised bot, or it's not a bad human controlling a good agent?
Simon Taylor:
Well, I mean, Apple Pay theft and Apple Pay-based scams are becoming increasingly common. A really common scam is to create a fake website of Temu or Shein or something like that, and then back-to-back it with an Apple Pay authorization. So you go buy what looks like, I don't know, a new throw for the house, or light switches so you can move them. And when you go buy those, you enter all of your Apple Pay information. It just asks you to go through a two-factor auth thing on your phone, like you're setting up Apple Pay, that sort of thing.
Behind the scenes, there's somebody setting up Apple Pay on a new device with all the PAN and expiry details that you've just given them through the website form, plus your two-factor authentication approval. And then they'll use it to make a lot of in-store purchases a couple of days later. That type of fraud is exploding inside of some of the banks right now. And so fraudsters are creative. What will they find once this stuff's out there, and it's all software? It's going to be a zoo.
Reggie Young:
Yeah. If only they had become product managers in fintech instead of doubling down on their fraud careers, the world would be a better place. No, I love it.
I'm going to switch gears because we absolutely have to chat about Sardine’s white paper. Sardine put out a great white paper a few weeks ago titled, The Agentic Oversight Framework - Procedures, Accountability, and Best Practices for Agentic AI and Regulated Financial Services, which is a big, bold title. But maybe let's just start basics. What's the kind of one- to two-sentence summary? I find it an awesome paper. Why should folks go check it out and read it?
Simon Taylor:
A lot of bankers particularly, but a lot of folks in compliance want to use AI agents but don't know how to explain it to their bank partner, don't know how to explain it to their regulator. This white paper is how you explain it to your bank partner and how you explain it to your regulator. That's why you should go read it.
Specifically, we found that you can use AI agents in production for use cases like step-up KYC. Standard KYC, last four of an SSN, quick reference checks and data checks in the background, fast-lane user, straight in, bam, they're using your product and they're onboarded. Slow-lane user, oh, something didn't match, let's pull a full documentary KYC, maybe step up to a manual review, 24 hours later, maybe the user gets in, but by which point your churn rate skyrockets, absolutely skyrockets.
90%, 95% of the time it's a false positive. It can be thousands of different edge cases, though. And this is one of those where a workflow is very hard to build for thousands of edge cases. An if-then-else statement is very hard to build for thousands of edge cases. It is very hard to build a rule to deal with thousands of edge cases, and machine learning models aren't good at edge cases. They can spot an edge case, but they're statistical models. They don't deal well with building a rational chain of logic.
Agentic AI is very, very good on building a rationale for why something should be a certain way, but it needs context. The problem with agentic AI is like, I'm an alien from another planet. I know a lot about the universe, but I don't know what you wish to achieve. So what we suggest in the agentic oversight framework is, have you written these things called standard operating procedures, SOPs? Because if you have, you probably didn't enjoy doing it, and the people who read them probably didn't enjoy reading them.
But to an AI agent, let me tell you, that might as well be the Bible because you have just given it everything it needs to do its job. It says, thank you very much. I know what this use case is. I would like to do one of those next, please. Can you give me a case to look at? And so in walks the next customer, alert is triggered, step-up KYC is triggered. And now what Sardine does is review that with an AI agent, and it will go through its thousands of edge cases. It will identify a rationale, and it'll present that to a human agent and say, I think this is a false positive, here's specifically why. Human agent goes, yes, I agree. Bang, user is onboarded.
And we've already found we can have 100% accuracy on false positives with AI agents. And that's compressing 20 hours of work a week at one fintech company down to two minutes. It's an unbelievable time save. It's that piece of taking a standard operating procedure and then making sure that everything the AI agent does is logged inside of a secure platform. You're not giving it API access to different systems. Don't just give this thing access to your compliance core or to any other core. Don't do that, please. Make sure it's sitting inside of a secure system. Make sure you can see everything it sees. Make sure you can audit all of its decisions, just so you can do the data science, just so you can figure out if this thing's performing and has done what it's supposed to have done and not gone off rogue and done anything else. And if you do all of that, that's essentially made you follow this process that we lay out called alert decision pathways. That sort of model is laid out in a lot more detail in the white paper.
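The pattern Simon lays out, where the agent applies the SOP and proposes a disposition, a human makes the call, and every step is logged for audit, can be sketched roughly like this. To be clear, this is my own illustration of the pattern: the function names, the stub rule standing in for the model, and the log shape are all hypothetical, not Sardine's actual implementation.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice: an append-only, secured store, not a Python list


def agent_review(alert: dict) -> dict:
    """AI agent applies the SOP and proposes a disposition with a rationale.
    The 'model' here is a stub rule; in reality this would be an LLM grounded
    in the written standard operating procedure."""
    if alert["name_match"] and alert["dob_match"]:
        return {"proposal": "false_positive",
                "rationale": "Name and DOB match the verified record per the SOP."}
    return {"proposal": "escalate",
            "rationale": "Identity fields do not fully match; manual review required."}


def decide(alert: dict, human_approves) -> str:
    """Human-in-the-loop gate: the agent only recommends; a human makes the call.
    Every input, proposal, and final decision is logged for later audit."""
    proposal = agent_review(alert)
    decision = proposal["proposal"] if human_approves(proposal) else "escalate"
    AUDIT_LOG.append(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "alert": alert,
        "proposal": proposal,
        "decision": decision,
    }))
    return decision


# Clean match, and the human reviewer agrees with the agent's rationale:
outcome = decide({"name_match": True, "dob_match": True},
                 human_approves=lambda proposal: True)
```

The point of the structure is exactly what the white paper stresses: the agent never acts unilaterally, and the audit log captures both what it saw and why it proposed what it proposed, so you can do the data science on its performance later.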
Reggie Young:
Yeah. Props to Sardine, because I think you all do a really good job of driving home the value of this kind of stuff with data and statistics based on actual case studies that you've run in partnership with some of your customers. Just like, whatever, 49% faster time to revenue for customers. You talked about going from, whatever, 14 hours to a fraction of that for resolving false positives. I'm just looking at some of the stats. You're correctly resolving at least 97% of false positives in KYC-type circumstances. So awesome data.
I love how you started talking about how it's resonating with bankers, because I read this and I was like, if I were in a bank, I would read this and be like, oh, this actually isn't that crazy, and we should start looking at this stuff because the paper does a really good job laying out the sort of safety guardrails that a bank will want. And so if you go to your bank partner and you're like, hey, I want to use AI agents for whatever, they're going to say, absolutely not. But if you go to them and say, hey, we have some automated agents that we have these safety parameters around and there's a human in the loop and all that kind of stuff, it becomes a lot more exciting, I think, to the banks. So folks should go definitely check it out.
Simon Taylor:
Exactly, because they're going to wonder about how to explain it to their regulator. So you've gone to your bank partner, they have to explain it to the regulator. There's two worries in the back of every banker's mind, which is, will the regulator ever allow it, and what if this thing hallucinates? We answer both of those dead on in this thing.
Again, the way to manage hallucination risk is the same way you manage model drift. You constantly, vigilantly audit every single thing it's done. You constantly measure the outputs and also all of the inputs it used to get to its decision. And you maintain humans in the loop until you've gained enough confidence in the data science to be able to extrapolate that and make it go faster.
Reggie Young:
Yeah. You're just preempting me. My next question was going to be hallucination risk, and you perfectly addressed it. I think it's a big fear to get over. There's also this other angle. I was listening recently to-
Simon Taylor:
Do you know what's crazy about that hallucination risk? It kind of reminds me of the self-driving car thing. What if a self-driving car kills somebody? And now everybody's getting a Waymo. It feels a little bit different. It's the big fear everybody has, and then they actually end up killing way fewer people than a human would.
Tolerance for human failure is a lot lower than it is for machine failure. It's very, very odd psychology. And we've found that already these AI agents are far more consistent than human staff. Granted, they are given a much more constrained task set, but quantifiably, measurably, task for task, they are way more consistent because a human that's had a day off sick will come in one day and see a false positive and come in the next day and not see a false positive. The same human looking at the same data. I mean, it's utterly wild.
We've actually done this thought experiment with management teams where we give them all, a compliance leadership, the same case to work, and we get five different answers. It's absolutely wild. But the AI agent follows the standard operating procedures because they're right there and it has to, and the procedures are written down. So the question for humans is what's in the standard operating procedure and why.
Although here's another fun fact. The AI agents work really well in the second line as well as the first line. The first line of doing the job is one thing, but overseeing and collecting the data and adjusting policy and thinking about it, that's the hard bit of compliance sometimes. You can get agents for that, too. A lot of folks tend to outsource just the working of alert queues and that sort of thing. But like, what's our policy? What are our KPIs? What's all that looking like? AI agents are very, very good at that as well, collecting that data and making a rational overview of it. You could see this getting quite interesting.
Reggie Young:
Yeah. I think a core piece that the hallucination fears miss is that humans make a lot of errors, too. I think about the studies showing judges make harsher sentencing rulings before lunch versus after. An AI agent isn't going to do that as much. The Waymo comparison is a great one, too. There was a lot of fear, a lot of hesitation around self-driving cars. If folks don't follow Waymo on LinkedIn, they should, because their marketing, the things they put out, is fascinating. They partner with insurance companies on, here are the lower rates of accidents. It's fascinating data.
I was listening recently to a podcast with Tyler Cowen. I hadn't thought about this until he said it, but he points out that hallucination has significantly gone down in the past six months. It suddenly hit me: oh yeah, in the outputs I see from the AI platforms, hallucination isn't really a thing I encounter. You still have to be vigilant. You still should double-check if it's a super important thing. You gauge how much you check based on, am I just drafting a quick internal Slack, or am I drafting something that'll be used for a material process? It's this fascinating phenomenon. Once he said that, I realized that hallucination is low.
In financial services, though, you're making KYC decisions, so you still really, really need to take that risk seriously. But again, that white paper Sardine put out is a really, really good framework for how to address all that risk.
Simon Taylor:
Great point. I think hallucination has gone down, but I would apply the same rule: you always have to double-check everything coming out of an LLM.
Reggie Young:
Trust but verify.
Simon Taylor:
If you're reading this stuff, verify it for yourself, for the love of God. I'm one guy with a keyboard. There's only so much research a guy can do. Please don't take this as gospel. And I think that's just being a smart human. You need to use it as a shortcut, and you need to be able to trust most of it most of the time. But if that little spidey sense inside you goes, hmm, then act on it, just like you would with a human, too.
Reggie Young:
Yep. My partner recently described me as a default skeptic. And I've really started to lean into that framing. You’ve got to think for yourself on everything that's fed to you. It's kind of my MO.
Last topic I'd love to jam on is fraud, because you and I were chatting before we started recording and you made the potent statement that fraud is getting worse. So let's jam on that topic. Why are we seeing fraud getting worse? I feel like you hinted at it a little bit with some of the Apple fraud example from earlier, but curious to double-click on that.
Simon Taylor:
Fraud-as-a-Service.
Reggie Young:
It's like FraaS? Is that what it is?
Simon Taylor:
It's so incredibly cheap and so incredibly simple to build convincing scam emails now. The tools that allow you to do it are now more or less single-click and coin-operated. You put the money in, it takes synthetic identities off the internet, and it goes and runs fraud with stolen card details. And then you just sit back and collect the money. It's unbelievably low cost and easy to do. So the ROI calculation on fraud has gotten a lot better since the advent of generative AI. Fraudsters are the first to adopt these tools, and they're doing an incredible job of it.
And I think, frankly, we're fighting against AI-powered fraudsters with tools from the early 2000s. The bigger the institution, the more likely that is: the fraud engines and the fraud rule engines and all that sort of stuff just cannot keep pace with it, and they weren't built for it, whereas the fraudsters don't have to go through governance to adopt a new tool. They don't have to get any sign-off from any committees.
Reggie Young:
They don't need to do a third-party risk review of their tools they're using. Yeah, that's a funny point.
Simon Taylor:
So they're always going to be a step ahead. And then we are just extremely vulnerable in that most of our data has been breached. JPMorgan actually put out a piece bemoaning third-party SaaS platforms, OAuth, and some of the SaaS authentication practices. One weak SaaS tool inside of a corporation can take the whole thing down and lead to a massive-scale data breach. And so the adoption of SaaS has led to data breaches, which has led to a compounding effect of compromised data being out on the internet, which increases the amount of account takeovers that people can do. It increases the believability of scams, and it increases the number of synthetic identities that can be created out of real fragments of identities. So you put all of that together and you've got this perfect storm: a defense side of the conversation that just is not equipped to fight back, and an offense that has all of the data they could ever wish for, plus the tools, and no compunction stopping them from adopting them.
I think on that basis, you've no choice but to start adopting AI internally. Every financial institution at some level is asking, does this mean I have to fire compliance officers? No. You need more of them, probably. But you also need like 10x and 100x that capacity, because it's just utterly crazy. If you follow the FTC's numbers, fraud has been growing at something like 35% to 50% year-over-year for the last five years. That's outpacing any stock market in history. I think maybe only Bitcoin beats it. There are very few asset classes outperforming fraud right now. That's a problem. Now the industry is making noise about it in 2025. But in '26, '27, '28, that chart starts to look really ugly, really, really quickly. I think people are going to have to start making some big decisions pretty quickly.
Reggie Young:
Yeah. That's crazy to think about, the compounding of fraud growth. Fun times. Awesome. Well, last wrap-up question for me. What's something that you've been thinking about a lot lately that you think people in fintech aren't talking about enough?
Simon Taylor:
Something I've been thinking about in fintech, I don't know. I talk about a lot of stuff. I do think it's being talked about, but it's that sort of threat underneath the surface, which is, in the US anyway: what if narrow banking comes from stablecoins? And what happens if, whilst there's less enforcement action coming at the federal level, 50 states enforcing things at different levels is actually going to be even more challenging to deal with? The banks are worried about, oh, is there going to be narrow banking? Newsflash, we've already got narrow banking. It's been around for a while.
And then I think the California state regulator has come out with some sort of fine against one of the banks recently, in the last couple of days. I expect to see a lot more of that. It's not like the cat's away. It's just that there are more cats now. Crazy few years.
Reggie Young:
The California one's really interesting. They entered a consent order this week against a bank sponsoring a fintech from California, without the federal regulator also entering a similar consent order, which is kind of unusual. I think it's a sign of the times we're currently in.
Simon Taylor:
Yeah. So there’s that. And then another one would be that Europe might be on the cusp of a comeback. Back in 2015, 2016, 2017, the neobanks and open banking, it was a European story, and then it wasn't. There's an energy over this way at the moment. Look out, because there are some very, very interesting little companies cropping up.
I think the next few years in Europe could be interesting. The US is like, oh, what are we going to do? Growth and economy, that's not us anymore. Europe's like, hmm, maybe we can do something here. So it could be a fun few years.
Reggie Young:
That is interesting. I do feel like I'm seeing early-stage fintech fundraising in Europe really heat up right now. I hadn't thought about that, but good point. Awesome.
Simon Taylor:
The joy of sitting in London is that sometimes I notice these things a little early.
Reggie Young:
Totally. Awesome. Well, Simon, thanks for coming on. If folks don't subscribe to Fintech Brainfood, they should go sign up, though I'd be amazed if any listeners don't already. You can check out more about Sardine at sardine.ai. Simon, thanks so much for coming on and humoring me with some agentic conversations.
Simon Taylor:
I'm here for the agentic conversations. I hope we have more of them at Fintech NerdCon in Miami on the 19th and 20th of November. Tickets are available now.
Reggie Young:
Yeah. Go check it out. Great, great conference that I'm excited for.