Have you noticed how every week brings a new headline about AI-driven fraud, yet it still feels hard to tell what is real risk and what is noise?
In this Tech Talks Daily episode, I'm joined by Tommy Nicholas, CEO of Alloy, for a candid conversation that cuts through the fear-driven commentary and gets into what fraud teams are actually dealing with right now.
We start with a simple but important distinction that is often blurred. Tommy separates classic "fraud," where institutions take the hit, from "scams," where individuals are manipulated into handing over money or access. That framing changes how you think about solutions, accountability, and where AI is making things worse.
Tommy also shares why he believes fraud losses are often massively underreported. It is not because people are trying to hide the truth; it is because organizations rarely have a single, clean view of losses across every product line and channel.

Add messy labeling and split ownership across teams, and reporting becomes a best-effort estimate rather than an objective number. That reality matters if you're building board-level narratives, budgets, or risk models on top of survey data.
From there, we talk about what organizations are getting right. Tommy argues there is no magical "undetectable" attack that forces teams to give up, but there is a very real breakdown happening in old fallbacks, especially human review of images and video.
The bigger shift he sees is banks and fintechs finally pushing for consistent tooling across every channel (web, mobile, branch, call center, support tickets), because fraud does not respect internal org charts.
We then get into why Alloy's AI Assistant is an interesting signal for where agentic AI is heading in regulated work. Tommy explains that agents are only useful when they have rigorous context, strong sources of truth, and clear workflows.
Otherwise, they guess, and "looks good" is not the same as "safe to run in production." He also lays out where agents can genuinely outperform humans, like scaling investigations during sudden surges, while keeping processes auditable and repeatable.
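The pattern Tommy describes, an agent that consults a defined source of truth and stops rather than guesses, while leaving an auditable trail, can be sketched roughly like this. All names here are hypothetical, for illustration only; this is not Alloy's actual API.

```python
# Illustrative sketch only: an agent step that acts solely on ground truth
# and records an auditable trail either way. "Looks good" is not the goal;
# "safe and traceable" is.

from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str                       # e.g. "approve", "escalate_to_human"
    reason: str
    audit_trail: list = field(default_factory=list)

def review_sanctions_hit(hit: dict, source_of_truth: dict) -> Decision:
    """Resolve a screening hit against ground truth; never guess."""
    trail = [f"received hit for entity {hit['entity_id']}"]
    record = source_of_truth.get(hit["entity_id"])
    if record is None:
        # No ground truth available: refuse to decide and hand off,
        # rather than guessing at an answer that merely "looks good".
        trail.append("no record in source of truth; refusing to decide")
        return Decision("escalate_to_human", "missing ground truth", trail)
    trail.append(f"matched record with risk score {record['risk_score']}")
    if record["risk_score"] >= 80:
        return Decision("escalate_to_human", "high-risk match", trail)
    return Decision("approve", "low-risk match per SOP threshold", trail)

truth = {"e-1": {"risk_score": 12}}
print(review_sanctions_hit({"entity_id": "e-1"}, truth).action)  # approve
print(review_sanctions_hit({"entity_id": "e-9"}, truth).action)  # escalate_to_human
```

The point of the sketch is the shape, not the details: every branch either reaches a source of truth or escalates, and every decision carries the trail that makes it auditable and repeatable.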
We close by looking ahead at agentic commerce, and why Tommy thinks the breakthrough will arrive through weird, emergent behavior rather than a neat protocol rollout.
When you listen back, do you think the next big leap in fraud prevention will come from better models, better data, or better operational discipline, and what would you bet on if your own customers were the ones on the line?
[00:00:04] - [Speaker 0]
Welcome back to the Tech Talks Daily podcast. Today, I'm joined by Tommy Nicholas. He's the CEO of Alloy. We're gonna talk about Alloy's new AI assistant, which is an agentic AI built for fraud prevention and compliance. Something that I find incredibly interesting, because most agents that we're seeing created at the moment are all about productivity, or go book this, talk to that person.
[00:00:31] - [Speaker 0]
But fraud prevention and compliance feels like a refreshing change here. So together, we're gonna unpack why autonomous AI in risk management is a totally different challenge than the customer service and productivity demos that are flooding the market right now. And the reasons why: because that work demands context, auditability, and outcomes that you can defend. And I also wanna have a bit of a reality check around this technology, because we keep hearing about AI powered fraud, but what are teams actually seeing on the ground? Is it as scary as those reports on LinkedIn make it sound?
[00:01:06] - [Speaker 0]
And we'll also dig into some of the insights from Alloy's State of Fraud report, including the headline that one in five organizations lost more than $5,000,000 in direct fraud losses, and why that figure is probably underreported as well. And what it says when 63% of their respondents doubt public fraud loss numbers, and 86% expect underreporting to get even worse. Some big stats to get through, some talk of the problems that are being encountered by organizations, fintechs, and financial institutions out there, but also how tech might be able to solve some of these problems. But enough from me.
[00:01:47] - [Speaker 0]
Let me introduce you to Tommy right now. So thank you for joining me on the podcast today. Can you tell everyone listening a little about who you are and what you do?
[00:01:59] - [Speaker 1]
So I'm Tommy Nicholas. I'm the CEO and cofounder of a company called Alloy, alloy.com. We do decision making and orchestration for fraud prevention, money laundering detection, and credit decisioning in financial services. So, basically, anytime you sign up for a bank account or a remittance account or a trading app, you've gotta be verified to be a real person who's actually doing business in a way that they're legally allowed to, etcetera. And all of those decisions have to be orchestrated in the back end, as invisibly as possible, by someone.
[00:02:34] - [Speaker 1]
And increasingly, over the last eleven years since we started Alloy, that someone has been Alloy.
[00:02:40] - [Speaker 0]
Fantastic. And there's so many different reasons I was excited to get you on today, because I think we're all hearing more and more about AI powered fraud. But to bring the topic to life, I'm curious: what are your clients actually seeing on the ground? Is it as scary as those reports make it seem on our newsfeeds?
[00:02:58] - [Speaker 1]
Well, one of the things about the fraud industry in general is that not just the media, but the influencers, the vendors, everybody involved is trained, and the whole apparatus is built, to say the scariest possible things at any given time. So anytime somebody says, oh, this fraud vector or this group of scammers or whatever the case may be is growing or becoming difficult to detect, it's really hard to know what to make of that, because there is always something that influencers, vendors, media, etcetera are saying is really, really scary about fraud. In this case, it is definitely true that generative AI has been a huge tailwind; it has benefited people who commit fraud. And, actually, there's two types of fraud.
[00:04:01] - [Speaker 1]
When I'm talking about fraud, I'm actually talking about two different things that are related, but not the same. There's a type of fraud where, in general, a person or organized group is trying to steal money from a company or a financial institution, a bank, or whoever. And then there's a type of fraud that the media tends to care more about, and I think for good reason, which is where they're trying to steal money from a person. So you can think of those text messages you get that are trying to scare you into sending a one time PIN to somebody else so that they can log in to your bank account and steal your money. Just to distinguish between these two things, I call those scams.
[00:04:41] - [Speaker 1]
Yeah. And when I'm saying fraud, I tend to mean fraud as distinct from scams: stealing money from a bank or an institution. One of the ways that most commonly happens in the US is somebody finds a way to get access to, or to open, a bank account or a trading account or whatever the case may be, and they deposit money into that product, spend it, invest it, whatever. And then either they sent the money from an account they didn't control, or they do control that account, but they lie and say, I never actually made that deposit.
[00:05:16] - [Speaker 1]
And now they've bought stuff, but whether the money was from the person it was originally stolen from or not, the money goes back to where it originally came from in the form of a chargeback. And in the US in particular, there's a ton of very friendly laws that allow people to do this sort of thing. And that's a type of fraud where actually no individual lost any money. It might have been scary for the person whose money was originally stolen, but they will be made whole.
[00:05:46] - [Speaker 1]
So no person ends up having had anything other than a scary moment, but one or both of the institutions involved take a loss. Both that type of fraud and scams have been hugely benefited by AI in the wrong direction, meaning AI has made both of those things a lot easier to do. Scams in particular, though, tricking people into sending their money to scammers or giving away access to their accounts: that is the thing that has really been transformed by AI. Because the number one thing you're trying to do when you're scamming somebody is convince them that you're a person they should trust, that they should listen to, and you're trying to do that at scale.
[00:06:37] - [Speaker 1]
You're trying to do that to as many people as you possibly can if you're a scammer. And generative AI has made that a lot easier and more convincing, especially for people who are less able to use their own intuition to figure out that they might be interacting with an AI and not a person. So that's where it's been a huge benefit to the scammers. On the traditional fraud side, the sort of getting access to things you shouldn't have access to, or stealing money from banks and fintechs, there's been some benefits that AI has brought to those sorts of attacks. But, honestly, fraudsters have had really sophisticated tools to attack the cyber defenses and the fraud defenses of FIs and others in the industry for a long time.
[00:07:35] - [Speaker 1]
There's been a lot of tools for them: deepfakes predating LLMs, bot farms, device farms throwing attacks at various institutions. That's kind of been a thing for a while. And in those types of attacks, AI has been an accelerant, but not a transformer. It's really the tricking of individual people where AI has transformed what's possible and made it a lot scarier.
[00:08:07] - [Speaker 0]
And before you sat down with me today, I was looking at your State of Fraud report, which showed some pretty scary figures. One that particularly stood out was that one in five organizations or institutions lost more than $5,000,000 in direct fraud losses. Huge number. Did that number surprise you when you first got the results of that research?
[00:08:28] - [Speaker 1]
One of the things about our fraud report is we actually tried not to make it scary. We try to make it accurate. Yeah. We have always taken the position in the market that there are lots of people trying to scare you about fraud; we're gonna try to tell you what we think is true.
[00:08:43] - [Speaker 1]
That's really our brand and our thing, for various reasons, both because I think it's the right thing, but also because it's kind of the right position for us as an orchestrator. But this latest fraud report has a lot of little nuggets that show you that fraud and scams are actually getting scarier, a growing problem in a way that's not just headlines, and that was one of them. I have bad news, actually, which is that we believe, based on our own dissection of the answers in the report, that the figures our respondents were using to say how much money they lost to fraud, particularly in the US, are almost certainly massively underreported. And here's the reason. When you do a survey of any kind, what's the number one thing that determines what answers you get?
[00:09:44] - [Speaker 1]
It's how the respondents interpret the question they were given. Yeah. And the hardest thing about the fraud report for us is that what we're always asking is not for your department or in your job; we're saying, for your entire institution. That's what we're hoping the answer will be.
[00:10:05] - [Speaker 1]
But we know from some sampling, from asking people who weren't in the survey, or who we don't know if they were in the survey, how they would have interpreted some of these questions. A lot of people understand that they're trying to report for the whole institution, but most people don't even know the answer for their entire organization. So we think that most people we send these fraud surveys to give the best information they have, which tends to be whatever the scope of their own role is. It could be you own deposit fraud: checking accounts, DDA accounts, current accounts. You might own that part of the business for your financial institution or fintech.
[00:10:51] - [Speaker 1]
And so that's what you report, because that's what you know. But there are all of these credit losses over here, in some other P&L and some other department, that have fraud within them, and that might not be properly segmented from pure credit losses, nonfraud credit losses. Nobody may even know the difference between which credit losses were fraud and which were just inability to pay, or non-intent to pay where circumstances changed, or whatever. So even when we get these losses reported to us, it's based on the perception of the respondent at the time and what they have access to. And I've never heard of a case where somebody told me, well, here's how we answered that question.
[00:11:38] - [Speaker 1]
And the way they answered it was to deliver a larger number than the actual number. There's always some caveat that leads it to be a smaller number. So fraud losses, for both individuals and institutions, are definitively underreported. There's no question that they're underreported. I have been asking what we could do to figure out how to properly report them, and I don't have an answer for you.
[00:12:01] - [Speaker 1]
I wish I knew the answer. I would pursue it if I knew the answer, but there is no chance that fraud losses at financial institutions and fintechs are properly reported. It just isn't the case. I do think that in the UK and in Europe, there are certain types of fraud losses that may be properly reported because they flow through fairly rigorous pipes. Like, for example, APP fraud; it's a pretty distinct workflow.
[00:12:38] - [Speaker 1]
How APP fraud would get reported and tracked is fairly reportable and understandable. The same is true in both the US and the UK of chargebacks on debit and credit cards. That's a line item you can pull out of a database and say, okay, there were this many chargebacks, and we labeled this many of them as fraud.
[00:13:00] - [Speaker 1]
But even then, so many chargebacks end up getting labeled as one type of fraud when, really, they're another type of fraud. There's so much that we just don't know, and it comes down to the respondent, what data they have access to, labeling, things like that. It is a really hard problem, but it's certainly true that fraud is underreported.
[00:13:25] - [Speaker 0]
And you guys work with more than 800 companies today, and I'm curious, on the flip side of this, what are they getting right about fraud prevention this year? We hear a lot of the doom and gloom, but there's a lot of people getting this right too, and a lot of your customers are working with you. Right?
[00:13:43] - [Speaker 1]
Yeah. Well, of course, I would say that they are getting one thing right, which is working with Alloy, but that's the most obvious selfish CEO statement you could ever hear. And I actually do think that there are a lot of ways. I can come bearing some good news, a lot of good news, in fact. I made mention of something I think is really important earlier, which is that in the world of fraud, independent of individual people getting scammed, the sophisticated coordinated attacks on institutions, fintechs, etcetera themselves, there have been and there continue to be very strong defenses against those attacks, and AI didn't change any of that.
[00:14:34] - [Speaker 1]
There is no type of attack that I have seen our clients bring to us and say, AI did x, y, or z, where there was not a solution in the market. Maybe you need to upgrade the version, maybe you need to add something you didn't have before, but there is no attack that I've ever seen where the answer was, this was undetectable, this could not be detected. That is not a thing that has happened. And it could happen. I've heard, for example, a theory that adversarial networks will get sufficiently sophisticated that they will be able to generate images and videos that are undetectable even by the most sophisticated fraud prevention algorithms.
[00:15:31] - [Speaker 1]
But that hasn't happened yet. And part of the reason it hasn't happened yet is that there were already methods for generating extremely believable images and videos before LLMs. There were deepfakes before we had the techniques we're using now. You could just go create falsified IDs by hand. There's human labor.
[00:15:59] - [Speaker 1]
There were already a bunch of different ways to create fake or falsified images. So it was already the case that the fraud prevention industry had to account for fake images, fake videos, people impersonating people, farms of individuals who are creating fraudulent accounts and are actually real people. So you can't just go and say, are they a bot? Because they're not a bot. They're a person.
[00:16:27] - [Speaker 1]
They're just not the person they claim to be, etcetera. These things already existed before what we're now referring to as AI: generative AI, LLM-based things, GANs, etcetera. And nothing changed with that. But the problem was there were all these fallbacks. Like, for example, this is something that is completely broken.
[00:16:50] - [Speaker 1]
There were all these fallbacks where it was like, yeah, I use these fraud prevention measures over here, and then in some instances they sorta don't work, and I have a human go and review the images and try to figure out if they are fake or real. Well, that doesn't work anymore. That truly will not work.
[00:17:08] - [Speaker 1]
Especially with the best fake impersonations and images and stuff like that, humans aren't gonna be able to say, well, my automated system over here wasn't sure, but I'm a lot more sure because I'm looking at it with my eyes. There are instances where human investigator judgment is incredibly helpful in fraud prevention, but a lot of this more rote work, where the machine fails and so we kinda toss it into a queue and humans look at it and make a judgment, especially on images and videos in particular, that's not gonna work anymore. And so what really happened was not that there weren't solutions and ways of solving some of these problems. It was that almost everyone had gotten away with a suboptimal solution, suboptimal compared to what they even could have been doing at the time without any additional innovation: just more rigorous or thorough implementation, or buying the extra tool, or implementing the tools they already had everywhere instead of just in some places.
[00:18:15] - [Speaker 1]
And there's been a big push. I'd say the biggest trend that we see is a big push for things that fraud teams and security teams have been advocating for a long time, such as thorough and consistent implementation of tooling. We can't have one set of tools in web digital, a different set of tools in native mobile digital, a different set of tools or no tools in branch, a different set of tools or no tools in call center, a different set of tools or no tools in ticketed support requests. Right? These are all places where somebody could be committing fraud, and they kind of all need to get stitched together, be consistent, be rigorously implemented.
[00:18:57] - [Speaker 1]
And I think if you go to the average bank or FI, they have access to all of the things that would be required to stop the fraud that they're seeing. But they've had the constraint of the product road map, and budgets, and other priorities, and not believing their fraud teams, or whatever the case may be. And so they have a patchwork of, well, we kinda knew we should probably also have this thing over here, but we don't, and so we fall back to these processes that aren't really working anymore. There's a lot of that.
[00:19:30] - [Speaker 1]
And so, to summarize my answer to your question, two pieces of good news. One is there's no attack that can't be detected and prevented without throwing the baby out with the bathwater, where all the good customers also get blocked. There's nothing in that bucket, except for people who have been so thoroughly convinced to do something not in their best interest that they're just gonna do it anyway. For people who have been scammed, it's very difficult to stop them from completing the scam. If you've been convinced that you need to send money overseas to get your child out of prison, or whatever the scam is, it's very difficult to break the spell of that and stop them from doing it.
[00:20:11] - [Speaker 1]
But separate from that stuff, there's no attack that can't be detected and prevented efficiently, effectively, etcetera. And the second piece: things that fraud teams, security teams, even product teams that care about this stuff have been saying for a long time, that we've gotta have consistent, thorough, up to date, best in class tooling across all of our channels, and we can't just say, oh, the channel doesn't matter, we've got humans in that channel that'll stop the bad things from happening. There's a lot of unwinding of that thinking, and I'm seeing budgets and road maps getting opened up, and have been for really the last couple of years, to say, look.
[00:20:48] - [Speaker 1]
Okay. We get it. We've gotta get this stuff everywhere. We've gotta have enterprise wide, consistent solutions to fraud prevention. And then the bad news is, I think everyone is seeking what's the future of stopping my customers from getting scammed themselves, even though we're not necessarily the ones, as the institution, responsible for that, but we wanna be responsible for it.
[00:21:14] - [Speaker 1]
Because another thing you'll see in the fraud report is customers look to their banks, brokerage firms, and fintechs to stop them from doing bad things. They actually do see their banks and financial institutions specifically as playing that role. They don't really see anyone else as playing that role, but they do see their financial partners as playing a role in stopping them from doing stupid and bad things. Scams are a difficult one there. Right?
[00:21:45] - [Speaker 1]
How do you stop somebody from doing what they intend to do with their money because they've been scammed? But I do think it's clear that that's going to be the relationship that people have with their banks, financial institutions, etcetera, and and folks are looking for solutions to that problem for sure.
[00:22:01] - [Speaker 0]
And over the last twelve months, I've been fortunate to hit the tech show floor at conferences in the UK, Europe, the US, and even the Middle East. And one of the big talking points is agentic AI, custom built agents, etcetera, and it's a trend that's continuing this year. It got me thinking: we're talking about productivity gains and things, but couldn't it be used to solve bigger problems? And that's one of the ways that I found you, because you've just launched the Alloy AI Assistant, which essentially is an agentic AI for fraud prevention and compliance. I've not heard of anybody taking this angle before, so it's incredibly refreshing to hear.
[00:22:38] - [Speaker 0]
But what makes building autonomous AI agents for risk management different from those other use cases that we might be seeing on the market?
[00:22:46] - [Speaker 1]
So there are actually three things that are really important to know about what goes on under the covers in agentic systems. The first is that agentic systems are most effective when they have really rigorously defined context and background for the work that they're trying to do. So let me give you an example. Say you were to take the chat interface behind the most advanced, latest frontier model, whether it be talking to Claude or talking to ChatGPT, set it to the pro deep thinking version of the latest and greatest model, and give it thirty minutes and all the time it needs and access to everything it needs. And you were to say, hey.
[00:23:37] - [Speaker 1]
Can you go perform a review on this sanctions hit that I just got from my sanctions system, which says I might be doing business with a Russian oligarch, or something like that? And that was all you gave to it, and it was the best model. It could make autonomous decisions. It could go talk to the Internet. It could ask clarifying questions.
[00:24:01] - [Speaker 1]
It can do anything. You're still not gonna get a particularly usable output out of that prompt, because the system in that context wouldn't know what your sanctions screening standard operating procedure is, what you do and don't consider a correctly assessed potential hit for a sanctions screen. It will not know what else has happened with this customer in their application or their usage of your products. It just isn't gonna know enough.
[00:24:37] - [Speaker 1]
It's gonna be guessing. It's gonna have to guess and guess and guess to try to give you an answer. And what it's gonna be trying to do is give you an answer that you'll say, hey, that looks good. That's really the output without further background prompting, etcetera.
[00:24:54] - [Speaker 1]
An LLM is trying to get you to an answer that you will define as, that looks good. That's really how they work, and that's what they're trying to do. And the only way to stop an LLM, or an agentic system in general, from guessing is to give it ground truth and a lot of rigorously defined, clear background, so that when it needs to go answer a question that it perceives you're asking, it'll go first to the source of truth, and it will have to go no further. It will not have to guess because it couldn't get to the source of truth.
[00:25:29] - [Speaker 1]
And the same applies if you were trying to say, hey, I've got a business I'm trying to onboard, and I was able to complete nine of 10 tasks to onboard them, but I have this one other task that I need to get completed. Well, what is the definition of that task? How has it been completed in the past? What information do I have up to that point about the business, and how authoritative should I treat that information?
[00:25:54] - [Speaker 1]
That sort of background and context is genuinely critical to being able to put an agent into a place where it can actually do autonomous work that's effective and high value. And so that's one thing about Alloy: we already have a system that runs automated processes, hits a point where we need to hand it over to a human agent, and gives them rigorously defined, complete, auditable, trackable, traceable background for: why did I get here? Why am I being pulled into duty here? What's everything that happened before me? And, also, what's everything that's gonna happen after me?
[00:26:35] - [Speaker 1]
So because we already had such a system, we can put in either a human plus an assistant or an autonomous agent, both of which are things that we can deploy, that has that same context, maybe even in a deeper way. It knows where it came from. It can intuit and reason around why it exists in the first place. What is this workflow that I'm in? Why did I get called into duty?
[00:27:00] - [Speaker 1]
What's everything that happened before? What's everything that's gonna happen after? Because, oh, by the way, if I give an answer that's not in the format, and with the content and context, that allows this process to continue, then it's irrelevant that I gave a really good answer. The output is: can I give an answer that allows me to continue the process I'm trying to automate, not an answer that looks good to a human in a chat interface? These are two very different things.
[00:27:26] - [Speaker 1]
So this sort of context being key has been one thing that we've observed. And, actually, that was kind of two of the things I wanted to hit on. The third is I think the agentic systems that are gonna get the most adoption are gonna have two qualities. One is the quality that I just described, which is they have access to sources of truth. They don't have to guess.
[00:27:52] - [Speaker 1]
You can tell that they can follow a standard operating procedure without going into guessing land in a way that would be different than a human would. Humans have to guess too. There's nothing wrong with it, and it's a good thing, that LLMs will make educated guesses. But there are all sorts of things that we're trying to get agents to not guess about, because they're the same things that you and I wouldn't guess about. We would look up and try to find the source of truth, and if we couldn't find the source of truth, we just wouldn't continue.
[00:28:19] - [Speaker 1]
So that's one type of quality that agentic systems at work will have to have. The other quality they'll have to have to be effective is they're gonna have to deliver a better outcome in some way: a more complete, lower risk, or higher accuracy outcome than what humans can do. Because we actually are pretty good at building systems that give humans work they can complete fairly quickly and efficiently. And in fact, it's notable to me how often I'm talking to financial crime leaders, for example, or fraud prevention leaders, and we're saying, forget with agents or without agents, would you like to automate something that you're doing?
[00:29:00] - [Speaker 1]
And they say, yeah. It'd be nice to automate it, but we're actually pretty efficient at this already, believe it or not. Like, we've actually already gotten it to the point where three people can kinda do all of this, and they're really good, and we know how to interface with them. So it's not even always the case. Sometimes it's the case that manual work's getting out of control and can't even get completed, etcetera.
[00:29:21] - [Speaker 1]
But a lot of times it's the case that there are alternatives that are actually pretty efficient. We've gotta deliver something else, other than just automating work that's currently being thrown to humans, for this to be valuable. And fraud prevention and financial crime, especially at onboarding, when you're trying to get customers onboarded, and at ongoing maintenance of customer relationships, where things change and you've gotta triage them and figure out if there's new information you've gotta act on, both of those have huge opportunities where agentic systems can actually do something in a way that the human systems can't. The most obvious example: one of the hardest things in financial crime and fraud prevention is ensuring that every single thing that somebody should look at was looked at.
[00:30:13] - [Speaker 1]
Because it's simply a reality that you might have a three person analyst team doing fraud prevention investigations, and then one day there's a TikTok trend that's like, here's how you commit fraud at institution X. I'm thinking of this example because it was something I was dealing with yesterday. This is the thing that every FI, especially in the US, deals with all the time.
[00:30:38] - [Speaker 1]
Chase had the money glitch, this big TikTok trend where it was like, oh, Chase ATMs will give you money if you do XYZ, which is really fraud. What they were instructing you to do was commit fraud, but, you know, it was portrayed as this money glitch. So let's say you are the next financial institution that has a TikTok trend, and all of a sudden there's alerts coming into your queue from everywhere. Well, everyone in the whole industry understands that what's gonna happen is you're gonna have to prioritize. You're gonna have to go say, okay.
[00:31:16] - [Speaker 1]
Prioritize by dollar amount. Prioritize by geography. Prioritize by whether we'll be able to recover the money. You're gonna have to prioritize, and it's probably the case that you won't even get to the stuff that you deprioritize. And it just is what it is.
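That kind of triage, ranking by dollar amount, geography, and recoverability, amounts to a scoring sort. A minimal sketch, with weights and field names invented purely for illustration:

```python
# Minimal alert-prioritization sketch: sort a surge of alerts so the
# highest-stakes ones get worked first. Fields and weights are invented.

def priority(alert: dict) -> float:
    score = alert["amount"]          # bigger dollar amounts rank higher
    if alert.get("high_risk_geo"):
        score *= 1.5                 # riskier geography bumps the rank
    if not alert.get("recoverable", True):
        score *= 2.0                 # unrecoverable funds are more urgent
    return score

def triage(alerts: list) -> list:
    return sorted(alerts, key=priority, reverse=True)

queue = triage([
    {"id": 1, "amount": 50.0},
    {"id": 2, "amount": 40.0, "recoverable": False},
    {"id": 3, "amount": 500.0, "high_risk_geo": True},
])
# Anything below some cutoff in this ordering may simply never get worked.
```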
[00:31:29] - [Speaker 1]
And if you do get to it, you're gonna get to it later, or hire an outsourced firm to go clean up all of those alerts for safekeeping or whatever. But you might not get to a huge amount of the work that you need to do, because you have to prioritize. Well, if you implement an agentic system, you have two things that are true. One is you can guarantee that all of that work, whatever the work was, it could be investigate all of these alerts, it could be send a notice, whatever it is,
[00:31:56] - [Speaker 1]
you can be sure that all of that work will get completed, and the prioritization of what you actually need to put human agents on can be done in a way that is flexible and iterable. You can go and say, hey, I need to tweak my agent quickly for this trend that I'm seeing, to go do the work all in parallel and summarize this one extra piece of information, which is, should this get prioritized for a human agent, or whatever the case is. That is a quality of an agentic system that is important. Because now, instead of saying we have the same system but with AI agents, which may or may not be appealing, it probably is appealing, maybe there's cost savings.
[00:32:36] - [Speaker 1]
Maybe there's not. Now we have a human and agentic system that is more scalable and more rigorous, because we are at least sure that all the grunt work that we want done in these investigations is done. The agent will do all the grunt work and prepare everything, and it'll always follow its standard operating procedure. You can be sure that happened, and you can be sure it happens every time. And we are getting to a more compliant, or ultimately better, outcome.
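The human-plus-agent split described here can be sketched as a loop where the agent works every alert per its standard operating procedure and records an audit trail, while only flagged items land in the human queue. This is a rough illustration under stated assumptions, not any vendor's real workflow engine, and every name is invented:

```python
# Sketch: every alert gets the agent's grunt work and an audit record;
# only alerts flagged by a tweakable rule go to the human queue.

def agent_investigate(alert: dict) -> dict:
    """Stand-in for the agent's SOP: gather context, prepare a summary."""
    return {**alert, "summary": f"checked alert {alert['id']}"}

def process_all(alerts, needs_human):
    worked_log, human_queue = [], []
    for alert in alerts:
        worked = agent_investigate(alert)   # guaranteed for every alert
        worked_log.append(worked)           # auditable and repeatable
        if needs_human(worked):
            human_queue.append(worked)      # humans see only the flagged few
    return worked_log, human_queue

alerts = [{"id": i, "amount": a} for i, a in enumerate([20, 900, 15])]
worked_log, human_queue = process_all(
    alerts, needs_human=lambda a: a["amount"] > 100
)
# Every alert appears in the audit log; only one reaches the human queue.
```

The `needs_human` rule is the part you would tweak quickly during a surge, without changing the guarantee that every alert was worked.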
[00:33:06] - [Speaker 1]
Financial crime, money laundering detection, and fraud prevention are filled with use cases where this is the quality we can get out of an agentic system: a better, more scalable, more rigorous outcome, potentially at lower cost or, what a lot of our clients want, less bandwidth constrained. So they can go run a marketing campaign without having to go to their operations teams and say, hey, go make three extra hires before we can run this marketing campaign. That's a tough, real constraint a lot of financial institutions have that you can start to unwind, and make a more durable, flexible system. So that's what gets us excited about agents and agentic systems in fraud prevention and compliance.
[00:33:58] - [Speaker 1]
And the last thing I'll say is this is a use case that is really exciting to financial institutions, fintechs, and banks. I'd say accelerating support ticket resolution and call center interactions seems to have been the first obvious use case that most banks were pursuing as LLMs got more mature. Hey, can we have our customers who are calling in have an even better experience, or can we deflect some of their queries using agentic systems? That was the first obvious thing.
[00:34:35] - [Speaker 1]
To me, this has become the second obvious thing: the scalability, rigor, and ability to manage the work and accelerate the processes in fraud and financial crime. That seems to be the second thing everyone's saying, yeah, we're going to adopt AI for that. And we were really excited to find that we were in a particularly unique position to do it, having already had 99% of what's needed to build an effective agentic system inside of Alloy, and we've seen rapid adoption as a result.
[00:35:08] - [Speaker 0]
And out there for consumers, I've also heard of a new type of agent that helps consumers complete a transaction for an item they're waiting on, watching for the price to drop and going in low at the last moment. This has got to introduce a whole new set of complexity for financial institutions as well. Do you think we'll eventually need a version of Alloy that detects good agents from bad? Do you think we'll ever get to that point too?
[00:35:33] - [Speaker 1]
I think the thing that a lot of folks are talking about is not here yet, and it's not actually a practical thing that's happening, as much as, if you were to go on LinkedIn, you would think
[00:35:48] - [Speaker 0]
Yeah. Yeah.
[00:35:49] - [Speaker 1]
You would think everybody just goes to ChatGPT and says, my coffee, please, and then ChatGPT has coffee delivered. Right? That's just what you would think is happening. And I understand why that is, because that's what people perceive as coming, and so therefore we're building toward that, etcetera, etcetera.
[00:36:08] - [Speaker 1]
I think what's true for sure is that, because that's not actually how people use agents today to complete transactions, everyone's sort of trying to reason around how you would create a system. You'll see things from Mastercard and Stripe; everybody's releasing a new protocol for agentic commerce. That's really the problem people are trying to solve: once money is actually getting exchanged, we have all the problems we already have with money getting exchanged, except now, instead of it being a person, it's a bot. And we've built everything in fraud detection to say bot bad.
[00:36:48] - [Speaker 1]
If not bot, maybe still bad, but we can figure it out. But bot bad. No bots. That is thing number one. And what everyone's trying to unwind is, now: bot, maybe not bad.
[00:37:02] - [Speaker 1]
What do we do now? But it's not actually a practical thing that's really happening; it's mostly at the experimentation and protocol level, not at the consumer adoption level. So I'll make a prediction myself, which is that I think the only way we end up figuring out how agentic commerce is gonna work is through some emergent behavior, not a top down designed behavior. What I mean is, there's gonna have to be a way. All of these plans are coming out from folks who deliver agentic commerce protocol one and agentic commerce protocol two, all of these ways of identifying a bot so that it can complete a commerce transaction or whatever.
[00:37:48] - [Speaker 1]
They're coming before we know how people actually want to move money on the Internet using LLM based agents. Somebody's gonna have to find a way to make that happen without requiring a new protocol to get written and implemented, because it's just not gonna happen that everybody goes and implements these protocols and they work and whatever. So how would that happen? Well, an example of how that might happen is something like the Claude Bot type implementations, where somebody figures out a way for an LLM to sufficiently hack around. That's not specifically what I think it will be, but something where somebody figures out a way for an LLM to hack around the constraints of a system such that it truly can look, to people, like it is a human doing human things.
[00:38:42] - [Speaker 1]
And then, of course, we'll all rush to block that behavior and stop it, because, again, bot bad. But if it works for long enough, you'll see how people actually want to use these agentic systems to complete commerce. Basically, there's gonna have to be some moment where agentic systems that consumers use just work for commerce, and then we'll figure out how people actually wanna use this stuff. Right now, we're all just guessing. Because if you told me right now, you can give the best LLM based system the ability to go book a trip for you end to end and then tell you when it's done,
[00:39:17] - [Speaker 1]
I wouldn't do it. It's not a big enough pain point for me. I can use ChatGPT to do all my research. It's just not a big enough pain point for me.
[00:39:30] - [Speaker 1]
And I don't think it's a big enough pain point for almost anyone. So it's not quite at the point where you're seeing people en masse trying to do this stuff, where you go, that's how they wanna do it. That's what it would look like. You know?
[00:39:43] - [Speaker 1]
Yeah. So I think there's gonna have to be some sort of emergent behavior where we all go, uh-huh, that's how they wanna do it. And the reason I know this is the case is, one, it's pretty much always the case. And two, that's how commerce happened on the Internet.
[00:39:56] - [Speaker 1]
Nobody went and said, here's how we're gonna do commerce on the Internet. People just started doing commerce on the Internet, in the least safe way possible. In the nineties and early two thousands, you were just typing your credit card information into an unencrypted website, and then, I believe I have this right, they were calling it in as if it was a phoned in credit card, because you used to do this all the time. You would call a restaurant, and they'd say, what's your credit card?
[00:40:21] - [Speaker 1]
It still happens. What's your credit card number? You read it to them. That was how commerce actually happened on the Internet initially: you typed in that information, and then it went to a person who phoned it in as if it was a card not present transaction. That was the early days, and people did it.
[00:40:35] - [Speaker 1]
Not everyone did it, but people did it. So you said, okay, clearly what people wanna do is type their credit card in on a website and then buy stuff. Now we can go build the infrastructure to make that work, to make it encrypted and tokenized and safe, and we'll build little UIs that help people understand when they're safe versus not safe. And we'll build HTTPS, and we'll train people to know that that's encrypted and anything on HTTP is not encrypted. There were all these things that had to happen.
[00:41:06] - [Speaker 1]
But we knew what we were reverse engineering towards, because we saw people doing it. You could tell that people wanted to type their credit card onto the Internet and buy things and have them delivered. So we weren't floating around in the dark trying to figure out what we wanna do. I think agentic commerce isn't at that point. I just don't see people saying, look.
[00:41:29] - [Speaker 1]
Look at all these people doing agentic commerce some hacky way. They're trying so hard to make it work. They want it so badly. And something that I think is true about all breakthrough innovations is somebody's gotta be desperate. Somebody has to want to do something really badly.
[00:41:46] - [Speaker 1]
Somebody somewhere has gotta really, really wanna do something, with the persistence that it requires for a real breakthrough innovation to come to life. And I'm just not sure that I know what use case there is where agentic commerce, where the transacting of money between agents at some level, is so urgent that it's gotta happen. So I predict it will be something emergent. And, last part of this prediction, it'll be weird. It'll be very strange.
[00:42:17] - [Speaker 1]
We'll all consider it weird. You'll know we're on the precipice of agentic commerce when you hear about something that people are doing consistently. Not every person in the world, but some group of people doing it consistently, and it looks like agentic commerce.
[00:42:33] - [Speaker 1]
Nobody designed it for them. They just figured it out, and it's super strange. It's a very weird thing that they're doing. Then I'm gonna go, okay. Woah.
[00:42:44] - [Speaker 1]
That's gonna be the future, because the future always starts weird. It always starts emergent, and it always starts where you didn't expect it. So I doubt it will be agents booking travel for us, because that was the first idea we all had. So it can't be that. It's gotta be something else.
[00:43:01] - [Speaker 1]
That's my
[00:43:02] - [Speaker 0]
Yeah. I completely agree. I was reading an example a few days ago of a guy who was creating an agent. He was testing it, and it ended up costing him $33; it went off and bought some eggs without his permission. So it's like, okay.
[00:43:15] - [Speaker 0]
Just spent $33 on eggs. I didn't want that.
[00:43:18] - [Speaker 1]
Yeah. I thought the Claude Bot stuff was like that. That was the first thing. I mean, there are a lot of problems with OpenClaw and Claude Bot and Accelera, but that was the first time where I was like, okay, this is really strange. And this might be what the start of something feels like, because this is weird.
[00:43:38] - [Speaker 1]
Whereas so much of what we were doing with agents before that was just better search. Like, Google created all the technology that ultimately became LLMs
[00:43:48] - [Speaker 0]
Yeah.
[00:43:48] - [Speaker 1]
to build better search, and then it turned out to get commercialized first by OpenAI and so on. But Google created this to do search better. And the two use cases that basically really mattered for LLMs, up until even right now, were better search, with better memory and a better interface and all. But it's really search.
[00:44:16] - [Speaker 1]
Like, you go to ChatGPT and you do things that you had always wished you could have done in Google. It's just that Google showed you ads and didn't give you answers and all of this stuff. So now we do it in ChatGPT or Claude or whatever. So that's one. And then the acceleration of text based creative work, which is mostly coding, turned out to be the other, but that was an emergent thing.
[00:44:37] - [Speaker 1]
Like, when they launched ChatGPT, they didn't know it was gonna do code as well as it could. And you may remember, in the early days of ChatGPT, the Internet blew up unexpectedly with, oh my god, ChatGPT can do code. I don't even think they marketed that when they launched the app. It was just something that all of a sudden Twitter, or X or whatever, was just filled with: it can do code.
[00:45:03] - [Speaker 1]
This is crazy. I gave it code, and it did the code for me. I don't think it was building apps, but it was definitely reading code and rewriting it and reformatting it and translating it between PHP and Ruby. It could do all these things. Maybe they marketed that as one of their early use cases.
[00:45:19] - [Speaker 1]
Can't remember, but it definitely wasn't what they thought people were gonna use it for. And then all of a sudden, that was all you saw. It's these emergent things that, I think, basically have to be where we get the breakthroughs from, because our needs, what we want, what we actually can use, how it fits, it's all just weirder than you think it's gonna be. It's never as you planned.
[00:45:45] - [Speaker 0]
One of the things I love about listening to you today is your passion, enthusiasm, and excitement for the space. It's like you've just landed here. But, I mean, it was last year, I think, that you reached ten years with Alloy. So what is it that keeps you
[00:45:59] - [Speaker 1]
Oh my god.
[00:46:00] - [Speaker 0]
Motivated in this space?
[00:46:03] - [Speaker 1]
Well, I think one of the things that's been really nice about building Alloy, and I should know this, it could be today. This might be our eleventh anniversary, or maybe it's
[00:46:14] - [Speaker 0]
Wow.
[00:46:14] - [Speaker 1]
I think it might be Sunday. I think it's February 22. I should really know. But it's in the next few days.
[00:46:21] - [Speaker 1]
Maybe it's even today. Shame on me for not knowing. But it'll be our eleventh year here in a little while. That's more time than all of kindergarten through elementary school. I was at one school from kindergarten through eighth grade, so I've now been at Alloy longer than I was at the school I was at the longest in my entire life.
[00:46:44] - [Speaker 1]
That's kinda crazy. One of the things that's been nice about Alloy is that the work we do, and I can't think of a big exception to this, is genuinely helpful. I don't think anybody's arguing that we should just let people commit fraud and let people get scammed. There are people who argue that we should let people launder money and that we shouldn't sanction individuals.
[00:47:07] - [Speaker 1]
There are people who make that argument, but that's a small contingent. So in general, everyone is on the same page that we shouldn't have fraud and scams. Most people are on the same page that we shouldn't have money laundering, and that we should enforce the money laundering and sanctions laws we have. So the work that we do is constantly helpful. And the other thing is, it's constantly changing.
[00:47:32] - [Speaker 1]
I mean, there's been no quiet period. This part I don't always love day to day, but in aggregate it's what keeps you going. There's always some new thing. There's always some new challenge. And when ChatGPT came out, I knew immediately, well, I don't know what this is gonna be, but this business will not be the same five years from now, in some way, for sure.
[00:47:55] - [Speaker 1]
And the challenge over really the intervening three years was, in what way? What exactly is going to be the way in which it changes? We know now that at least one of the ways is this: there are all these workflow based systems, of which Alloy is a well known one for what we do, that go automated, automated, automated, machine learning, machine learning, machine learning. All of that's the same as it's been, just better and better and better. Then a human stop, then continue: automated, automated, machine learning, machine learning.
[00:48:27] - [Speaker 1]
And we know that in those workflow based systems, these human stop processes will be assisted by, or delegated to, agentic systems. That's a for sure thing that will definitely happen, and I think will mostly be good. What else will happen with machine learning, biometrics, AI? That's the thing about our space: all anybody talks about is generative AI, LLM based, and maybe you put GANs in there and some of this other stuff. Generative AI, that's all anybody talks about.
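The workflow shape Tommy verbalizes, automated and machine learning steps running straight through with human stops in between that can now be delegated to an agent, can be sketched abstractly. This is a rough illustration only; the step kinds and lambdas are invented, not Alloy's actual pipeline:

```python
# Sketch of the described workflow: automated and ML steps run as before;
# a step marked "human_stop" can be delegated to an agent when one is
# supplied, otherwise it waits on the human path. Names are illustrative.

def run_workflow(steps, item, agent=None):
    for kind, fn in steps:
        if kind == "human_stop" and agent is not None:
            item = agent(item)   # agent assists or replaces the pause
        else:
            item = fn(item)      # automated or ML step, same as always
    return item

steps = [
    ("auto", lambda x: {**x, "validated": True}),
    ("ml",   lambda x: {**x, "risk": 0.12}),
    ("human_stop", lambda x: {**x, "reviewed_by": "human"}),
]

result = run_workflow(
    steps, {"id": 7}, agent=lambda x: {**x, "reviewed_by": "agent"}
)
```

The same pipeline runs with or without an agent; only the human-stop step changes hands, which is the "delegated to agentic systems" point above.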
[00:49:02] - [Speaker 1]
But the advances in deep learning to power predictive machine learning and unsupervised machine learning have been incredibly interesting and exciting. So has the trend in our space of there being actual rigorous, usable, effective, government based digital IDs, where we don't just have to do a probabilistic thing to figure out if you are who you say you are. We can do a deterministic thing, because you can do the equivalent of putting your ID into the Idemia machine at the airport, online. That hasn't totally taken off yet, but it's definitely happening. And governments are issuing these things that are gonna become more usable, and Apple and Samsung are building ways to deploy them.
[00:49:44] - [Speaker 1]
That has nothing to do with AI, and it's incredibly interesting. And the regulatory landscape changes all the time, in ways that I mostly probably don't wanna comment on, but this has been a really interesting eighteen months for regulatory change, especially where I live. So this business, this industry, is constantly evolving and constantly changing. And one of the things that we talk about a lot with our customers is, why do you even buy Alloy? That is the reason you buy Alloy: because you just know for a fact.
[00:50:20] - [Speaker 1]
It's one of the hardest things for fraud teams to get across to their management, which is: yes, we just stopped the latest fraud attack. Yes, and also, we need to go make investments for the next fraud attack. And they're like, well, we're not having any fraud right now.
[00:50:33] - [Speaker 1]
Right, because we just went through it. Remember that fire drill we just went through that you thought was gonna sink the company? It's just gonna happen again. Right? Well, that's what you buy from Alloy: future proof fraud prevention.
[00:50:44] - [Speaker 1]
You know that when the next innovation comes, you're not gonna have to do a big gut reno of the whole system to go deploy it. Alloy will just make that work for you. That's really the value prop of Alloy: this future proof state where, whatever tool comes out next, we'll get it to you and help you deploy it, and you won't have to do anything to make that happen. Whether it's agents, or a new biometric tool, or digital IDs, or a consortium, whatever the case may be.
[00:51:11] - [Speaker 1]
That being the value we sell to our customers inherently means that we are having to constantly evolve. I think if you told me eleven years ago that I'd still be doing this, I would have said, well, then let's not do this at all, because if it's gonna take eleven years, I'm not up for that. So thank god I didn't know. But what keeps me going day to day, like, let's do this for another eleven years, is that the world is so much weirder and more different than it was eleven years ago that I know the next eleven years will also be, especially in this space, extremely weird and requiring of constant iteration, thought, and problem solving. And those are the things that I really enjoy doing.
[00:51:54] - [Speaker 1]
The last thing I would say is, I really enjoy the people I get to work with. At Alloy, we have a kind, thoughtful, incredibly smart group, and it's great to be able to go to an office every day and be around those people. So the problems we get to solve being incredibly dynamic and good for the world, and the people I get to work with being people I want to do good work with, keeps me going, and hopefully will keep me going for another eleven years or more on this journey.
[00:52:25] - [Speaker 0]
I'm sure it will, and I think that's a powerful moment to end on. But before I do let you go: for anyone interested in exploring anything we talked about today, whether it be the report, the AI agent, or anything at all that you're doing at Alloy, if people wanna dig a little bit deeper, where should they go?
[00:52:44] - [Speaker 1]
Well, I think our website, alloy.com, is a great place to find all of that content. And then if you ever wanna give a shout or start a conversation with us, you can find me on LinkedIn. I'm always happy to talk. We post a lot there and have some good conversations.
[00:52:58] - [Speaker 1]
So find me on LinkedIn, happy to talk, and the fraud report is on our website at alloy.com. Thanks so much for having me on.
[00:53:10] - [Speaker 1]
I really enjoyed this conversation. It's been great chatting with you, Neil.
[00:53:14] - [Speaker 0]
No, I really enjoyed it too. I think at some point in the future we need to have a game of FIFA as well, and maybe it could be Wolves against Derby. But more than that, I'll add links to everything, and I'd ask everyone to check that out.
[00:53:26] - [Speaker 0]
But, more than anything, thank you for sharing your story with me today. Really appreciate it.
[00:53:30] - [Speaker 1]
Thanks, Neil.
[00:53:31] - [Speaker 0]
A big thank you to Tommy. I really enjoyed this one, especially his clarity on the difference between fraud and scams and why AI panic headlines can hide the real operational gaps. Anyone listening, I do recommend that you check out the fraud report that we mentioned at the beginning of the podcast there. There'll be a link in the show notes for that and also links to Alloy's AI assistant and everything else we talked about. They'll all be there including a link to Tommy.
[00:53:59] - [Speaker 0]
So a big thank you to him for bringing his enthusiasm, passion, excitement, and dedication to this space. Here's to another eleven years. Hopefully, I'll get to grab a beer with him in Manhattan at some point. But more than anything, thank you for listening, and I'll see you all again tomorrow morning. Bye for now.

