What does it actually mean to prove who we are online in 2025, and why does it still feel so fragile?
In this episode of Tech Talks Daily, I sit down with Alex Laurie from Ping Identity to talk about why digital identity has reached a real moment of tension in the UK.
As more of our lives move online, from banking and healthcare to social platforms and government services, the gap between how identity should work and how it actually works keeps widening.

Alex shares why the UK now feels out of step with other regions when it comes to online identity schemes, and how heavy reliance on centralized models is slowing adoption while weakening public trust.
We spend time unpacking the practical consequences of today's verification systems. Age checks are regularly bypassed, fraud continues to grow, and users are often asked to hand over far more personal data than feels reasonable just to access everyday services.
At the same time, public pressure around online safety is rising fast. That creates an uncomfortable push and pull between tighter controls and the expectation of fast, low-friction access.
Alex makes the case that this tension exists because the underlying approach is flawed, and that proving something simple, like age, should never require revealing an entire digital identity.
From there, the conversation turns to decentralized identity and why it is gaining momentum globally. Instead of placing sensitive data into large centralized databases, decentralized models allow individuals to hold and present verified credentials on their own terms.
For me, this reframes digital identity as a right rather than a feature, and opens the door to systems that feel more privacy-aware, inclusive, and resilient. We also explore how agentic AI could play a role here, helping people manage, present, and protect their credentials intelligently without adding complexity or new risks.
With fresh consumer research from Ping Identity informing the discussion, this episode looks closely at where trust, privacy, and identity are heading next, and why the choices made now will shape how we prove who we are online for years to come. Are we finally ready to rethink digital identity, and if so, what does that mean for all of us?
Useful Links
Tech Talks Daily is sponsored by Denodo
[00:00:05] Welcome back to the Tech Talks Daily Podcast. Now, a quick question for you all. No matter where you're listening to me in the world, have you noticed how every conversation around digital identity seems to hit the same wall? Some people see digital ID cards as progress. Hey, if you've got nothing to hide, you've got nothing to fear. And I know me saying that comment out loud will trigger so many other people listening who see it as something far more sinister.
[00:00:35] And everyone else, I think they just feel lost in the noise. Much like the world of politics right now with the left versus right going at it and nobody being able to seemingly sit down and just have a sensible conversation.
[00:00:51] So what I wanted to do today was to imagine what it would look like if we were to pause the debate, strip it back, and simply understand where identity is heading in a world increasingly shaped by AI, fraud and shifting trust. And that is exactly what today's guest, Alex Laurie, is going to be doing.
[00:01:17] He spent years working at the intersection of trust, authentication and public expectations. And this year alone has shown how quickly attitudes can shift. For example, the UK's proposed BRIT card almost overnight has raised questions about transparency and cost.
[00:01:37] And at the same time, research suggests around three quarters of people don't fully trust organisations to manage their identity data, and large numbers would rather quit social media than risk having their identity stolen. But at the heart of everything, no matter which side of the fence you're on, public trust keeps slipping while the threat landscape continuously grows.
[00:01:59] And tension shapes every decision that is being made in this space. So I do think we're going to have to think about identity in a world where software is beginning to act for us. Because agentic AI and these swarms of agents that companies are working on mean that AI agents can make payments, access data and complete tasks without our human input.
[00:02:24] So this alone opens up new opportunities, but also, if we're honest, new attack paths and new expectations around verification. As I say on here so many times, the last best experience we have anywhere becomes the standard we expect everywhere. So what would that mean for verification and our expectations?
[00:02:49] And as identity becomes more than a way to check who we are, it almost becomes the accountability layer for both humans and machines. So Alex and I are going to explore what all this means without assuming that one model or one philosophy has the answer. Instead, we're simply going to look at what is happening, what is changing and what we might need to rethink.
[00:03:16] Because if trust is drifting, if verification is failing and if AI is starting to act on our behalf, the real question becomes incredibly simple. How do we build identity systems that people believe in? Now, before we begin today's interview, and there are some great insights in it, I just want to give a special mention to my friends at Denodo, who are passionate about the future of logical data management and agentic AI.
[00:03:44] Because everywhere you look, agentic AI is undoubtedly the next big shift. But here's the truth. It can't operate on messy, inconsistent or siloed information. Enter Denodo and their logical data management. Because with Denodo, you can create a unified, governed layer that connects data across your lakehouses, warehouses, apps and clouds instantly and without duplication.
[00:04:08] This means stronger AI governance, faster lakehouse acceleration and reusable data products that your teams can trust. So from CIOs to domain owners, everyone ultimately benefits. And in the words of Bruce Springsteen, nobody wins unless we all win. And with Denodo's ecosystem of partners, you're also able to scale even faster. So if you want AI that doesn't just automate but operates, start with logical data management at denodo.com.
[00:04:36] And now it's time for me to officially introduce you to today's guest. So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do? Hi. So my name's Alex Laurie. I'm the go-to-market CTO at Ping Identity, which really means that I'm out there with our customers and our partners,
[00:04:59] understanding their needs and then bringing the latest and greatest around identity and access management into the market. And, you know, I occasionally do interesting podcasts like yours, Neil. So it's great to be here. Thank you. Excellent. Well, thank you for joining me. Do you have your own podcast? Is that possibly a 2026 thing or not? We've been talking about doing a little bit of an expert's hour to cover specific topics.
[00:05:28] But it's more about finding the time to do it. Yeah, I know. I know exactly what you mean. And I'm curious. Obviously, we're recording this at the end of 2025. When you look back at this year's developments in all things digital identity, is there anything that stands out most about how public trust is shifting or has already shifted? And what does it mean for the UK's ambitions around national digital ID?
[00:05:54] And just saying those words out loud will cause massive division across the listenership. But what are you seeing here? You're absolutely right. It's a triggering conversation for many, many people on both sides of the divide. I think what we've seen this year, which is fascinating, is a huge increase in public awareness around the risks and the problems. We've seen very large, significant breaches.
[00:06:21] We've seen data leaked unintentionally, as well as proactive, highly organized activity by malicious actors. You combine that with the sort of fear, uncertainty and doubt that's caused by AI. And again, there's AI for good and AI for bad.
[00:06:42] And I think people are starting to realize the ability of malicious actors to use AI to rapidly accelerate some of the hard, heavy-lift activities that they might have done manually in the past. You combine that with the public awareness of the big breaches, the big hacks that have happened in the UK, in the US, across the world. I think people are more aware.
[00:07:07] And when it comes to that sort of British sensibility around digital identity, I think it's elevated the concerns that people have around the current initiatives. Yeah, I think if we put the political side of things to one side for a moment and just look at what we've experienced this year with the number of high-profile hacks. Jaguar Land Rover, which reportedly cost over a billion in lost revenue.
[00:07:36] M&S, just about every company we can think of, has been hacked at some point. It feels like the sensible option would be to be very careful before putting every citizen's data online behind a digital identity. It just seems a sensible approach. But, of course, the proposed Brit Card does raise questions around cost, clarity, and data handling. So, from your viewpoint here, because you do have a unique vantage point, how should governments be thinking about digital ID design?
[00:08:06] So it supports trust rather than undermining it from day one? Because it seems like it just happened overnight. Hey, we're going to do this, without any proper thought. Or at least that's what it might look like to many people listening. Yeah, so I'd probably come down as almost pro-government in this sense, but really pro-organisation, pro-doing things properly.
[00:08:28] You know, we've worked with governments across Europe around wallet schemes and around verifiable credentials, which are hugely important. The argument I'm going to make is, if you look at it from the perspective of what's the cost of not doing it properly, that far outweighs what we may spend as a country to deliver some form of capable, verifiable, and secure identity system.
[00:08:56] The arguments you get against it are the cost and the privacy risks. There's actually an active group in Europe trying to combat the implementation of digital wallets, and they believe there's a privacy concern. But actually, when we talk about how to do it properly, if you follow the standards of verifiable credentials and decentralized identity done well,
you use the industry standards, you follow the design patterns that are established, then what you actually end up with is something where the user, the end user, the citizen, has more control over what they share. So that's the really strong thing. The other thing that has to come with it is a huge amount of education. And when I think about my mom, my dad, family members who are not geeks like myself,
[00:09:53] who don't live and breathe this every day, normal people, some perhaps digitally disadvantaged, some young, who are not living this every day, then the education piece becomes hugely important. Activity like this, us having a conversation about it, becomes very important. It really does. And I've spent some time in Estonia over the last couple of years, and they seem to have this completely nailed.
[00:10:21] And when I was talking about the concerns that the British public have at the moment, they seemed almost baffled and bewildered, saying, well, why would you not want this? They seem to be full-on with it. And one person even said to me, do you know how many people in healthcare, whether it be doctors, nurses, or hospitals, have seen or accessed your data? You don't, whereas in their system, you know exactly who has looked at it.
[00:10:47] But I'm curious, if we digress for a moment. We are seeing a rise in AI-powered phishing, fake government sites, and identity scams. So how is this changing the threat picture? And what does it tell us about the weaknesses in current identity systems? Because there's a lot of fear about where we're going, or where we could go, but there's a lot wrong with what we have at the moment too, right?
[00:11:15] Yeah, I think what we can see is the adversary-in-the-middle type attack, which is what you just mentioned: you create a fake website, and it looks really good. AI can build that. We heard about the scandal in Hong Kong where someone created a fake CFO, a fake digital person, to have a conversation on a Zoom call. How do you know it's actually me on this call with you, not my AI avatar?
[00:11:42] So I think the ability of AI to create and deliver those adversary-in-the-middle attacks, the fake sites, the fake people, et cetera, means that what we have to do is think about things beyond that original sign-in moment. Many years ago, Gartner coined CARTA, the concept of continuous adaptive risk and trust assessment. And if you think about verification now, you have the authentication moment.
[00:12:12] I've signed into a service, but is there something before that authentication moment looking at risk? Is there something during that authentication moment that verifies I am who I say I am, doing the thing I should be doing? And then beyond that initial authentication moment, are we continuously verifying the user? If you think about some of the vectors these attacks have taken this year, the big-name attacks, they've actually gone in through the help desk,
[00:12:42] used social engineering to get a password reset. And once you're in with the password, you can move laterally within an organization. So they're coming in through that employee account reset, that account recovery process through the help desk. If you then get a username and password, you're in. So how do we continuously verify? How do we check? Decentralized identity, having the concept of a credential on your device that proves who you are,
[00:13:11] actually becomes a very powerful thing to use again and again, every time you have a high-value interaction with an organization. And if we look at the global political stage, it does feel that there's almost a global mistrust of politicians at the moment. And elsewhere, of course, I think we all know someone who's been the victim of a hack or a breach. And predictably, your research shows only a small portion of the public fully trusts an organization
[00:13:39] to manage their identity data. So I guess there are no surprises there. But what do you think's driving this erosion of trust? Because trust is what we need to make any future plan possible. So how can companies rebuild that confidence that has been worn down over time? I was reading recently that after a hack or a breach, for example, it can take up to five years to repair the reputational damage. So how can we rebuild this trust?
[00:14:08] You actually mentioned it earlier on, a point I really liked about Estonia: that concept of transparency. Yeah. So if you can read and understand and know what's happening to your data, to your activity, I think that really helps you as an individual have trust in what's going on. I liken it to a good dentist versus a bad dentist. A good dentist will tell you what's going on and keep talking to you as you go through the process, because you're stuck there with your mouth open and can't do anything.
[00:14:37] And technology is sometimes a black box, so you can't do that. So have that transparency, let people see what activity is happening. The other thing is, when you look at where identity data is stored and originated: in many cases, we actually trust our banks. We do trust our banks. We've been banking with them for a long time. So our banks, our telcos and the other services that we believe in and trust,
[00:15:07] which are actually very heavily legislated and regulated to protect our data, do we trust them potentially more than the government? And in that scenario, can we build our digital identity out of a combination of verified credentials that come from various suppliers, like banks? So I think there's something we can do there. I've been with my bank since I was 14. They really do know me very well.
[00:15:37] They have very good systems to adapt if they see behavior through my transactions that's not normal for me. So again, using that concept of various signals from various providers to build our digital identity presence into verifiable credentials is, I think, a really good way to build trust. You raise a great point there about trusting our banks. Even people who might disagree with that,
they've probably had the same bank account, like you said, since they were 14, 15, 16. But looking forward, decentralized identity promises a way to prove exactly what is needed without exposing everything else. So what is it that makes this model suitable for a world that desperately wants stronger safety, but at the same time is very reluctant to surrender any element of privacy? This is something I've talked about quite a lot, and I do really think
[00:16:35] it's something that we need to help everyone understand. Decentralized identity, in other words, means you carry the credential, the components of your identity, in your mobile wallet, but then you choose what you share. A really good example is when I go to a hotel and they take a photocopy of my passport. I'm not very happy about that, right? Instead, I can just tap and prove that this is Alex Laurie.
[00:17:05] Or I'm in the US and I want to go to a bar, and they look at this fifty-something-year-old and say, I need to prove your age. You can prove that you're 18 or 21 without having to share your actual date of birth. And from a safeguarding perspective, that's really important when it comes to young adults who may want to go to a nightclub and may currently have to show a driving licence with their address on it.
[00:17:34] So choosing what you share, how you share it and with whom becomes the real power of decentralized identity and carrying your credentials with you. And you can have all sorts of credentials, including temporal ones, say, a time-limited digital credential to use a vehicle. So there's a real power in that for me. And also this year, just changing the subject slightly, everyone's been talking about agentic AI.
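The selective-disclosure idea Alex describes here can be sketched in code. This is a toy illustration only, not a real verifiable-credentials implementation (production systems follow standards such as the W3C Verifiable Credentials data model and SD-JWT, and use digital signatures rather than the shared-secret HMAC used below as a stand-in): the issuer commits to each attribute with a salted hash, and the holder reveals only the "over 18" claim, which a verifier can check without ever seeing the name or date of birth.

```python
import hashlib
import hmac
import json
import os

ISSUER_KEY = os.urandom(32)  # stand-in for the issuer's signing key

def issue_credential(attributes: dict) -> dict:
    """Issuer: commit to each attribute with a salted hash, then 'sign' the
    commitments. A real scheme (e.g. SD-JWT) uses a digital signature here."""
    disclosures = {
        name: {"salt": os.urandom(16).hex(), "value": value}
        for name, value in attributes.items()
    }
    commitments = {
        name: hashlib.sha256((d["salt"] + str(d["value"])).encode()).hexdigest()
        for name, d in disclosures.items()
    }
    signature = hmac.new(ISSUER_KEY, json.dumps(commitments, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()
    return {"commitments": commitments, "signature": signature, "disclosures": disclosures}

def present(credential: dict, reveal: list) -> dict:
    """Holder: pass on all signed commitments but disclose only chosen attributes."""
    return {
        "commitments": credential["commitments"],
        "signature": credential["signature"],
        "disclosed": {k: credential["disclosures"][k] for k in reveal},
    }

def verify(presentation: dict) -> dict:
    """Verifier: check the issuer's signature over the commitments, then check each
    disclosed attribute against its commitment. Undisclosed attributes stay hidden."""
    expected = hmac.new(ISSUER_KEY, json.dumps(presentation["commitments"], sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    assert hmac.compare_digest(expected, presentation["signature"]), "bad issuer signature"
    for name, d in presentation["disclosed"].items():
        digest = hashlib.sha256((d["salt"] + str(d["value"])).encode()).hexdigest()
        assert digest == presentation["commitments"][name], f"tampered attribute: {name}"
    return {k: d["value"] for k, d in presentation["disclosed"].items()}

# The bar only learns the over-18 claim; name and date of birth stay in the wallet.
cred = issue_credential({"name": "Alex Laurie", "dob": "1972-01-01", "over_18": True})
shown = present(cred, reveal=["over_18"])
print(verify(shown))  # {'over_18': True}
```

The salt per attribute is what stops a verifier brute-forcing the hidden values from their commitments; that detail carries over directly to real selective-disclosure schemes.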
[00:18:02] I was at AWS a couple of weeks ago, which was all about Amazon providing the services and the partnerships to help businesses create swarms of AI agents that act on our behalf. And it got me thinking: as AI agents begin to do things like make payments, access data, and act on behalf of users, possibly as early as next year in the mainstream, how should we be thinking about their identity as something distinct from the human who created or even deployed them?
[00:18:32] Because it's going to be something that keeps coming up next year and beyond, I would imagine. So in the industry, we have a concept of non-human identity. Traditionally that was used around privileged services, so service-to-service or machine-to-machine communication, and in some instances IoT as well. The idea is you have very constrained IoT devices that don't have much power, and less constrained
devices that have quite a lot of power. What we as an industry are really pushing forward is that you should treat an AI agent as a first-class identity, almost giving it the same treatment as a human. You have the concept of an identity. You have the concept of verifiable credentials. You even have the concept of scope, in other words, what this agent can and can't do, and of lifecycle management. So when I have an agent that I use to do my shopping,
[00:19:31] it gets registered on the shopping website, on the vendor's website, and it's tied back to me as a human being. So you then have the concepts of relationship identity and relationship management. These are all very strong identity patterns that we already have out there in the market, along with identity security assurance and protection. And so what we should do with our agents is treat them at that level.
[00:19:59] As a full, first-class identity: governed, managed, and very much constrained with least-privileged access. And of course we often talk about AI misuse in broad terms, but the idea of an agent in the middle almost introduces a new type of risk. Am I overreacting here? How real is that concern? And what kind of guardrails are needed to prevent misrepresentation and unauthorized action?
[00:20:26] Because everyone seems generally excited about these swarms of agents going out there, but should we be concerned? I think we should be concerned, and I don't want to be a scaremonger, but we have seen, I think a couple of weeks ago, an organized hack where someone used Anthropic's Claude to design, build and run some attacks. And if you have an overprivileged agent, they're sort of like
[00:20:57] unruly smart teenagers who can try and find their way around things. And we see stories in the press, and generally in the industry, where someone's deployed something with very good intent, and the agent, trying to work out how to do it, goes off and does things it really shouldn't be doing. So yes, we should be concerned, and we should really think about putting those patterns in place that lock it down and give it a very constrained scope.
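The least-privilege pattern Alex describes, treating each agent as a registered, scoped identity tied back to a human, can be sketched as a minimal agent registry. Everything below is illustrative; the class names, scope strings, and rules are invented for the sketch, not taken from any real product: each agent is registered before it can act, every action is checked against an explicit scope, and a sub-agent spawned for a subtask can never hold more privilege than its parent.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    agent_id: str
    principal: str              # the human this agent is tied back to
    scopes: frozenset           # explicit allow-list: everything else is denied
    parent: str | None = None   # set when an agent spawns a sub-agent

class AgentRegistry:
    def __init__(self):
        self.agents: dict[str, AgentIdentity] = {}

    def register(self, agent_id, principal, scopes, parent=None):
        """Lifecycle starts with registration; an unregistered agent can do nothing."""
        if parent is not None:
            parent_agent = self.agents[parent]
            # A sub-agent can never hold scopes its parent lacks (least privilege).
            if not set(scopes) <= parent_agent.scopes:
                raise PermissionError("sub-agent scope exceeds parent scope")
            principal = parent_agent.principal  # accountability chains to the human
        agent = AgentIdentity(agent_id, principal, frozenset(scopes), parent)
        self.agents[agent_id] = agent
        return agent

    def authorize(self, agent_id, action):
        """Check every action against the agent's scope before executing it."""
        agent = self.agents.get(agent_id)
        return agent is not None and action in agent.scopes

registry = AgentRegistry()
registry.register("shopper-1", principal="alex", scopes={"catalog:read", "order:create"})
print(registry.authorize("shopper-1", "order:create"))   # True
print(registry.authorize("shopper-1", "payment:refund")) # False: outside its scope

# A sub-agent spawned for a subtask inherits accountability and cannot escalate.
registry.register("price-checker", principal=None, scopes={"catalog:read"}, parent="shopper-1")
print(registry.authorize("price-checker", "order:create"))  # False
```

The parent check is the orchestration point Alex mentions next: when agents create agents, identity, scope, and accountability have to follow the chain rather than reset at each hop.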
[00:21:26] And as I said before, limit the access. Now, one of the things is you have the concept of an agent creating another agent to do a subtask, so that orchestration of agents needs to have the concepts of identity, security, access and governance in it too. And as you said, there is a lot of doom and gloom out there. And in the past, I've challenged listeners on this podcast to give me a list of positive films, positive futuristic movies,
[00:21:56] and where the technology almost creates a utopia rather than a dystopia. We always see the latter. So in a desperate bid to try and restore the balance in the universe: you've said that the future depends on trusting only what has been verified. So if we look into the future and take a perhaps idealistic look at how this may evolve, what would a mature identity framework look like, where both humans and AI agents can operate inside the same digital trust system?
[00:22:25] Give me a positive example to finish on a high note today. Yeah. So I think, as I said, the patterns are there. We talk about zero trust architectures, and that's actually a very positive thing. It means you have to establish trust at every moment during the life cycle of the agent, the human, whoever. And we also talk about the concept of continuous verification. So if you can get to a situation where
[00:22:54] every interaction is challenged. TCP/IP, for example, has a two-way challenge built into it as a concept. And if we can follow that pattern through and continuously make sure that Alex is Alex, Neil is Neil, that Neil's agent is Neil's agent. And every time something happens that hits a certain level of context or risk or transactional value,
[00:23:23] we go back and we verify. Then there's the behavioral side: we have behavioral biometrics for humans now. The term biometric may be the wrong one for agents, but certainly behavioral monitoring and a behavioral understanding of how they should be acting could be brought into this. So verification, behavior, and then context. I think we can establish a very strong framework for security. This is a highly engaged topic right now.
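The verification-behavior-context loop Alex describes can be sketched as a simple risk policy that runs on every interaction rather than only at sign-in. The signal names, weights, and thresholds below are invented purely for illustration; real continuous-verification systems weigh far richer signals and tune these values empirically.

```python
def risk_score(context: dict) -> int:
    """Toy scoring: each risky signal adds to the score. Signal names are invented."""
    score = 0
    if context.get("new_device"):
        score += 2
    if context.get("unusual_location"):
        score += 2
    if context.get("behavior_anomaly"):       # e.g. typing cadence, or an agent's call pattern
        score += 3
    if context.get("transaction_value", 0) > 1000:
        score += 2
    return score

def decide(context: dict) -> str:
    """Continuous verification: re-check on every interaction, not just at sign-in.
    Low risk passes silently; medium risk triggers a re-verification step;
    high risk blocks until a stronger credential is presented."""
    score = risk_score(context)
    if score >= 5:
        return "block"
    if score >= 2:
        return "step-up"   # e.g. re-present a wallet credential
    return "allow"

# A routine action passes; a high-value transfer from a new device steps up.
print(decide({"transaction_value": 20}))                         # allow
print(decide({"new_device": True, "transaction_value": 5000}))   # step-up
print(decide({"new_device": True, "unusual_location": True,
              "behavior_anomaly": True}))                        # block
```

The point of the sketch is the shape, not the numbers: verification, behavior, and context each feed the same decision, and the answer can differ for every single interaction.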
[00:23:52] Everyone has an opinion. For anyone who would like to carry on this conversation with you, find out more about your work, or connect with you or your team, where would you like to point everyone listening? For general public conversations, I tend to put things out on LinkedIn. So I'm Alex Laurie, all one word, on LinkedIn. And you can also find us online at pingidentity.com. Awesome. Well,
[00:24:22] I will post links to everything, and I would encourage people listening to check you out and carry on this conversation. I think so much progress can be made if we all just sit down, have a conversation about this stuff and learn from one another. So I urge people to do that. But more than anything, thank you for starting this conversation today and shining a light on it. Really appreciate your time. Well, thank you very much, Neil. It's been a great conversation. Wow. I think that was just a fascinating conversation with Alex, and
[00:24:49] it leaves me with plenty to think about, because digital identity is no longer some narrow technical topic. It touches how we bank, how we access public services, how we prove our age, how our children navigate the web. And soon, our AI assistants will interact on our behalf. And as Alex explained, people want safety, but they also want control. They want convenience, but they worry about overreach.
[00:25:19] They want their privacy protected. They enjoy AI tools, yet fear the fraud that those very same tools can create. It's almost a paradox, isn't it? And public trust keeps shifting and the systems meant to protect us are coming under pressure from every direction. And of course, none of this gets solved with any single policy or platform. It is much more complex than that. And that is what makes this moment so important.
[00:25:49] And conversations like this, without those knee-jerk reactions and firing insults at either side, the idea that identity now includes both humans and autonomous software, I think this is something we're going to be revisiting many, many times in the years ahead. Because if AI is going to complete tasks for us, then we need ways to verify what is acting, who it represents, and how those decisions are made.
[00:26:17] And that future is already forming. And the frameworks that we choose today could shape how safe people will feel tomorrow. So I'd love to hear what stood out for you. Do you see digital identity as an opportunity? A concern? Or something still too undefined to form an opinion on? And what do you think trust should look like in an AI-driven world?
[00:26:44] This is a much bigger topic than do you believe in digital ID cards or not and why. I think we need to dig a little bit deeper on this and not just talk about what we want and what we don't want, but what is the best way forward? What do you think is the best way forward? We can all have an opinion and complain about things, but what do you see as a better way forward? Please share your thoughts with me. Let's keep this conversation going. And while you marinate on that, I'm going to walk off into the sunset. I've lit the fuse.
[00:27:14] Now I'm going to walk off, but I will be back again tomorrow with another guest and a completely different topic. But as always, let me know your thoughts. Other than that, I will return to your inboxes tomorrow. Get me on LinkedIn at Neil C. Hughes. I'll speak with you all again very soon. Bye for now.

