In today's episode of Tech Talks Daily, I reconnect with Abnormal AI's CIO, Mike Britton, to explore one of the most pressing topics in the tech world—AI regulation and cybersecurity.
Speaking from his base near Dallas, Mike brings a pragmatic perspective shaped by decades of experience at the intersection of enterprise technology and security. As the debate around artificial intelligence evolves, we examine the growing divide between the United States and Europe on regulatory approaches and what business leaders can learn from each side.
While Europe takes a more cohesive, application-based approach, the US remains fragmented, relying on state-by-state policies and sector-specific laws. Mike unpacks why this patchwork complicates global alignment and what an effective risk-based, standardized framework might look like. He argues that regulation must focus on how AI is applied, not just its scale, especially as the technology becomes embedded in everything from healthcare to email automation.
We also touch on the unintended consequences of overregulation, including the risk of pushing innovation into regions with fewer safeguards. As Abnormal Security works with some of the world's largest brands, Mike offers a frontline view into how threat actors are already leveraging AI to outpace traditional defenses. His insights reinforce the need for transparency, human oversight, and "kill switches" to ensure AI remains a tool for good, not a liability.
From real-world examples to strategic recommendations, Mike outlines what CIOs and CISOs need to know now. His advice is clear, grounded, and actionable, whether embracing regulatory sandboxes, staying alert to geopolitical nuances in AI models, or maintaining continuous learning in a fast-moving space.
So, how do we keep innovation ethical and secure in a world where AI is moving faster than ever? And what steps should technology leaders take to avoid falling behind or losing control of the tools meant to drive progress?
[00:00:04] What happens when the rapid evolution of artificial intelligence collides with the ever-changing demands of cybersecurity, with regulators scrambling to keep up? This is one of the many questions at the heart of today's episode, which was recorded against the backdrop of a global debate on AI oversight, risk and innovation. And joining me today is Mike Britton. He's the CIO of Abnormal AI.
[00:00:34] A company using AI in some of the most sophisticated ways that I've seen to protect major brands against everything from phishing to email-based threats. And with over six years' experience at the helm of a business built on behavioral AI, my guest has a front-row seat to the challenges and the opportunities that are emerging as governments, companies and attackers all race to harness this technology. So we're going to explore everything from the fragmented nature of AI regulation,
[00:01:04] where in some cases Europe seems more coordinated while the US grapples with a patchwork of different states' rules. Also, I want to talk about why context should always be king when it comes to AI oversight, and how we get that balance between innovation and regulation. So if you're wondering what responsible AI governance looks like in practice, or how to balance transparency with innovation,
[00:01:30] or what a more unified global regulatory model might look like, today's conversation is packed with all that insight and much more. So if you are a CIO, CISO, tech leader, business leader, just wondering how to separate AI substance from marketing noise, this episode is for you. But enough scene setting for me. Time to get Mike onto the podcast now. So a massive warm welcome to the show.
[00:01:57] Can you tell everyone listening a little about who you are and what you do? Sure. My name is Mike Britton, and I'm the CIO of Abnormal. And my day job is running all of our internal security, IT technology, and probably a few other random things that get put on my plate each day. Love it. And what's the story behind Abnormal and the name and everything there? Anything you could share around that? Yeah, absolutely. So Abnormal is about a six year old cybersecurity company.
[00:02:26] And our founders came from the world of ad tech and AI. I like to joke that we were AI before AI was cool. And what we do is we protect some of the world's largest brands and companies from advanced phishing emails, social engineering attacks, and anything that may be lurking in their inboxes through behavioral data science and AI. Fantastic. There's so much I want to talk with you about today, in particular, before you came on the podcast.
[00:02:53] I was reading that you believe the UK, and indeed Europe as a whole, is leading the charge when it comes to AI regulation. So, obviously I'm talking to you in Dallas, Texas today, or just outside of Dallas. What lessons do you think the US could, or should, be drawing from this approach in the UK and Europe? Yeah, it's always complicated. I feel like Europe has done a great job of leading the way in privacy, and now AI as well.
[00:03:21] I think part of that is it's a little bit different setup politically. The way the EU is structured, and even post-Brexit, the UK still, for the most part, tags along with the rest of the continent. But, you know, I feel like they've taken a very practical approach to AI. They're looking at it based on the application of AI, not just AI in general. I think sometimes people watch something on TV or in the movie theater and get spooked by AI.
[00:03:51] And there are definitely some spooky things that could happen with AI, and oftentimes they over-rotate on the pendulum. And I feel like Europe has taken a pretty balanced approach. You don't want to stifle innovation, but you do want to have some controls in place. And you do want to have those controls tailored to the application and use of AI. I think here in the US, it's a little different.
[00:04:14] We have big tech here, and big tech tends to lead the way, and we tend to operate on kind of a self-regulation principle. Or, if you go back to privacy, we also take a very industry-specific approach. So Europe with the GDPR, it's a blanket, uniform approach. It's very sensible, makes a lot of sense. And here you've got things like HIPAA for health care data.
[00:04:39] You've got GLBA for financial data, and it comes in pockets based on the type of data in the industry. And that leads to a patchwork. Then you have states like California, and now probably at least a dozen or more other states that have come up with their own privacy laws and regulations. We're starting to see that with AI as well. And what you end up with is 50-plus different laws and regulations that may be similar, may overlap, may conflict. And it makes it very difficult to do something at the federal level.
[00:05:09] And I think that's what we're going to see with AI as well. But hopefully, with the EU AI Act, I think that's a good foundation that the U.S. can borrow from and take that sensible approach. It's interesting you say that you don't want to prevent innovation. There's an old saying that the U.S. innovates, Europe regulates, and maybe China replicates. But do you think there is that real difference in approach?
[00:05:36] And maybe in Europe we do sometimes over-regulate and end up stifling innovation. It's quite a fine balance, isn't it? It is a fine balance. And I think that's where sensibility needs to reign here. Just because you can do something doesn't always mean you should do it. And that applies to both sides. That applies to innovation and that applies to regulation. And I think part of it is technology needs to really just be transparent.
[00:06:05] And if I'm a consumer and you're using my personal data for things, as long as I have an understanding and there's some sort of equal footing between the technology provider and the consumer, I think it's okay. I think you can lean a lot more towards innovation.
[00:06:24] I think where it becomes kind of, I don't want to say dirty or dishonest, but it's when you're using my data, you're making decisions, and I have no clue or concept of how that's being done, and I have no control over my data. I think that's the other side of it.
[00:06:42] And I think we've got to find some middle ground where people aren't just harvesting your data and doing what they want to do, and where you're not being denied things like financial services or healthcare without any understanding of the algorithms, or why you're being denied service, or why you're being treated differently. A hundred percent with you. You've got to find that middle ground. And I don't think there is a one-size-fits-all approach or silver bullet to everything.
[00:07:09] And interestingly, I read before you came on the podcast today that you've said that AI regulation needs to focus more on context than model size. So can you unpack why that distinction matters so much, especially from a security perspective? No, absolutely. And I think that's where things like ChatGPT and Claude and some of these large language models have driven a lot of the news cycle. And they can do some amazing things.
[00:07:36] But that doesn't necessarily mean there's something inherently bad based on size. A lot of it depends on how you're using the model. I can have an extremely small model that's used for very disingenuous or nefarious reasons. So size isn't the only determining factor. It's also what the model's used for. How is it used? Where's the human in the loop?
[00:07:57] When I go back to what Abnormal does in our product and our services, we strive to be very transparent with the security organizations that use our product. We also have a human in the loop. So anybody in the security organization that manages Abnormal for our customers has the ability to override the AI. We feel like those are important factors in helping make it safe, and in taking some of the mystery and some of the concern away from a security perspective.
[00:08:27] And I'm curious, do you think regulatory sandboxing might strike a balance between encouraging innovation and safeguarding against any unintended consequences? And if you do, how do you see that playing out? How should business leaders approach this? Yeah, I think it's important to be able to fail fast and to test and really see what the limits are.
[00:08:50] Once again, when you're doing that, I think it's important to make sure you have safeguards around it, that you have controls and monitoring, and that you ultimately go back and look at the results that are being driven.
[00:09:04] Make sure that if there are any biases happening as a result of the algorithms, or if the AI is behaving in unintended ways, you lean in and understand exactly what's going on as you're developing new AI systems. We're talking a lot at the moment about rising global conflict, almost chaotic in some areas.
[00:09:28] We've got obviously Brexit here in the UK, we've got Europe, and the different approaches across the US as well. So in your view, what risks do you think countries or regions face when AI regulations are so deeply fragmented or misaligned across borders? Yeah, I think part of the problem, and we've seen this, is, you look at DeepSeek. It made a lot of news.
[00:09:53] It made a lot of news because of how cheap it was. It also started spurring people to look at, hey, maybe I don't need to use OpenAI or Anthropic or Nvidia or these other, I'd say, main-stage technology companies, maybe I can go use DeepSeek. But then you start digging in and understanding, well, maybe it's not everything it's cracked up to be. And all of a sudden, there are also stories that they've copied a lot of what ChatGPT does.
[00:10:21] And if anyone's played around with it, it's still controlled by the Chinese government, so there are also some built-in biases. Go ask DeepSeek what it thinks of Taiwan, or phrase any question around the country of Taiwan, and it's going to spit back the official Chinese policy on Taiwan. Or ask it anything about China, and it's going to give you a very Chinese-government-focused type of answer. So clearly there's some bias built in there.
[00:10:51] There's some lack of transparency in how they're using it. So I think when you look at regulation, if you drive people to other solutions, you may be lacking the controls, the safeguards and the security there. Back to that balance, you want to make sure you're not driving regulation so deep that you send innovation to other places where the safeguards won't be there.
[00:11:16] And with the US often slower to act on tech regulation, often with a stronger focus on innovation and pushing the tech forward instead, is there anything at stake, do you think, if it doesn't move to align more closely with emerging UK and EU standards? And I understand just asking that is such a complicated question, because on the flip side, I'm hearing talk here in Europe that, hey, if we overregulate, we could end up falling behind in the AI race, for example. It's such a complex issue, isn't it?
[00:11:46] Yes. I feel like every day the world is getting more complex. When you think the world was crazy a year ago, you fast forward to today and it feels even crazier. And I think that's where you've got to be careful. And without getting into politics, you've got things like tariff wars and other things going on. You've got regulations. And so I think the world in general is better if the technology innovation from the US is making its way over to mainland Europe.
[00:12:12] But I also think US technology companies also need to understand that it's a different marketplace and you've got to strike that balance. And so I think it's important. I've often said, if you really, at the end of the day, if you take away the iPhone and you take away the Android and you take away the Internet and a lot of the things that are coming out of the US, that's not a good place for Europe. That's not a good place for European citizens. That's not a good place for consumers. And so it's all about striking that balance.
[00:12:41] And if we want to police ourselves and we want to self-regulate, then we need to do a good job of it. Otherwise, we need regulation, and I think that's what you've seen with privacy. I keep going back to privacy because, at the end of the day, GDPR was a big boogeyman, but GDPR actually made a lot of sense in many ways. I will argue it's also created a lot of annoying things like pop-ups on websites. And I'd argue that nobody goes and reads those. They just click through.
[00:13:07] So I would question what the purpose of those is, other than compliance with regulation. But I think in general, GDPR was a good framework, and you've seen it copied in California and in other privacy regulations across the globe. So obviously, I think Europe got it right in many aspects of that. And I think they're doing the same thing here. And honestly, I feel like the EU AI Act is a little less onerous than GDPR was. Or maybe we've just all been conditioned to it now.
[00:13:37] But I do feel like it's trying to strike that balance. I don't want unregulated, uncontrolled AI making healthcare decisions or financial decisions or controlling airplanes without some level of control and moderation and transparency. But that doesn't mean that me using AI to help write better emails necessarily needs the same level of rigor and requirements behind it. So I think it's back to – and I feel like a broken record –
[00:14:05] but it's just back to maintaining that balance between how and why something's being used versus protecting the people that may potentially be impacted by it. Yeah, I completely agree. And kudos for highlighting one of my biggest bugbears there, the GDPR situation. In many ways, it has just added multiple clicks, or messages saying you're not authorized to see this content in your country, which just creates a frustrating experience rather than a better one, I would argue.
[00:14:35] But let's spare a thought for a moment for all the CIOs out there listening that are trying to navigate this complex area and also create that right balance that we're talking about today. So for that CIO that could be listening, what kind of frameworks or controls would you like to see being built into future AI policies to support enterprise adoption rather than frighten enterprises off?
[00:15:01] Yeah, I'll go with ISO first, because this is something we're pursuing as well. I think ISO does it well in most situations. We've been an ISO 27001 shop for quite a while. We've been an ISO 27701 shop. And now we're actually undergoing the ISO 42001 AI governance certification as well. So ISO is good. It's a worldwide standard. It's well regarded.
[00:15:26] But I think for any CIO or any technology person looking to put a framework in place, first and foremost, it has to be risk-based. You need to be able to categorize the use of AI within your organization as low, medium, or high risk. And obviously, the lower the risk of what it's doing, the fewer the controls and the less governance; the higher the risk, obviously, the more.
[00:15:52] Understanding your data controls, understanding data lineage, understanding audit trails, preserving access controls and things like that, so that you don't get data spillage into other areas, or someone who shouldn't see particular data being able to see it. And then really just making sure you understand the model and output validation, making sure it's producing exactly what you intended it to produce. Those are really the key things.
[00:16:20] And then finally, and I mentioned this earlier, it's having kill switches, having humans in the loop, having the ability to override AI when AI does get it wrong. AI is amazing. It does so many just truly remarkable things and it can really transform your business. But that doesn't mean it's perfect. Is it better today than it was a year ago? Absolutely, but it's still not perfect.
[00:16:41] And so having the ability for a human to override it, having that balance of transparency. Obviously, for a lot of solutions, how they do things is a little bit of the secret sauce, so you don't want to lose intellectual property, but having a level of transparency into how and why the AI system is making decisions is also important.
[00:17:05] And at Abnormal Security, you guys work at the intersection of AI and cybersecurity, and you must be having so many different conversations with clients around the world and monitoring some of the big tech trends out there. So I'm curious, what real-world trends are you seeing that reinforce the urgency of smarter AI governance?
[00:17:28] Yeah, security has always been a cat and mouse game where oftentimes it feels like the attacker has an advantage because they don't have to follow ethical standards. They're not operating with morality and they just have to be right one time, whereas as the defender, you have to be right 100% of the time.
[00:17:49] And you look at all these cool consumer uses of AI. You look at things like ChatGPT, the ability to write flawless emails, the ability to go out and analyze and contextualize data at machine speed. And all of those tools and technologies are in the hands of attackers.
[00:18:11] And, you know, I joke about the difference between the marketing person in a company using an AI tool to write better emails, to get potential business prospects to click on them, open them up and respond, versus the cybercriminal sending out business email compromise emails. The only difference is their ethics. Their end goal is essentially the same. They want engagement. They want people to reply.
[00:18:38] Just one's operating as a legitimate business and the other is operating to steal or create havoc. And part of that is we've got to be able to equip security teams with the necessary AI capabilities to fight fire with fire, fight the bad AI with good AI. And it's hard because if you go to the big trade shows like RSA or Black Hat, every single vendor is now claiming to be some level of AI vendor.
[00:19:07] So that makes it difficult for CIOs and CISOs to really sort out fact and fiction. Is this solution really leveraging AI? Is this solution really going to use AI to add a significant return on investment? Or is it window dressing to keep up with the Gartner hype?
[00:19:27] And if we were to look ahead into my virtual crystal ball here, if you could manifest the ideal global model for AI regulation, let's put some positivity back out there. What would that model look like to you? And who needs to take the first steps to make it happen? Because hopefully between us, we can put that ideal model into the universe and make something happen today. But what would it look like and what needs to happen? Yeah, I think ideally it needs to be based on the risk of the AI system.
[00:19:57] I'm a big believer in ISO as kind of an ideal model for controls and a control framework. And really any regulation should be pushing to some sort of standardized framework. If you patch together a bunch of arbitrary rules and requirements, it makes it more difficult. If you attach it to some sort of framework that people can get behind that is less descriptive, more prescriptive, I think that's a smarter way to do it.
[00:20:26] I think you've just got to balance the rigor. Can a large company like Google, can they afford all of the overhead of bureaucratic controls and the extra work that's required? Sure. But can a young startup that may have a truly innovative use for AI, can they necessarily take on that same level of rigor that may be required in some of these regulations? I think that's where we've just got to balance.
[00:20:51] Because the reality of it is, you end up with the larger companies, which oftentimes tend to be the biggest violators when it comes to personal data and its use. They tend to be the ones with the biggest pocketbooks that can adhere to the regulations, while the smaller players, who may truly have game-changing innovation and game-changing uses of AI for good, often get swept under the rug because they can't compete with the bigger tech companies.
[00:21:20] When we're talking about building that ideal model and keeping up with the pace of technological change, I think it's incredibly easy to feel overwhelmed, to be in this state of continuous learning. So as somebody tasked with leading the way in this industry, where or how do you self-educate? How do you keep up with this pace of change? Any tips you could offer anyone listening?
[00:21:48] Yeah, I've been in security and technology for almost 30 years. And I think part of why I've stayed in this space for so long and what drew me to coming to a company like Abnormal is I'm an avid learner. I love to learn new things. I love to be very tactile. I love to get my hands on things. I love to play around with things.
[00:22:10] And I honestly feel like the best way to learn it and understand it is to use it. And there are great resources. I mean, when I started my career in the mid-90s, when the internet was just coming online, I didn't have this wealth of resources.
[00:22:29] Now, whether it's YouTube or Reddit or just a thousand and one different blogs out there, there are so many places to really see what people are using and how they're using it, and then to take that and make it your own and test it and play with it. And I continue to be amazed. Part of it is you just don't realize the power and use cases of some of these things unless you play around with them.
[00:22:57] And I think ultimately it's kind of like the non-malicious definition of a hacker. It's just someone that plays with things, figures out how it works, figures out what it's supposed to do and not supposed to do, and just has that intellectual curiosity to keep digging. And I think that's one thing that you can't necessarily teach somebody, but if you have that mindset, there's so many resources out there. I also find Twitter slash X is a great resource to find good use cases, to find what people are saying, what's going on.
[00:23:27] It really is. Moore's Law was kind of the de facto technology innovation formula. And I feel like over the last two years, Moore's Law has been thrown out the window. And now we're in the age of AI, and it's even a much more breakneck pace of innovation. And I think we last spoke – it was way back in 2022 – and you said at the very beginning of the podcast you were doing AI before it was cool.
[00:23:52] And in 2022, I think we were talking about cloud email security and your behavioral AI platform then. So anyone that missed that conversation, I urge them to go listen to that. I think it was episode 2254 for anyone that wants to check that out. And anyone who wants to think about it a little bit deeper or stay up to speed with the kind of work that you're doing here, where would you like to point everyone? Yeah, first of all, I'd love to point everyone straight to our website, abnormal.ai. I'm also happy to connect with people on LinkedIn.
[00:24:21] I'm easily found there. Or I can also be reached through email, just mike at abnormalai. Awesome. Well, I'll add links to everything there. And we covered a lot today, from why AI regulation should focus on context, to the advantages of regulatory sandboxing, to the importance of US alignment with UK and European standards. So many big takeaways. I'd love to hear what people listening think about everything today. And more than anything, thank you for starting the conversation today, Mike.
[00:24:50] And we mustn't leave it three years before talking again. But love catching up with you today. Absolutely. Thanks for having me, Neil. So a big thank you to Mike there for joining me today and sharing such a measured, thoughtful perspective on AI regulation and its intersection with cybersecurity. And I think what I appreciated most was his grounded approach, one that acknowledges the real risks while still championing innovation.
[00:25:16] And it's clear that we need smarter, more consistent regulation. But the key isn't just model size or data volume. I think it's context, it's application and the level of risk involved. And Mike made a compelling case today for why transparency and human oversight should be non-negotiables. And why a global risk-based framework should be the path forward if we want to stay ahead of both adversaries and economic competitors.
[00:25:46] And as AI continues to accelerate faster than Moore's Law ever did, the burden is on each and every one of us, whether we're in tech, regulation, business or leadership, to stay informed and be a part of shaping how this ultimately unfolds. So let me know your thoughts on anything we discussed today. Techblogwriteroutlook.com, LinkedIn, X, Instagram, just at Neil C. Hughes.
[00:26:12] And remember, if you're based in San Diego, Cisco Live heads your way in June. And if you're a listener to this podcast, feel free to slide into my DMs. I'd love to meet up with you when I'm on the road. But for now, what are your thoughts on the future of AI regulation? Are we striking the right balance between progress and protection? Let me know. I'd love to hear your views. But that is it for today. I'll be back again soon with another great guest. Thank you for listening today. Bye for now.

