In this episode of the Tech Talks Daily Podcast, we're exploring the complex, high-stakes world of AI governance, risk management, and data security with Jack Berkowitz, Chief Data Officer at Securiti AI. Jack brings over 30 years of experience in the data field, having worked with top Fortune 500 companies like ADP and Oracle. Today, he's leading innovation at Securiti AI, a company that has become synonymous with secure data and AI systems, and is recognized for its Data+AI Command Center.
The conversation begins with the critical challenges facing today's C-Suite. Executives are tasked with navigating the risks and opportunities presented by artificial intelligence while ensuring robust security frameworks are in place. Jack will discuss how CISOs and other executives can strike the right balance between leveraging AI for innovation and safeguarding against ever-evolving cyber threats like data poisoning and model theft. He will explain the steps that companies should take to secure their AI and data ecosystems, focusing on a framework that's not just reactive but proactive in identifying risks before they become costly problems.
Listeners will learn about proven strategies for implementing AI governance, as Jack shares real-world examples from his discussions with clients. He'll also provide insights into how Securiti AI's platform is giving companies visibility into their data flows and equipping security teams with tools to manage compliance, privacy, and security across systems like SharePoint and Office 365.
Are traditional security frameworks adequate in this fast-paced AI era? What can CISOs do to ensure their AI systems remain transparent, trusted, and secure, all while complying with an increasingly complex web of regulations? Join us as Jack Berkowitz answers these questions.
[00:00:04] [SPEAKER_00]: How can C-suite executives effectively balance the push for innovation with the critical
[00:00:10] [SPEAKER_00]: need for security and compliance?
[00:00:13] [SPEAKER_00]: Especially when you throw in the fact we're in an AI driven world right now.
[00:00:18] [SPEAKER_00]: Well today Jack Berkowitz, Chief Data Officer at Securiti AI is going to explore this complex
[00:00:25] [SPEAKER_00]: question and challenge with us today.
[00:00:28] [SPEAKER_00]: Jack brings with him over 30 years of experience guiding Fortune 500 companies through the
[00:00:33] [SPEAKER_00]: intricacies of AI governance and data security, and with data transforming industries and
[00:00:39] [SPEAKER_00]: regulations tightening, how can organisations stay ahead while ensuring their data is secure
[00:00:46] [SPEAKER_00]: and compliant?
[00:00:48] [SPEAKER_00]: Well today we're going to dive into the proven strategies for AI risk management, the role
[00:00:54] [SPEAKER_00]: of Securiti AI's Data plus AI Command Centre, and how transparency can build trust in AI
[00:01:03] [SPEAKER_00]: systems.
[00:01:04] [SPEAKER_00]: Are you or your business thinking of launching a podcast in 2025?
[00:01:09] [SPEAKER_00]: Well dreams are about to become a reality because I have three amazing options that will guarantee
[00:01:15] [SPEAKER_00]: that you can make that happen.
[00:01:17] [SPEAKER_00]: Option one, you record and I handle the rest.
[00:01:20] [SPEAKER_00]: All you need is 30 minutes to go and record an episode, I'll take care of editing, setting
[00:01:26] [SPEAKER_00]: up the RSS feed and publishing it across all major podcasting platforms.
[00:01:30] [SPEAKER_00]: For just $2,000 you can work alongside me, I'll have you up and running with a season
[00:01:36] [SPEAKER_00]: of 12 polished episodes ready to go live.
[00:01:39] [SPEAKER_00]: So if you or your business are interested in getting a podcast live and you want to
[00:01:43] [SPEAKER_00]: work directly with me, visit my website techblogwriter.co.uk, email me techblogwriter at outlook.com, send
[00:01:51] [SPEAKER_00]: me a DM on any social platform such as LinkedIn, X or Instagram, I'm just at Neil C Hughes
[00:01:57] [SPEAKER_00]: on every single platform.
[00:01:59] [SPEAKER_00]: Option two, if you're looking for extended end to end support, well I've partnered with
[00:02:04] [SPEAKER_00]: AIpodcast.ing, they provide everything from website management to promotional material
[00:02:12] [SPEAKER_00]: and help improving engagement and you can also get 25% off your first month with the
[00:02:18] [SPEAKER_00]: code NeilCHughes and let them handle all the heavy lifting for you.
[00:02:23] [SPEAKER_00]: But option three, if you want to do it all yourself, LipSyn, the podcast host that I
[00:02:28] [SPEAKER_00]: use for this show, they are offering up to two months of free hosting when you use the
[00:02:32] [SPEAKER_00]: promo code TBW.
[00:02:34] [SPEAKER_00]: For example, if you were to sign up on November the 1st, you can start 2025 strong and without
[00:02:40] [SPEAKER_00]: paying anything till January the 1st, 2025.
[00:02:44] [SPEAKER_00]: Whichever path you choose, I'm here to make sure that you start the new year with a podcast
[00:02:49] [SPEAKER_00]: ready to take the world by storm.
[00:02:51] [SPEAKER_00]: Let's do it together.
[00:02:52] [SPEAKER_00]: But enough from me, let's get the guest on now.
[00:02:55] [SPEAKER_00]: So a massive warm welcome to the show, Jack.
[00:02:58] [SPEAKER_00]: Can you tell everyone listening a little about who you are and what you do?
[00:03:03] [SPEAKER_01]: Sure.
[00:03:03] [SPEAKER_01]: My name is Jack Berkowitz.
[00:03:04] [SPEAKER_01]: I'm the chief data officer of a startup company, later stage startup company called Security.AI.
[00:03:11] [SPEAKER_01]: And we work with businesses that are trying to deploy data systems and AI systems in a
[00:03:19] [SPEAKER_01]: secure way.
[00:03:20] [SPEAKER_01]: My entire job is actually a little bit different.
[00:03:24] [SPEAKER_01]: I'm working with the C levels of Fortune 500 companies having been at the C level previously
[00:03:31] [SPEAKER_01]: in a couple of big companies.
[00:03:34] [SPEAKER_00]: Well, it's a pleasure to have you on the podcast.
[00:03:36] [SPEAKER_00]: I'm looking forward to find out more information about the kind of things that C-suite are
[00:03:40] [SPEAKER_00]: talking about because there's so much hype around AI this week alone that we're recording
[00:03:44] [SPEAKER_00]: this podcast.
[00:03:45] [SPEAKER_00]: It feels like, yes, there's guys like me and you that talk about it all day long, but it
[00:03:49] [SPEAKER_00]: feels like it's entered the mainstream this week with the big announcements with Apple
[00:03:53] [SPEAKER_00]: and Apple Intelligence.
[00:03:54] [SPEAKER_00]: There was the Oprah interview last night, which seems to be put in front of everybody right
[00:04:00] [SPEAKER_00]: now.
[00:04:00] [SPEAKER_00]: But with your extensive experience, how do you see the role of the C-suite evolving and
[00:04:06] [SPEAKER_00]: balancing and the opportunities of AI innovation with the growing complexities of security
[00:04:12] [SPEAKER_00]: and compliance?
[00:04:13] [SPEAKER_00]: It's a real mixed bag and there's so much going on, but what are you seeing here?
[00:04:17] [SPEAKER_01]: Well, it's really that balance, right?
[00:04:20] [SPEAKER_01]: When you're sitting there and you're talking about strategy, not strategy over six months,
[00:04:24] [SPEAKER_01]: but strategy over three to five years, every C-suite's looking at the business advantage
[00:04:30] [SPEAKER_01]: or the business disruption that these technologies are going to have.
[00:04:33] [SPEAKER_01]: If you look at just the reasoning stuff that OpenAI released yesterday, that disrupts huge
[00:04:42] [SPEAKER_01]: industries around SaaS applications or any sort of services type work or knowledge-based
[00:04:50] [SPEAKER_01]: work.
[00:04:50] [SPEAKER_01]: At the same time, everybody's worried about, well, wait a second, this is about intellectual
[00:04:56] [SPEAKER_01]: property, but it also may be about information about our employees, information about our
[00:05:01] [SPEAKER_01]: relationships with our customers, information about our earnings release and how do we make
[00:05:08] [SPEAKER_01]: sure that we can operate as a company or as an organization, respecting both the regulations,
[00:05:16] [SPEAKER_01]: but also just respecting the people we work with.
[00:05:20] [SPEAKER_01]: While at the same time, getting ourselves in a position to react to these pretty fundamental
[00:05:27] [SPEAKER_00]: changes.
[00:05:28] [SPEAKER_00]: Yeah, there's so much happening around this at the moment.
[00:05:30] [SPEAKER_00]: I think it was in February of this year, I spoke with your CEO, Rehan.
[00:05:35] [SPEAKER_00]: I think we were talking about securing enterprise data in this generative AI landscape that we
[00:05:42] [SPEAKER_00]: find ourselves.
[00:05:43] [SPEAKER_00]: But at SecureAI, you are leading efforts in data security and governance.
[00:05:48] [SPEAKER_00]: I remember from our last conversation, but can you just explain how data and AI command
[00:05:53] [SPEAKER_00]: center that you've got there, how that is helping organizations securely manage their AI and
[00:06:00] [SPEAKER_00]: their data?
[00:06:01] [SPEAKER_00]: And the reason I ask that, I think there's so much hype at the moment.
[00:06:03] [SPEAKER_00]: A lot of businesses looking about what they're going to do next, how they're going to do
[00:06:06] [SPEAKER_00]: this.
[00:06:06] [SPEAKER_00]: They need to do something, but knowing how to do it effectively, that's where things
[00:06:10] [SPEAKER_00]: often get a little bit complex.
[00:06:13] [SPEAKER_00]: So can you expand on that data plus AI command center that you've got there?
[00:06:17] [SPEAKER_01]: Yeah.
[00:06:18] [SPEAKER_01]: So I was chief data officer of a Fortune 500 company prior to joining it.
[00:06:24] [SPEAKER_01]: Our biggest issue wasn't as much knowing how to build an AI system.
[00:06:28] [SPEAKER_01]: I had a bunch of data scientists, we could go do that.
[00:06:31] [SPEAKER_01]: But it was really understanding all the data assets we had, where they happened to be,
[00:06:36] [SPEAKER_01]: how they were flowing through our systems.
[00:06:40] [SPEAKER_01]: And once we had that knowledge, well, wait a second, what about all this unstructured
[00:06:45] [SPEAKER_01]: data that the generative AI world is all about?
[00:06:48] [SPEAKER_01]: Where is it?
[00:06:49] [SPEAKER_01]: Have we been managing it?
[00:06:51] [SPEAKER_01]: Do we even know who's updated it and is it current?
[00:06:56] [SPEAKER_01]: And so the data command center idea is to provide a view of this information and then
[00:07:02] [SPEAKER_01]: put controls in place that can alert you.
[00:07:04] [SPEAKER_01]: So it can alert you if data has moved from the UK to the US inadvertently.
[00:07:10] [SPEAKER_01]: It can alert you if you violated a regulation.
[00:07:18] [SPEAKER_01]: It can alert you or it can actually help you take action to make sure...
[00:07:23] [SPEAKER_01]: One of the big things right now is around data access governance around SharePoint or
[00:07:28] [SPEAKER_01]: Office 365.
[00:07:30] [SPEAKER_01]: Why?
[00:07:30] [SPEAKER_01]: Because people want to use Copilot and it's great.
[00:07:33] [SPEAKER_01]: If everybody had a clean setup of all of their folders in Office 365, it would be great.
[00:07:41] [SPEAKER_01]: But as you know, people don't have clean setups.
[00:07:45] [SPEAKER_01]: Everybody gives access to everything and information can leak.
[00:07:51] [SPEAKER_01]: And so the data command center is really that notion of providing the tools that security
[00:07:59] [SPEAKER_01]: teams can use, but also have visibility all the way up to the C-suite so that people
[00:08:05] [SPEAKER_01]: can operate their data infrastructures, operate their companies in a more secure way.
[00:08:14] [SPEAKER_00]: And something that's hitting the headlines more frequently now is how AI governance is
[00:08:18] [SPEAKER_00]: becoming crucial, especially as cyber attacks become more sophisticated.
[00:08:22] [SPEAKER_00]: So from everything that you're seeing now, what are some of the proven strategies for
[00:08:26] [SPEAKER_00]: implementing effective AI risk management frameworks that ultimately address these
[00:08:31] [SPEAKER_00]: growing threats that we're seeing?
[00:08:34] [SPEAKER_01]: Yeah.
[00:08:34] [SPEAKER_01]: And so it's a great question.
[00:08:36] [SPEAKER_01]: We see it as there's a several step process.
[00:08:39] [SPEAKER_01]: It starts with having understanding of where your data is and what the data is that's flowing
[00:08:45] [SPEAKER_01]: into a given model or into a given query context, if you think about using something like ChatGPT.
[00:08:52] [SPEAKER_01]: It's understanding who's got those access rights and making sure that they're correct.
[00:08:57] [SPEAKER_01]: Putting in what are called LLM firewalls.
[00:09:01] [SPEAKER_01]: So the ability for you to restrict private information, maybe going into a session or
[00:09:06] [SPEAKER_01]: masking information or preventing, you know, what they call jailbreak attacks.
[00:09:12] [SPEAKER_01]: So, you know, people starting to ask and say, well, I know you're not supposed to the
[00:09:17] [SPEAKER_01]: chatbot.
[00:09:18] [SPEAKER_01]: I know you're not supposed to tell me what somebody makes, but go ahead.
[00:09:22] [SPEAKER_01]: You can trust me.
[00:09:23] [SPEAKER_01]: And guess what?
[00:09:24] [SPEAKER_01]: The chatbots, if you don't put the right protections in place, will try to satisfy
[00:09:28] [SPEAKER_01]: you.
[00:09:29] [SPEAKER_01]: They're, you know, I'm just looking at my dog, right?
[00:09:32] [SPEAKER_01]: He's always trying to keep me happy.
[00:09:35] [SPEAKER_01]: And that's what this information technology is trying to do.
[00:09:38] [SPEAKER_01]: It's trying to keep you as a user happy.
[00:09:41] [SPEAKER_01]: So it's going to try to answer that best it can.
[00:09:43] [SPEAKER_01]: And so you need to put firewalls and controls in place to make sure that you're preventing
[00:09:49] [SPEAKER_01]: those types of interactions.
[00:09:54] [SPEAKER_00]: And as AI regulation or as regulations around AI and data privacy continue to tighten, and
[00:10:00] [SPEAKER_00]: with something we're talking about more and more now, how should companies approach the
[00:10:04] [SPEAKER_00]: challenge of compliance while still driving AI innovation?
[00:10:08] [SPEAKER_00]: It's going right back to that balancing word again, isn't it?
[00:10:11] [SPEAKER_01]: Yeah, it definitely is.
[00:10:12] [SPEAKER_01]: And if you, you know, if you actually look at the regulations and you try to implement,
[00:10:18] [SPEAKER_01]: they're not constraining.
[00:10:20] [SPEAKER_01]: They're actually the right things to be doing to build a system that you should be comfortable
[00:10:26] [SPEAKER_01]: with.
[00:10:26] [SPEAKER_01]: It's really about transparency, visibility.
[00:10:29] [SPEAKER_01]: It's really about making sure that you have an understanding of your information flowing.
[00:10:35] [SPEAKER_01]: It's making sure that you're checking your models are actually not, you know, inappropriately
[00:10:40] [SPEAKER_01]: introducing bias or things like that.
[00:10:43] [SPEAKER_01]: So the regulations, if you do it correctly, if you lay down the right foundations, you're
[00:10:48] [SPEAKER_01]: actually creating a springboard for your business advantage by complying with the regulations.
[00:10:54] [SPEAKER_01]: It's not that you, you know, yeah, you're going to avoid fines, but actually you're
[00:10:59] [SPEAKER_01]: going to put yourself in a better situation to, you know, getting back to the original
[00:11:04] [SPEAKER_01]: thing what the C-suite wants to do, right?
[00:11:06] [SPEAKER_01]: Which is take advantage of this technology.
[00:11:10] [SPEAKER_01]: So my thing would be is, you know, go ahead and look at a system, whether it's from security,
[00:11:17] [SPEAKER_01]: where I work or some of the other companies and start to build out those capabilities
[00:11:24] [SPEAKER_01]: and also the people that you're going to need to operate and maintain these systems and
[00:11:29] [SPEAKER_01]: go ahead and do it in an aggressive way because it actually will get you in a good position
[00:11:35] [SPEAKER_01]: for what you're going to be in the future.
[00:11:38] [SPEAKER_00]: And after doing a little research on you, I know you've spoken with numerous clients
[00:11:42] [SPEAKER_00]: about advancing AI and data implementations, but are there any real world examples of how
[00:11:49] [SPEAKER_00]: companies are enhancing their security through strategic AI governance that you're able to
[00:11:54] [SPEAKER_00]: share?
[00:11:54] [SPEAKER_00]: I appreciate you probably can't mention any names, but just to bring to life what we're
[00:11:58] [SPEAKER_00]: talking about, are there any examples you can share?
[00:12:00] [SPEAKER_01]: Yeah, I mean, I can't share specifics on a customer.
[00:12:04] [SPEAKER_01]: Yeah.
[00:12:05] [SPEAKER_01]: But, you know, I've been working on this for most of my career.
[00:12:08] [SPEAKER_01]: You know, I was looking in the mirror this morning and the beard is actually gray.
[00:12:15] [SPEAKER_01]: And you see these examples where, you know, by putting in the right security measures,
[00:12:21] [SPEAKER_01]: you're able to build customer support agents that, you know, drastically reduce the time
[00:12:29] [SPEAKER_01]: it takes to support a caller or move it from, you know, pure voice interaction onto, you
[00:12:37] [SPEAKER_01]: know, a chatbot.
[00:12:39] [SPEAKER_01]: And you see then increases in customer satisfaction.
[00:12:44] [SPEAKER_01]: You see reductions in churn.
[00:12:46] [SPEAKER_01]: And why is that?
[00:12:47] [SPEAKER_01]: Well, people put in the right security.
[00:12:49] [SPEAKER_01]: So the companies are competent to provide that capability and the customer, because,
[00:12:55] [SPEAKER_01]: you know, maybe falsely, but normally through building up trust over time, trust that system
[00:13:04] [SPEAKER_01]: to be used.
[00:13:05] [SPEAKER_01]: And it's really about trust, right?
[00:13:07] [SPEAKER_01]: If the customer starts asking things and all of a sudden their national identifier, their
[00:13:14] [SPEAKER_01]: – I used – when I lived in the UK, I had my national insurance number, right?
[00:13:20] [SPEAKER_01]: Or here in the US, your social security number.
[00:13:22] [SPEAKER_01]: If those numbers start showing up, then consumers or customers aren't going to trust, right?
[00:13:28] [SPEAKER_01]: But by masking that stuff, making sure that it doesn't leak out, making sure it doesn't
[00:13:33] [SPEAKER_01]: show up in the chatbot itself, people will build that trust and will start to use it.
[00:13:37] [SPEAKER_01]: And you start to see that advantage.
[00:13:39] [SPEAKER_01]: And that saves companies, you know, millions of dollars in costs, but also, you know, think
[00:13:45] [SPEAKER_01]: about it, by retaining your customers and keeping them happy, you know, massive acceleration
[00:13:51] [SPEAKER_01]: to your business.
[00:13:53] [SPEAKER_00]: And cyber security threats like data poisoning and model theft are also on the rise, is it?
[00:13:59] [SPEAKER_00]: Any advice you might have for any CISOs that could be listening who are re-evaluating their
[00:14:04] [SPEAKER_00]: security frameworks right now and trying to counter these evolving threats?
[00:14:08] [SPEAKER_00]: Any advice for those people listening?
[00:14:10] [SPEAKER_01]: Yeah, definitely.
[00:14:12] [SPEAKER_01]: So these systems don't work in isolation.
[00:14:14] [SPEAKER_01]: You can't just say, well, I've got a couple of smart data scientists, so you're going
[00:14:19] [SPEAKER_01]: to go work on that in the corner.
[00:14:21] [SPEAKER_01]: You know, first of all, you know, we're all familiar in the security world about data
[00:14:27] [SPEAKER_01]: poisoning attacks, but also, you know, just software supply chain attacks.
[00:14:33] [SPEAKER_01]: These things can happen in these models, right?
[00:14:36] [SPEAKER_01]: And so you need to have a system that's looking at it end to end from the data origination,
[00:14:42] [SPEAKER_01]: whether you're training the model or just using it for inference.
[00:14:44] [SPEAKER_01]: And most people aren't going to be training models.
[00:14:46] [SPEAKER_01]: Most people are just going to be using existing models for inference.
[00:14:50] [SPEAKER_01]: But you need to understand those.
[00:14:52] [SPEAKER_01]: You need to understand your data flowing in.
[00:14:54] [SPEAKER_01]: You need to understand how the firewalls are set up.
[00:14:57] [SPEAKER_01]: You need to understand it, whether or not your team's building it or you're using it
[00:15:02] [SPEAKER_01]: in a SaaS application.
[00:15:05] [SPEAKER_01]: You know, something like you're using in Salesforce or you're using in Workday or whatever it happens
[00:15:09] [SPEAKER_01]: to be.
[00:15:10] [SPEAKER_01]: And so I think, you know, an awful lot of CISOs that I've spoken to at the C level,
[00:15:19] [SPEAKER_01]: they understand this.
[00:15:20] [SPEAKER_01]: But when you go a level or two down in the organization, people are, because they don't
[00:15:24] [SPEAKER_01]: know the technology, say, oh, well, we have a data science team doing that.
[00:15:28] [SPEAKER_01]: And they'll worry about it.
[00:15:29] [SPEAKER_01]: And I think you really need to motivate your team and say, no, this is an us problem.
[00:15:36] [SPEAKER_01]: This is a we problem.
[00:15:37] [SPEAKER_01]: And all of us need to be working on this together as a company or as an organization to make
[00:15:44] [SPEAKER_01]: sure that we're operating in the most secure ways.
[00:15:47] [SPEAKER_00]: And from the outside, looking at your career, you've helped deploy AI and ML systems across
[00:15:52] [SPEAKER_00]: so many Fortune 500 companies.
[00:15:55] [SPEAKER_00]: I've got to ask, how do you ensure that AI systems are both secure and effective at scale,
[00:16:01] [SPEAKER_00]: particularly in industries facing high and regulatory scrutiny?
[00:16:05] [SPEAKER_00]: Because I hear a lot about companies that are struggling with this at the moment.
[00:16:08] [SPEAKER_00]: And if you can share around this?
[00:16:11] [SPEAKER_01]: Yeah.
[00:16:11] [SPEAKER_01]: So I always start, and the reason why I'm so excited to be at security is I always start
[00:16:18] [SPEAKER_01]: with this idea that we need to have observability and measurement.
[00:16:22] [SPEAKER_01]: You know, even, I was just relaying yesterday, so I spent part of my career at Oracle getting,
[00:16:28] [SPEAKER_01]: helping to get that, their first AI and ML projects off the ground with customers in
[00:16:35] [SPEAKER_01]: their business applications, like in ERP and HCM.
[00:16:38] [SPEAKER_01]: The first day we went out, everything was measured.
[00:16:43] [SPEAKER_01]: Everything about data access, model drift, everything was being measured, the first customer
[00:16:50] [SPEAKER_01]: to touch it.
[00:16:51] [SPEAKER_01]: It wasn't an afterthought.
[00:16:53] [SPEAKER_01]: And so not only were we confident in the business results that we were providing to
[00:16:59] [SPEAKER_01]: our customers, but we were confident in the security and the data flows and the visibility
[00:17:05] [SPEAKER_01]: throughout that system.
[00:17:06] [SPEAKER_01]: It took us years to build that out.
[00:17:09] [SPEAKER_01]: Today, you can take advantage of all those learnings from companies like ours, where
[00:17:17] [SPEAKER_01]: all of that's available, and you can just implement it and get going very quickly.
[00:17:21] [SPEAKER_01]: But to me, it's really about understanding and measuring proactively and not reactively.
[00:17:29] [SPEAKER_01]: It's being proactive and taking a stance and moving forward aggressively as opposed to
[00:17:36] [SPEAKER_01]: just letting it build up.
[00:17:38] [SPEAKER_01]: It's engineering debt or technical debt.
[00:17:41] [SPEAKER_01]: Letting that build up will just never get you to the other side.
[00:17:46] [SPEAKER_01]: So really, I spend an awful lot of time asking this one question, whether I'm at a board
[00:17:52] [SPEAKER_01]: of directors of a Fortune 500, whether I'm with the C-suite or whether I'm down inside
[00:17:58] [SPEAKER_01]: working with day-to-day hands-on keyboard people.
[00:18:04] [SPEAKER_01]: What's the definition of good?
[00:18:06] [SPEAKER_01]: Let's just get down to the one thing.
[00:18:09] [SPEAKER_01]: What would you consider to be good?
[00:18:12] [SPEAKER_01]: Good in terms of your business, good in terms of security.
[00:18:14] [SPEAKER_01]: If we can get agreement on that, then everything else just falls in line.
[00:18:19] [SPEAKER_01]: And I think that sort of stepping back, getting to the whiteboard and having that discussion
[00:18:25] [SPEAKER_01]: is the most important way to have both a successful program and a secure program.
[00:18:31] [SPEAKER_00]: And if we dare to look ahead, any steps that organizations should be taking to operationalize
[00:18:37] [SPEAKER_00]: AI transparency, again, another word we're hearing a lot about, but also trust so we
[00:18:43] [SPEAKER_00]: can improve adoption and achieve those long-term business goals, especially in rapidly changing
[00:18:49] [SPEAKER_00]: digital landscape that we're seeing so many big changes right now.
[00:18:52] [SPEAKER_00]: And everyone's rushing forward with AI.
[00:18:54] [SPEAKER_00]: It is a big, exciting technology, but it's all about business value and achieving business
[00:18:59] [SPEAKER_00]: goals.
[00:18:59] [SPEAKER_00]: That's what we need to get back to.
[00:19:01] [SPEAKER_00]: So any advice here?
[00:19:04] [SPEAKER_00]: Yeah.
[00:19:04] [SPEAKER_01]: I think one of the things that's very practical item is write down your principles.
[00:19:12] [SPEAKER_01]: Write down how you intend to operate, how you intend to introduce AI and put it out
[00:19:21] [SPEAKER_01]: on your website or put it out in your materials so everybody can see it, whether it's your
[00:19:27] [SPEAKER_01]: employees, whether it's your customers, whether if you're regulated, it's your regulators.
[00:19:32] [SPEAKER_01]: Write these things down.
[00:19:33] [SPEAKER_01]: I was at ADP and in 2019, we wrote out our data and AI governance and security principles
[00:19:44] [SPEAKER_01]: and published that.
[00:19:45] [SPEAKER_01]: There were only five principles.
[00:19:47] [SPEAKER_01]: I think now it's seven principles.
[00:19:50] [SPEAKER_01]: We wrote it out and we said, hey, this is how we're going to operate as a company.
[00:19:55] [SPEAKER_01]: That transparency then drives a lot of decisions.
[00:19:58] [SPEAKER_01]: There's not discussions in the hallway or anything else.
[00:20:02] [SPEAKER_01]: You're seeing that level of transparency today even now with the AI companies.
[00:20:07] [SPEAKER_01]: And so just this week or maybe it was last week now, Microsoft wrote a transparency note
[00:20:13] [SPEAKER_01]: about Copilot.
[00:20:15] [SPEAKER_01]: And I've been cited in a couple of articles in the register about problems people are
[00:20:20] [SPEAKER_01]: seeing with Copilot.
[00:20:21] [SPEAKER_01]: But Microsoft said, hey, here's our transparency note.
[00:20:25] [SPEAKER_01]: Yesterday, OpenAI said, hey, here's how we test our systems.
[00:20:29] [SPEAKER_01]: Here's our model card for OpenAI, for GPT.
[00:20:35] [SPEAKER_01]: Companies can do the same.
[00:20:36] [SPEAKER_01]: They're not publishing model cards or transparency notes about the technology, but they're saying,
[00:20:41] [SPEAKER_01]: hey, here's how we're using the technology.
[00:20:43] [SPEAKER_01]: Here's how we're monitoring it.
[00:20:45] [SPEAKER_01]: Here's how we're going to operate.
[00:20:48] [SPEAKER_01]: And I think simply by writing it down and being transparent about that, then it has
[00:20:55] [SPEAKER_01]: positive implications, positive effects throughout then the operations of all these systems.
[00:21:05] [SPEAKER_00]: And I'm very conscious that throughout our conversations, they would be very forward-looking
[00:21:09] [SPEAKER_00]: and giving valuable takeaways for people listening.
[00:21:13] [SPEAKER_00]: And you've had a fantastic career.
[00:21:14] [SPEAKER_00]: But of course, none of us are able to achieve any degree of success without a little help
[00:21:18] [SPEAKER_00]: along the way.
[00:21:19] [SPEAKER_00]: Very often, there's someone that we could be grateful towards who maybe saw something
[00:21:23] [SPEAKER_00]: in us or invested a little time that just helped us get us where we are today.
[00:21:27] [SPEAKER_00]: Who would that person be for you or people?
[00:21:30] [SPEAKER_00]: And maybe we'll give them a little shout out to them.
[00:21:33] [SPEAKER_01]: Yeah, let me give you two people.
[00:21:35] [SPEAKER_01]: And I don't know if they're hearing this or not, but I'll just mention their names.
[00:21:40] [SPEAKER_01]: So when I was in graduate school way back in 1990, I had an offer to go working at an
[00:21:49] [SPEAKER_01]: aircraft company.
[00:21:51] [SPEAKER_01]: And I had a professor, a guy named Paul Kimerling, take me aside and say, you don't want to go
[00:21:57] [SPEAKER_01]: do that because after 30 years, you're going to be in charge of the left lug nuts on the
[00:22:02] [SPEAKER_01]: tires.
[00:22:02] [SPEAKER_01]: And there'll be somebody else in charge of the right lug nut.
[00:22:06] [SPEAKER_01]: Go explore the world.
[00:22:07] [SPEAKER_01]: Go take many jobs.
[00:22:10] [SPEAKER_01]: Go be intellectually curious.
[00:22:12] [SPEAKER_01]: Go try things.
[00:22:16] [SPEAKER_01]: Go wander through your career.
[00:22:18] [SPEAKER_01]: You're going to be so much happier.
[00:22:20] [SPEAKER_01]: So Paul was like a major influence in that.
[00:22:23] [SPEAKER_01]: And it was interesting for me, for my personality.
[00:22:26] [SPEAKER_01]: That was super interesting.
[00:22:27] [SPEAKER_01]: That's what actually brought me to this point.
[00:22:30] [SPEAKER_01]: The other thing was probably 10 years later, yeah, probably 10 years later, there was a
[00:22:36] [SPEAKER_01]: guy named Dave White.
[00:22:37] [SPEAKER_01]: I was working on DARPA-based systems.
[00:22:41] [SPEAKER_01]: So DARPA is a big research organization.
[00:22:44] [SPEAKER_01]: And he looked at me and said, you know, you're doing all this great research, but you're
[00:22:48] [SPEAKER_01]: actually a decent leader.
[00:22:50] [SPEAKER_01]: So why don't you start to lead some things?
[00:22:54] [SPEAKER_01]: And that was a little kick that I needed in my career.
[00:22:58] [SPEAKER_01]: And it's been great.
[00:23:00] [SPEAKER_01]: You know, I often say this to my teams, you know, I may forget some details of a project
[00:23:09] [SPEAKER_01]: here and there, but I always remember the people that we've worked on those programs together.
[00:23:15] [SPEAKER_01]: And, you know, throughout my career, I just remember those events.
[00:23:20] [SPEAKER_01]: And sometimes they're, you know, like those critical events, you know, like what are we
[00:23:25] [SPEAKER_01]: going to do?
[00:23:26] [SPEAKER_01]: COVID's hitting.
[00:23:27] [SPEAKER_01]: How are we going to support our clients?
[00:23:28] [SPEAKER_01]: How are we going to get people with their computers?
[00:23:31] [SPEAKER_01]: You know, whatever it happens to be.
[00:23:33] [SPEAKER_01]: But it's not about, you know, the technical part.
[00:23:36] [SPEAKER_01]: It's not about anything.
[00:23:38] [SPEAKER_01]: What I remember is the teams of people and the interactions with the people coming together.
[00:23:44] [SPEAKER_01]: And so from my perspective, yeah, there's lots of achievements that the teams have made,
[00:23:49] [SPEAKER_01]: but it's really those relationships and those interactions with people have been the most
[00:23:54] [SPEAKER_01]: rewarding part of this career in AI and security.
[00:23:59] [SPEAKER_00]: What a fantastic answer and a perfect moment to end on.
[00:24:02] [SPEAKER_00]: I think we often get carried away talking about the technology, the AI, but as you rightly
[00:24:07] [SPEAKER_00]: pointed out there, the role of people and everybody coming together, collaborating and
[00:24:11] [SPEAKER_00]: bringing those things to life.
[00:24:12] [SPEAKER_00]: So important.
[00:24:13] [SPEAKER_00]: And equally naming those people that have had a real impact on you and your life there.
[00:24:18] [SPEAKER_00]: They probably know you very well, but probably unaware of just how much you have impacted
[00:24:23] [SPEAKER_00]: their life, your life and your career.
[00:24:26] [SPEAKER_00]: So, so important that we recognize that.
[00:24:28] [SPEAKER_00]: And I genuinely hope that they get to hear that heartfelt message.
[00:24:32] [SPEAKER_00]: And for everyone listening, just wanting to find out more about security AI and maybe
[00:24:38] [SPEAKER_00]: connect with you or your team.
[00:24:40] [SPEAKER_00]: Where would you like to point everyone?
[00:24:41] [SPEAKER_01]: Well, obviously you can go to our website security.ai and that's security with an I.
[00:24:48] [SPEAKER_01]: So, and lots of information there.
[00:24:51] [SPEAKER_01]: There's a certification that you can take in terms of just learning about generative
[00:24:55] [SPEAKER_01]: AI.
[00:24:56] [SPEAKER_01]: I took it and learned a few things.
[00:24:58] [SPEAKER_01]: They get to do that.
[00:24:59] [SPEAKER_01]: You can connect with me on LinkedIn, Jack Berkowitz.
[00:25:04] [SPEAKER_01]: And happy to do that.
[00:25:06] [SPEAKER_01]: And, and look for us at a few conferences coming up.
[00:25:10] [SPEAKER_01]: You know, you can always find us at, you know, those, those normal data conferences or security
[00:25:15] [SPEAKER_01]: conferences.
[00:25:16] [SPEAKER_01]: For Europe, I'll be in Brussels at the IAPP conference in, in, in November.
[00:25:24] [SPEAKER_01]: And if there's any privacy and security professionals there, love to see it in Brussels.
[00:25:30] [SPEAKER_01]: I used to live in Brussels as well and love that city.
[00:25:34] [SPEAKER_00]: Awesome.
[00:25:35] [SPEAKER_00]: I'll connect with you on LinkedIn as well, because I am in the US a lot for tech conferences,
[00:25:40] [SPEAKER_00]: but I think my next to a VMware in Barcelona and Lenovo is in Seattle, I believe.
[00:25:46] [SPEAKER_00]: So hopefully our paths will cross in the near future too.
[00:25:50] [SPEAKER_00]: But just love chatting with you today about proven strategies for implementing AI risk
[00:25:54] [SPEAKER_00]: management frameworks, enhancing security through AI governance and better ways to evaluate
[00:26:01] [SPEAKER_00]: new AI capabilities, but bringing it to life with real world examples from some of those
[00:26:05] [SPEAKER_00]: conversations you've been having with clients and prospects that are all advancing data
[00:26:10] [SPEAKER_00]: and AI implementations.
[00:26:12] [SPEAKER_00]: I think everyone's on that same journey right now, but also for just wrapping it up with
[00:26:16] [SPEAKER_00]: a very human story of gratitude and the people you're grateful towards for putting you where
[00:26:20] [SPEAKER_00]: you are now.
[00:26:21] [SPEAKER_00]: But thanks for sharing that powerful story with me today.
[00:26:24] [SPEAKER_01]: Thank you.
[00:26:25] [SPEAKER_01]: It's really enjoyed the conversation.
[00:26:27] [SPEAKER_00]: I think Jack has given us valuable insights today into how organisations can navigate
[00:26:32] [SPEAKER_00]: the growing demands of AI innovation without sacrificing security or compliance.
[00:26:41] [SPEAKER_00]: And from the importance of a strong AI governance framework to the proactive measures needed
[00:26:45] [SPEAKER_00]: to protect data, I think it's clear that the road ahead requires both strategic foresight
[00:26:52] [SPEAKER_00]: and operational precision.
[00:26:53] [SPEAKER_00]: But the big question is how will your business balance innovation with risk management?
[00:27:00] [SPEAKER_00]: And what steps will you take to ensure AI transparency and trust across your organisation?
[00:27:07] [SPEAKER_00]: I think as Jack has shown today, the answers lie in understanding not only the technology,
[00:27:13] [SPEAKER_00]: but also the ethical and regulatory landscape shaping the future of AI.
[00:27:18] [SPEAKER_00]: But as always, you know the drill.
[00:27:19] [SPEAKER_00]: Let me know your thoughts.
[00:27:20] [SPEAKER_00]: Tech blog writer outlook.com.
[00:27:22] [SPEAKER_00]: Send me a quick message on LinkedIn, Instagram, Twitter, just at Neil C Hughes.
[00:27:26] [SPEAKER_00]: And we'll keep this conversation going.
[00:27:29] [SPEAKER_00]: But that's it quitting time for me.
[00:27:30] [SPEAKER_00]: Time for me to knock off for the day, but I'll return again tomorrow.
[00:27:34] [SPEAKER_00]: Thanks for listening today and hopefully I will speak with you all again tomorrow.
[00:27:37] [SPEAKER_00]: Bye for now.

