With AI fundamentally reshaping the cloud, what does this mean for the future of security?
In this episode, Nataraj Nagaratnam, CTO of Cloud Security at IBM, joins us to dive into the evolving dynamics of cloud security in the age of AI. As organizations integrate AI into their cloud environments, the traditional lines of responsibility for security are being redrawn, raising new questions about who safeguards what and how emerging risks are addressed.
Nataraj unpacks how AI-generated code is introducing unique vulnerabilities, with recent studies highlighting that nearly half of organizations are concerned about the security risks posed by AI-driven development.
My guest explains how the complex, multilayered enterprise AI stack—from applications and models to data and infrastructure—requires a nuanced, data-centric approach to security. This approach includes assessing data sensitivity, managing keys, and implementing tailored controls, especially as unstructured data and machine learning models bring fresh security challenges.
The shared responsibility model is also undergoing a transformation, with Nataraj outlining a shift from a simple customer-provider dynamic to one that includes AI model providers and data lineage.
With industries like healthcare and finance running critical systems in the cloud, regulatory frameworks are tightening, and transparency in model development and data lineage is increasingly emphasized.
Nataraj explores how IBM is working to normalize these controls across sectors, enabling compliance and resilience in a diverse range of cloud environments.
Automation emerges as a cornerstone of Nataraj's strategy, helping organizations maintain secure, compliant cloud environments in both deployment and operations. He illustrates a dual-phase approach—automation for initial secure deployments and ongoing compliance monitoring—to ensure that security remains robust as systems evolve.
Finally, we discuss the future of cloud security, including the need for AI governance, third-party risk management, and transparency in model use, all critical as the AI-driven Supercycle unfolds.
What will the future of cloud security look like as AI becomes an integral part of our digital infrastructure? Tune in to explore Nataraj's vision and strategies, and join the conversation—how is your organization preparing for the new era of AI and cloud security?
[00:00:03] As AI continues to reshape the cloud landscape, are we truly prepared to handle the new security complexities that it might introduce? Well, my guest today is the CTO of Cloud Security at IBM, and I've invited him to join us on the podcast to discuss how AI-driven development is transforming the cloud shared responsibility model, and with it, creating fresh security and compliance challenges.
[00:00:31] So, from the enterprise AI stack to data-centric security strategies, my guest will unpack how organizations can navigate these evolving responsibilities, particularly at a time where AI-generated code and complex applications are becoming the norm.
[00:00:49] So, we'll explore automation's role in safeguarding these environments and also the importance of managing third-party risks in a way that fosters transparency.
[00:00:59] And with so many changes on the horizon, could this new model redefine our approach to security in the cloud? Well, that is what we're going to be talking about today. So, enough scene setting for me. Let's get my guest onto the podcast.
[00:01:13] So, a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do?
[00:01:21] I am Nataraj Nagaratnam. As an IBM Fellow, I focus on solving enterprise security challenges, now more across cloud and AI. And as CTO for cloud and AI security, I lead the technical strategy and solutions to help our customers and differentiate in the market.
[00:01:39] Well, there's so much I want to talk with you about today because it seems everywhere we look, we're seeing so much hype around AI, but in particular how AI is transforming almost every aspect of the cloud, including security. So, in your view, how have you seen AI-driven development change the security landscape in cloud environments, particularly with AI-generated code introducing new risks? What are you seeing here?
[00:02:09] Yeah, we're definitely seeing usage of Gen AI in the context of code. Code assistants are an important area.
[00:02:16] And our customers and IBM ourselves are seeing significant benefit in that space, for sure. And the other thing that we need to keep in mind is when we say code, typically we jump into thinking about business logic that we write in code.
[00:02:31] But more and more with cloud deployment, it's also infrastructure as code, compliance as code, where configuration at deploy time should also be considered code. And so, there is rich change here, as well as real usage out there today.
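To make the compliance-as-code idea concrete, here is a minimal sketch in Python: deployment configuration is plain data, and policy rules are functions that inspect it. The rule names and configuration fields are illustrative assumptions, not any particular provider's schema or IBM's tooling.

```python
# A minimal compliance-as-code sketch: deployment configuration is plain
# data, and policy rules are functions that inspect it. All field names
# here are illustrative assumptions, not a real cloud provider's schema.

def check_encryption(config):
    """Flag any storage resource that is not encrypted at rest."""
    return [name for name, res in config["resources"].items()
            if res["type"] == "storage" and not res.get("encrypted", False)]

def check_public_access(config):
    """Flag any resource exposed to the public internet."""
    return [name for name, res in config["resources"].items()
            if res.get("public", False)]

def evaluate(config):
    """Run every rule and collect violations, like a CI gate would."""
    violations = {}
    for rule in (check_encryption, check_public_access):
        failed = rule(config)
        if failed:
            violations[rule.__name__] = failed
    return violations

deployment = {
    "resources": {
        "logs-bucket": {"type": "storage", "encrypted": True},
        "raw-data": {"type": "storage", "encrypted": False, "public": True},
    }
}

print(evaluate(deployment))
# {'check_encryption': ['raw-data'], 'check_public_access': ['raw-data']}
```

The point is that such checks can run automatically on every deployment, the same way tests run on business-logic code.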
[00:02:45] I'd say there's an interesting Stanford University study from 2023 that found that users who interact with these AI code assistants write significantly less secure code. But on the other hand, they seem to believe that they write more secure code.
[00:03:06] So, it seems humans are starting to hallucinate when they work with Gen AI. But it's just the beginning, you know, the technology is going to get better. And with all the agentic AI, it's going to be a fascinating world where I can imagine two AI agents talking to each other to, quote, pair program.
[00:03:22] And from conversations that I'm having on this daily tech podcast and some of the things I'm seeing on LinkedIn, it does seem that organizations are becoming increasingly concerned, cautious or wary about vulnerabilities created by AI generated code.
[00:03:38] But can you elaborate on some of those specific risks that you see rising from AI and cloud security and ultimately how organizations can better prepare and address these challenges?
[00:03:49] Absolutely. This is a great question and important one.
[00:03:52] Because typically, when people out there talk about AI or Gen AI, they immediately jump to models and large language models.
[00:04:00] But if you step back a second in enterprise AI, when they are using and delivering outcomes and value to their end customers or to their own business, we have to look at it as a stack.
[00:04:12] So, in an enterprise AI stack, deployed across hybrid cloud, it has AI applications.
[00:04:18] Like chat bots.
[00:04:20] Like chat bots.
[00:04:20] Models they use.
[00:04:21] Like IBM's Granite or ChatGPT or Lama.
[00:04:26] Data.
[00:04:27] This is an important one.
[00:04:29] They use and infuse enterprise data either into the models or augment them into models.
[00:04:34] And all of these deployed on infrastructure.
[00:04:36] So, if you look at the stack across applications, models, data, and infrastructure, risks are prevalent in each layer.
[00:04:44] Somewhat new.
[00:04:45] So, like model risks are new.
[00:04:46] The airline example that we all heard, the headlines that we saw last year, where a chatbot gave incorrect travel refund information.
[00:04:53] It's a risk emanating from the model, but it could cause reputational damage.
[00:04:58] Then there are risks that are more emphasized, like data.
[00:05:02] So, data leakage has been there since the web or even before, but it gets even more attention here because data is the fuel for AI.
[00:05:12] Right?
[00:05:12] So, it's at the center from training to inferencing or fine tuning.
[00:05:16] It is at the core.
[00:05:17] So, protecting that is critically important.
[00:05:20] And then the risks from infrastructure should never be ignored.
[00:05:25] It's not like everything in AI is brand new.
[00:05:27] You start from there.
[00:05:27] All the good hygiene that needs to be applied is important.
[00:05:32] This is where, like, security guardrails,
[00:05:34] and AI deployments that can assess your posture across models, applications, data, and infrastructure, become important.
[00:05:42] And we have actually built that into IBM Cloud, in what we call the Security and Compliance Center.
[00:05:47] So, with the rise of AI in cloud environments, I'm curious here.
[00:05:51] How do you see the lines blurring between cloud developers and security architects?
[00:05:57] And is there anything else organizations should be doing to navigate these evolving shared responsibility models,
[00:06:04] especially as they scale AI within existing architectures?
[00:06:07] Because we see a lot in general corporate America and beyond that there is those information and responsibilities all in silos.
[00:06:15] And we need to be talking more together.
[00:06:18] But what are you saying here?
[00:06:19] Because it's not just about technology, right?
[00:06:21] It's ultimately people need to work through it and the processes that help.
[00:06:25] So, when you look at the developers, cloud developers, and the security architects dimension of it,
[00:06:30] one of the key things, and the paradigm, is around how you build and deploy cloud applications and AI workloads
[00:06:39] with security built in and secure by default.
[00:06:43] So, when I roll up my sleeves and work on solutions with customers, et cetera,
[00:06:47] we build these reference architectures and say,
[00:06:49] hey, this is how my payment application is going to deploy on an OpenShift cluster,
[00:06:54] integrated with my enterprise systems, and it needs to do logging,
[00:06:57] it needs to do data encryption, et cetera, et cetera.
[00:07:00] When you look at that wired architecture diagram,
[00:07:03] hardening it is an important thing, where security needs to be built in.
[00:07:08] So, with these, what we call deployable architectures in IBM Cloud,
[00:07:12] when we have hardened these architectures so that they are deployable with a single click,
[00:07:17] having security built in from day one becomes important.
[00:07:20] And we see that as a critical part of DevSecOps and the overall development process.
[00:07:25] The other aspect to you asking about shared responsibility,
[00:07:29] I think when it comes to AI, this extends beyond infrastructure,
[00:07:33] because to date, we have been talking about shared responsibility model between a CSP and a customer.
[00:07:38] But now, with a mix of model providers,
[00:07:41] because you may be getting your large language models or enterprise model from somebody else,
[00:07:46] like it could be a generic large language model or a domain-specific,
[00:07:50] like a healthcare-specific model or a financial risk model, et cetera,
[00:07:53] that customers may be building.
[00:07:55] So, who are the model providers and where is the data coming from?
[00:07:58] So, that needs to be increasingly infused into this.
[00:08:02] And there's a sophistication here we need to work through.
[00:08:05] And of course, data theft will remain a critical issue in cloud security
[00:08:09] and keep many people awake at night.
[00:08:11] So, how are you seeing this impacting the shared responsibility model?
[00:08:15] And are there any key security considerations organizations need to be aware of,
[00:08:20] especially when handling sensitive data in AI-enabled cloud environments?
[00:08:25] And I appreciate it's a huge question,
[00:08:27] but it's something at the top of mind for a lot of people, isn't it?
[00:08:31] Absolutely.
[00:08:32] Look, I fundamentally believe in a data-centric approach.
[00:08:36] I mean, when we talk about risk-based approach to security, data is at the center of it.
[00:08:42] Because it's not like you need to protect anything and everything.
[00:08:45] If you do that, you'll have a fortress, everything locked down,
[00:08:48] and you may not even open it to the web, right?
[00:08:50] But it's all about risk.
[00:08:51] So, when you look at the type of applications that deal with type of data,
[00:08:56] is it public data?
[00:08:58] Is it confidential data?
[00:09:00] Or is it sensitive data that may even include PII or corporate trade secrets, as an example, right?
[00:09:05] When you look at that spectrum of, I'm simplifying as public, confidential, and sensitive,
[00:09:12] taking an approach and saying, hey, if it's public, I just need to do basic security hygiene.
[00:09:16] But if the data is lost, it's public data.
[00:09:18] You can get it from anywhere.
[00:09:19] It's okay.
[00:09:20] Whereas if it is sensitive, all the way at the other end,
[00:09:23] it's going to have significant business impact.
[00:09:26] You may lose your customer information and all of that,
[00:09:29] and impact privacy and regulatory requirements.
[00:09:31] Therefore, you need to have more control.
[00:09:33] So, as you go from public to confidential to sensitive, the controls increase.
[00:09:38] For example, when it comes to data encryption, right?
[00:09:43] Keeping confidentiality of data is important, right?
[00:09:47] As I always say, encryption is for amateurs and key management is for professionals.
[00:09:52] So, when you think about key management,
[00:09:54] it's one thing to encrypt and say, hey, cloud provider, you manage the key.
[00:09:59] That could be okay for public.
[00:10:01] But when it comes to confidential data, customers need to manage the keys.
[00:10:05] It could be what the industry calls bring your own key.
[00:10:08] You bring the key, you manage the lifecycle.
[00:10:10] If you need to rotate it, you can rotate.
[00:10:12] But the keys are secured by what's called a hardware security module
[00:10:15] that the cloud provider may be managing.
[00:10:18] But all the way at the end, when you have sensitive data,
[00:10:20] we have seen, working with banks like BNP Paribas and CaixaBank
[00:10:25] and others in regulated industries,
[00:10:26] they need to demonstrate that they have full control of the data and the keys.
[00:10:31] The only way to do it is what we call keep your own key, with technologies that we have
[00:10:36] where you can actually have full control, from the hardware to the chip to the key, right?
[00:10:40] So a data-centric approach is important.
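The ladder of controls described here, from provider-managed keys for public data up to keep-your-own-key for sensitive data, could be sketched as a simple lookup from classification to controls. The tier names and control strings are illustrative assumptions, not IBM Cloud's actual settings.

```python
# A sketch of the data-centric control ladder: as sensitivity rises from
# public to confidential to sensitive, key-management requirements tighten.
# The control descriptions are illustrative, not a product's real options.

CONTROLS = {
    "public": {
        "encryption": "provider-managed keys",
        "hygiene": ["basic patching", "access logging"],
    },
    "confidential": {
        "encryption": "bring your own key (BYOK)",  # customer owns the lifecycle
        "hygiene": ["key rotation", "HSM-backed storage", "access logging"],
    },
    "sensitive": {
        "encryption": "keep your own key (KYOK)",   # full control, hardware to key
        "hygiene": ["key rotation", "dedicated HSM", "audit trail", "access logging"],
    },
}

def required_controls(classification):
    """Return the control set for a data classification, failing loudly on unknowns."""
    try:
        return CONTROLS[classification]
    except KeyError:
        raise ValueError(f"unknown classification: {classification!r}")

print(required_controls("sensitive")["encryption"])
# keep your own key (KYOK)
```

The useful property is that the mapping is explicit: a reviewer can see at a glance which controls each tier demands, instead of re-deriving them per project.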
[00:10:44] And now in the context of AI, it becomes an even bigger challenge
[00:10:50] because it's not only the enterprise data, structured data;
[00:10:53] there's unstructured data that gets fed into the AI in a RAG pattern,
[00:10:59] retrieval-augmented generation, or even training that they may do.
[00:11:03] So models are a new form of data.
[00:11:08] If data is in a relational database versus in a model,
[00:11:11] we need to think about how you secure it,
[00:11:13] what kind of controls to use, and so on.
[00:11:15] So long answer to your question, but data is definitely at the core of it.
[00:11:20] Taking a data-centric approach to say, based on the sensitivity of the data,
[00:11:26] how we should apply security controls, and doing data governance around it,
[00:11:31] is an important step and very, very critical.
[00:11:34] And I'm doing that with the various AI projects that we run within IBM,
[00:11:38] as well as with customers.
[00:11:39] And this approach is resonating quite well because that reflects a view on risk.
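The RAG pattern mentioned above can be sketched with a data-governance gate in front of the model: retrieve enterprise documents, filter them by sensitivity policy, and pass only permitted context into the prompt. The document store, labels, and prompt shape here are all hypothetical illustrations.

```python
# A minimal RAG-shaped sketch with a data-governance gate: documents carry a
# classification label, and only those the caller is cleared for reach the
# model prompt. The store and labels are illustrative assumptions.

DOCUMENTS = [
    {"text": "Q3 revenue grew 12%.", "classification": "public"},
    {"text": "Customer SSNs: ...", "classification": "sensitive"},
    {"text": "Merger talks with Acme.", "classification": "confidential"},
]

CLEARANCE = {"public": 0, "confidential": 1, "sensitive": 2}

def retrieve(query, max_level):
    """Return documents at or below the caller's clearance level.
    (A real system would also rank by relevance to the query.)"""
    limit = CLEARANCE[max_level]
    return [d["text"] for d in DOCUMENTS
            if CLEARANCE[d["classification"]] <= limit]

def build_prompt(query, max_level):
    """Assemble the augmented prompt from governed context only."""
    context = "\n".join(retrieve(query, max_level))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How did we do last quarter?", "confidential")
assert "SSNs" not in prompt  # sensitive data never reaches the model
```

The design choice worth noting: the governance check happens at retrieval time, before augmentation, so sensitive records can never leak into the model's context regardless of the question asked.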
[00:11:45] And another topic we've got to discuss, of course,
[00:11:47] is that regulatory requirements are continuously evolving
[00:11:51] almost in parallel with the rise of AI in cloud environments too.
[00:11:55] And they seem to be having an increasing impact on security responsibilities,
[00:12:00] especially around the cloud.
[00:12:01] So, again, from what you're seeing here,
[00:12:04] how are these regulations influencing the shared responsibility model,
[00:12:08] and how should organizations adapt to meet both security,
[00:12:11] but also compliance needs too?
[00:12:14] Absolutely.
[00:12:15] I mean, what is happening out there, as we see,
[00:12:17] is when the cloud journey started, now more than a decade ago,
[00:12:23] customers started to look at less risky applications
[00:12:26] so that they can get the value of cloud.
[00:12:28] But over time, it has become that cloud is critical infrastructure.
[00:12:33] You have payments that are running in cloud.
[00:12:35] You have healthcare applications running in cloud.
[00:12:38] So the moment it starts to become critical infrastructure,
[00:12:41] and that has either a national-level impact,
[00:12:47] if cloud goes down, that's an outage,
[00:12:49] or a security breach,
[00:12:51] or a global-level impact to the business,
[00:12:53] this is where regulations come in,
[00:12:55] because they are looking out for the consumer interest
[00:12:58] or the national interest as well,
[00:12:59] depending on the type of regulation.
[00:13:02] So what we are seeing is that a significant focus
[00:13:07] on understanding these risks
[00:13:11] and codifying them into regulations
[00:13:14] is at the core of all these discussions.
[00:13:16] So what we did, given the plethora of regulations out there,
[00:13:19] we worked with the enterprise banks
[00:13:22] that are part of our financial services council
[00:13:25] and created what we call a financial services control framework.
[00:13:29] And that actually codifies a normalization
[00:13:32] of what controls need to be implemented.
[00:13:34] And if you meet these controls,
[00:13:35] it will meet the regulatory needs, right?
[00:13:37] So it not only helps customers
[00:13:41] and the cloud provider, but third parties too.
[00:13:43] Because if you think of what's happening out there
[00:13:45] when a bank, for example,
[00:13:47] is using a SaaS provider or an ISV,
[00:13:50] it takes around 18 to 24 months
[00:13:52] for each of them to validate an ISV.
[00:13:55] But if you think of how many banks and how many ISVs,
[00:13:58] now it's a significant timeline, right?
[00:14:00] So we have normalized it through this framework
[00:14:03] that we work with customers and with regulators.
[00:14:08] Now, customers are able to use,
[00:14:10] like BNP Paribas is able to use
[00:14:11] various independent SaaS vendors on IBM Cloud
[00:14:15] and get the value seamlessly
[00:14:16] and accelerate their innovation.
[00:14:18] And they've closed the loop with regulators.
[00:14:21] As a matter of fact, like three weeks back now,
[00:14:23] I was in New York City with 30 plus customers,
[00:14:26] another bunch of regulators and fintechs
[00:14:29] all in one room discussing this
[00:14:31] and how we are helping bring the ecosystem together.
[00:14:35] And if we were to zoom out just for a moment,
[00:14:38] I think automation is often presented
[00:14:40] as that solution to many of the security challenges
[00:14:42] in the cloud that we're talking about.
[00:14:44] So what role do you see automation playing
[00:14:47] in cloud compliance?
[00:14:48] And also how organizations can maybe leverage it
[00:14:52] to ensure that they're meeting
[00:14:53] both regulatory and security standards?
[00:14:56] Again, huge topic.
[00:14:58] I don't want to make it sound too easy,
[00:15:00] but there's a lot open debate here, isn't there?
[00:15:04] Yeah.
[00:15:05] Automation is at the center of everything
[00:15:07] that's happening out there
[00:15:08] so that you can get the best out of it.
[00:15:10] So when it comes to this context,
[00:15:12] connecting the previous conversation to here,
[00:15:14] when you look at these industry controls,
[00:15:15] be it regulatory requirements or risk-driven,
[00:15:18] I mean, prescriptive controls,
[00:15:20] it's not just saying,
[00:15:21] how do you protect data?
[00:15:22] It is how you go about implementing it
[00:15:25] in a particular cloud environment that becomes important.
[00:15:28] So codifying that as prescriptive controls
[00:15:30] is first step,
[00:15:31] but automation is the end state to arrive at.
[00:15:35] I would say there are two phases to automation.
[00:15:37] There is day one and then there is day two.
[00:15:39] When it comes to day one,
[00:15:41] this is about how do you build
[00:15:42] and deploy secure and compliant applications
[00:15:46] in an automated manner.
[00:15:48] This is where it's not just creating a landing zone
[00:15:50] and putting your workloads there.
[00:15:52] You need to create secure landing zones,
[00:15:54] so that the moment you create and deploy,
[00:15:56] they are secure right from the get-go.
[00:15:58] This is where we codified deployable architectures,
[00:16:01] where we saw, say, from the eight weeks
[00:16:04] it initially took for ISVs
[00:16:06] to understand these requirements
[00:16:07] and controls and codify them,
[00:16:09] it's now taking them days,
[00:16:11] because it's completely automated
[00:16:13] with blueprints of these architectures
[00:16:15] that have security and compliance built in.
[00:16:17] That's day one.
[00:16:19] But the moment you push the button and deploy,
[00:16:22] bad things continue to happen.
[00:16:24] New vulnerabilities come up
[00:16:25] or you may drift.
[00:16:26] Somebody may open up your object storage
[00:16:28] bucket to the world with sensitive data.
[00:16:31] So you need to continuously monitor,
[00:16:34] detect these drifts
[00:16:35] and take actions and remediations
[00:16:37] as part of your DevSecOps.
[00:16:39] So continuous compliance and monitoring is important.
[00:16:43] Similarly, we provide the Security and Compliance Center
[00:16:45] that can continuously assess
[00:16:46] and monitor your posture
[00:16:47] and compliance in a continuous way.
[00:16:50] So to your question on automation,
[00:16:53] absolutely at the center
[00:16:54] and automation should be thought of
[00:16:56] across day one and day two
[00:16:57] cutting across the development
[00:17:00] and deployment processes
[00:17:01] across cloud deployments.
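The day-two loop described here, deploy secure and then continuously watch for drift, could be sketched as a comparison of live state against the secure baseline it was deployed with. The resource and setting names are made up for illustration.

```python
# A sketch of day-two drift detection: compare the live state of each
# resource against the secure baseline it was deployed with, and report
# any setting that has drifted. Field names are illustrative assumptions.

def detect_drift(baseline, live):
    """Return {resource: {setting: (expected, actual)}} for drifted settings."""
    drift = {}
    for name, expected in baseline.items():
        actual = live.get(name, {})
        changed = {k: (v, actual.get(k)) for k, v in expected.items()
                   if actual.get(k) != v}
        if changed:
            drift[name] = changed
    return drift

baseline = {"data-bucket": {"public": False, "encrypted": True}}
# Someone opened the bucket to the world after deployment:
live = {"data-bucket": {"public": True, "encrypted": True}}

print(detect_drift(baseline, live))
# {'data-bucket': {'public': (False, True)}}
```

In a real pipeline this check would run on a schedule or on every change event, and a non-empty result would trigger remediation as part of DevSecOps, which is the "continuous compliance and monitoring" described above.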
[00:17:04] And I think many businesses right now
[00:17:06] are caught up in this AI gold rush of sorts.
[00:17:09] So there's more applications
[00:17:10] and permissions
[00:17:11] that are continuously added to cloud stacks.
[00:17:14] I would imagine that managing security risks
[00:17:16] will eventually become even more complex.
[00:17:19] So any strategies that you recommend
[00:17:20] for organizations to better understand
[00:17:23] and mitigate their most significant vulnerabilities
[00:17:26] in this context?
[00:17:27] Because it's moving faster at the moment,
[00:17:30] but it's only going to get faster.
[00:17:31] We're going to keep adding more.
[00:17:32] Something's got to change there, right?
[00:17:35] Yeah.
[00:17:35] I would say there's a three-pronged approach here
[00:17:38] that I advocate for
[00:17:41] and we've seen success in customers.
[00:17:44] Define, implement, assess.
[00:17:48] What I mean by that is
[00:17:49] you need to define taking a risk-based approach
[00:17:51] and a data governance approach
[00:17:53] to AI and cloud workloads.
[00:17:55] You need to define what your controls are.
[00:17:57] To the earlier point on
[00:17:58] if it is sensitive workloads
[00:18:00] dealing with sensitive data,
[00:18:01] you have different set of controls to implement.
[00:18:03] Whereas if it's public,
[00:18:04] you may have less controls to implement.
[00:18:06] Nonetheless, defining that upfront,
[00:18:09] so that as these applications get
[00:18:12] built,
[00:18:13] across these use cases,
[00:18:15] you know what to do
[00:18:15] and how to go about it, is important.
[00:18:17] That's kind of your defining
[00:18:19] the policy control.
[00:18:20] Next is you need to implement them.
[00:18:22] Implement them in a prescriptive way
[00:18:24] that not only the compliance
[00:18:26] and security folks can understand,
[00:18:27] but that developers can actually implement,
[00:18:29] because developers are not security experts.
[00:18:32] They know,
[00:18:33] hey, if you tell me to lock down my network
[00:18:34] and close down my VPC,
[00:18:36] or use encryption,
[00:18:38] use keys,
[00:18:39] they understand that, right?
[00:18:40] So translating those controls
[00:18:42] into prescriptive implementation
[00:18:44] that they can easily implement
[00:18:46] is important.
[00:18:47] This is where we provide
[00:18:48] a curated set of services
[00:18:49] in IBM Cloud
[00:18:52] that can be integrated,
[00:18:53] be it with containerized workloads,
[00:18:55] be it with virtual workloads,
[00:18:56] be it with OpenShift
[00:18:58] across hybrid cloud,
[00:18:59] or SAP
[00:19:00] or VMware workloads,
[00:19:01] et cetera, et cetera,
[00:19:02] that come into play.
[00:19:03] And third,
[00:19:04] around assess.
[00:19:05] This is,
[00:19:06] like I said earlier,
[00:19:07] it's about continuous compliance,
[00:19:08] continuous monitoring.
[00:19:10] So,
[00:19:10] how do you look at this
[00:19:12] in a day-two operation,
[00:19:13] so that you not only
[00:19:14] stay compliant,
[00:19:16] but if you detect any drifts,
[00:19:17] you take action?
[00:19:18] And if there are any new vulnerabilities
[00:19:20] out there
[00:19:21] or new risks,
[00:19:22] then you need to go back
[00:19:23] and redefine your controls
[00:19:24] or increase your controls,
[00:19:25] and you can do that as well.
[00:19:27] So,
[00:19:27] define,
[00:19:28] implement,
[00:19:29] assess is an easy way
[00:19:30] to think about
[00:19:31] the strategic approach here
[00:19:33] that I would recommend
[00:19:34] customers look at.
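The define, implement, assess approach could be sketched as three small functions over a hypothetical control catalog. The control names and workload fields are assumptions for illustration, not a real framework's vocabulary.

```python
# A sketch of the define / implement / assess loop. "Define" picks controls
# from the workload's data sensitivity, "implement" records the prescriptive
# settings a developer applies, and "assess" re-checks them continuously.
# All names here are illustrative, not a real framework's vocabulary.

def define(workload):
    """Risk-based control selection: more sensitive data, more controls."""
    controls = ["logging"]
    if workload["sensitivity"] in ("confidential", "sensitive"):
        controls += ["encryption", "private-network"]
    if workload["sensitivity"] == "sensitive":
        controls += ["customer-managed-keys"]
    return controls

def implement(controls):
    """Prescriptive implementation: mark each defined control as applied."""
    return {c: True for c in controls}

def assess(required, deployed):
    """Day-two check: report controls that are missing or switched off."""
    return [c for c in required if not deployed.get(c, False)]

workload = {"name": "payments", "sensitivity": "sensitive"}
required = define(workload)
deployed = implement(required)
deployed["customer-managed-keys"] = False  # drift introduced later

print(assess(required, deployed))
# ['customer-managed-keys']
```

The loop closes exactly as described in the answer: when assess finds gaps, or new risks appear, you go back to define and tighten the controls.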
[00:19:36] And as we edge closer
[00:19:37] to 2025
[00:19:40] and begin to look ahead,
[00:19:43] how do you foresee
[00:19:44] the cloud shared
[00:19:44] responsibility model
[00:19:46] evolving over 2025
[00:19:48] and the next few years,
[00:19:49] particularly as AI
[00:19:50] continues to disrupt
[00:19:52] traditional security frameworks
[00:19:53] and again,
[00:19:55] I know it's a virtual crystal ball
[00:19:57] I'm asking you to look through here,
[00:19:58] but anything else
[00:19:59] that businesses
[00:19:59] should be preparing for
[00:20:01] in terms of security,
[00:20:03] compliance,
[00:20:03] and responsibility?
[00:20:05] Yeah,
[00:20:06] I think over time
[00:20:07] we have gotten better
[00:20:08] with shared responsibility
[00:20:09] in the context
[00:20:09] of cloud infrastructure,
[00:20:11] but what is going to be
[00:20:13] even more significantly
[00:20:14] an increased focus
[00:20:16] is on third-party risk.
[00:20:18] Third-party risk
[00:20:19] not only in like
[00:20:19] SaaS providers,
[00:20:21] but increasingly
[00:20:21] around model providers
[00:20:23] as you have more
[00:20:24] of these large language models
[00:20:26] or domain-specific models
[00:20:28] that customers
[00:20:29] are going to use
[00:20:29] and infuse into AI stack,
[00:20:32] then a focus on
[00:20:34] who's my model provider?
[00:20:35] Where is the data lineage?
[00:20:37] Where
[00:20:38] was this model built?
[00:20:40] What data went into it?
[00:20:41] It's like looking
[00:20:42] at a nutrition label,
[00:20:43] right?
[00:20:43] When you consume
[00:20:44] a drink or food,
[00:20:46] you're looking at,
[00:20:47] hey,
[00:20:48] what's the carb,
[00:20:48] what's the sugar content,
[00:20:49] then all the good stuff.
[00:20:51] You need to have
[00:20:52] that level of transparency.
[00:20:53] That's why at IBM
[00:20:54] we advocate for
[00:20:55] an open approach to AI
[00:20:57] where there is clarity
[00:20:59] and transparency
[00:20:59] on what data went in
[00:21:01] and indemnification
[00:21:02] of using these models
[00:21:04] so that you know
[00:21:05] what you're integrating with.
[00:21:07] Because if you don't know,
[00:21:08] you don't know
[00:21:08] what you're dealing with.
[00:21:09] So this is going to be
[00:21:10] a very hot topic
[00:21:12] and increasing focus
[00:21:13] on how the shared responsibility
[00:21:16] is going to bring
[00:21:17] a dimension
[00:21:18] around model providers.
[00:21:19] That's one.
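The "nutrition label" idea for third-party models could be sketched as a structured model card that a consumer vets before integrating. The fields and the vetting policy are illustrative assumptions, not an actual standard or product schema.

```python
# A sketch of a model "nutrition label": a structured record of provenance
# that a consumer can check before taking a third-party model dependency.
# The fields and the policy below are illustrative, not a real standard.

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    provider: str
    training_data_sources: list = field(default_factory=list)
    license: str = "unknown"
    indemnified: bool = False

def vet(card, require_indemnity=True):
    """Return the reasons a model fails a simple third-party-risk policy."""
    problems = []
    if not card.training_data_sources:
        problems.append("no data lineage disclosed")
    if card.license == "unknown":
        problems.append("license unclear")
    if require_indemnity and not card.indemnified:
        problems.append("no indemnification")
    return problems

opaque = ModelCard(name="mystery-llm", provider="acme")
print(vet(opaque))
# ['no data lineage disclosed', 'license unclear', 'no indemnification']
```

Just as with a food label, an empty problem list does not guarantee the model is safe, but a non-empty one tells you exactly what you don't know before you integrate it.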
[00:21:20] The other part is
[00:21:21] from a business's perspective,
[00:21:22] in addition to models
[00:21:24] and governance around it,
[00:21:26] there's going to be
[00:21:26] increasing focus
[00:21:27] on AI governance
[00:21:27] and data governance
[00:21:28] on what data
[00:21:30] is ready for training
[00:21:31] or what can be used
[00:21:33] with retrieval-augmented generation.
[00:21:34] Where is it going?
[00:21:35] Are you running
[00:21:36] these models on-premise?
[00:21:37] Are you running it
[00:21:38] with a trusted provider
[00:21:39] like IBM?
[00:21:40] And so on and so forth.
[00:21:41] So those kind of
[00:21:42] governance guardrails
[00:21:44] are going to be
[00:21:45] inherent part
[00:21:46] of this process
[00:21:46] as you look at it.
[00:21:47] This is where
[00:21:48] as we look at AI governance,
[00:21:50] we have
[00:21:50] technologies like
[00:21:51] watsonx.governance.
[00:21:53] You look at,
[00:21:54] hey,
[00:21:54] where is shadow AI?
[00:21:55] What are my model providers?
[00:21:57] What data went into this?
[00:21:59] What's the lineage?
[00:21:59] We can provide
[00:22:00] those insights
[00:22:02] and automate that
[00:22:03] also in terms of
[00:22:04] security guardrails
[00:22:05] with cloud
[00:22:06] or technologies we have.
[00:22:08] And I suspect
[00:22:09] for everybody listening,
[00:22:11] including myself here,
[00:22:12] we all feel this pressure
[00:22:13] of almost having to be
[00:22:15] in a state
[00:22:15] of continuous learning.
[00:22:17] So a question
[00:22:18] I've got to ask you
[00:22:19] is someone right
[00:22:19] in the heart
[00:22:20] of this space
[00:22:21] that is leading the way.
[00:22:22] How or where
[00:22:23] do you self-educate?
[00:22:24] Any tips around that?
[00:22:26] Oh, yeah.
[00:22:27] In this ever-changing world,
[00:22:28] especially in AI,
[00:22:29] it changes every day, right?
[00:22:31] Yeah.
[00:22:31] So keeping up with it
[00:22:32] is not easy,
[00:22:33] but the way
[00:22:36] I approach it is,
[00:22:37] I would say,
[00:22:38] again,
[00:22:38] I keep it to two or three
[00:22:40] simple ways
[00:22:41] I think about it.
[00:22:42] Number one,
[00:22:42] learn the basics.
[00:22:43] So when Gen AI came up,
[00:22:45] so to say,
[00:22:45] of course,
[00:22:46] with IBM
[00:22:46] we were reading
[00:22:47] a lot about it.
[00:22:48] Learning that:
[00:22:50] I may not be
[00:22:52] a mathematician
[00:22:53] who understands
[00:22:54] how the matrices work
[00:22:55] within LLMs
[00:22:56] and all of that,
[00:22:57] but how can it be used?
[00:22:59] So learning the basics
[00:23:01] from articles
[00:23:02] or videos
[00:23:03] or lectures
[00:23:03] from Stanford University
[00:23:05] or IBM Coursera,
[00:23:06] et cetera,
[00:23:06] et cetera,
[00:23:07] right?
[00:23:07] So if you think about it,
[00:23:08] there is plenty
[00:23:09] of materials out there.
[00:23:10] So learning from that
[00:23:11] is important.
[00:23:12] So learn the basics.
[00:23:13] Next is get hands-on
[00:23:15] because if it is
[00:23:17] only in theory,
[00:23:18] it doesn't help me.
[00:23:19] Personally,
[00:23:19] I need to get my hands dirty.
[00:23:21] I need to work with systems
[00:23:22] so that I can understand
[00:23:23] the pain
[00:23:24] or the value
[00:23:25] to developers
[00:23:26] and architects.
[00:23:27] So I do that
[00:23:28] getting hands-on.
[00:23:29] So at IBM,
[00:23:30] we have these brilliant
[00:23:31] watsonx challenges
[00:23:33] that we run
[00:23:33] across the company
[00:23:35] where we get
[00:23:36] our hands dirty,
[00:23:37] we get some time
[00:23:37] to look at
[00:23:38] and build some solutions
[00:23:39] using latest technologies.
[00:23:41] So we self-educate
[00:23:42] through that as well.
[00:23:43] So the company
[00:23:44] also encourages it.
[00:23:45] That's an important
[00:23:45] ingredient.
[00:23:47] And third,
[00:23:48] I work with customers.
[00:23:49] I work with internal
[00:23:50] AI projects
[00:23:51] on a daily basis.
[00:23:52] That way,
[00:23:53] I solve the problem,
[00:23:54] and understand
[00:23:54] what the gaps are,
[00:23:56] so that we can build
[00:23:57] new capabilities around them.
[00:23:58] So learning the basics,
[00:24:00] getting hands dirty,
[00:24:01] and being practical
[00:24:02] with real-world
[00:24:03] customer deployments
[00:24:04] and our own internal
[00:24:06] projects
[00:24:07] as client zero
[00:24:08] are an important element
[00:24:09] of continuous learning here.
[00:24:12] Well,
[00:24:12] we've covered so much
[00:24:13] in a short amount
[00:24:14] of time today.
[00:24:15] And if anyone listening
[00:24:16] just wants to
[00:24:16] dig a little bit deeper,
[00:24:17] Maybe they want to
[00:24:18] connect with you
[00:24:19] or read more
[00:24:20] about your work
[00:24:20] and everything
[00:24:21] that you're doing
[00:24:22] at IBM.
[00:24:23] Is there anywhere
[00:24:23] in particular
[00:24:24] you'd like to point
[00:24:25] everyone listening?
[00:24:26] Sure.
[00:24:26] IBM.com slash cloud.
[00:24:28] Easiest way to go.
[00:24:29] We'll have plenty
[00:24:30] of material right from there
[00:24:31] across AI
[00:24:32] and hybrid cloud workloads
[00:24:34] and how we are focused
[00:24:35] on this entire space.
[00:24:37] Well,
[00:24:38] today,
[00:24:38] I think we covered
[00:24:39] so much,
[00:24:39] from how data theft
[00:24:40] is impacting
[00:24:41] the security considerations
[00:24:43] for this model,
[00:24:44] to the rise of regulatory
[00:24:45] requirements
[00:24:46] and their impact
[00:24:47] on shared responsibility
[00:24:48] in the workplace,
[00:24:50] and also automation's
[00:24:51] role in
[00:24:52] cloud compliance.
[00:24:54] So many big
[00:24:55] talking points
[00:24:56] I'd love everyone
[00:24:57] listening to share
[00:24:57] what they're seeing
[00:24:59] out there as well
[00:25:00] but more than anything
[00:25:01] just thank you
[00:25:01] for starting
[00:25:02] this conversation
[00:25:03] with me today.
[00:25:04] Thank you so much.
[00:25:05] Thank you, Neil.
[00:25:06] Thanks for the opportunity.
[00:25:07] Great talking to you.
[00:25:07] I think our conversation
[00:25:09] today shed light
[00:25:10] on the critical shifts
[00:25:11] happening within
[00:25:12] cloud security
[00:25:13] as AI adoption
[00:25:14] continues to accelerate
[00:25:16] and will do
[00:25:17] next year.
[00:25:17] And as AI-generated
[00:25:20] risks grow,
[00:25:22] the cloud security
[00:25:23] responsibility model
[00:25:24] is evolving with it
[00:25:26] to meet these challenges.
[00:25:28] But the bigger question
[00:25:29] is how do you see
[00:25:30] your organisation
[00:25:31] adapting to this
[00:25:33] new model?
[00:25:33] What new strategies
[00:25:35] might help manage
[00:25:36] risks of AI
[00:25:37] driven applications?
[00:25:38] Let me know
[00:25:39] your thoughts on this one
[00:25:40] and we will keep
[00:25:41] this discussion going.
[00:25:43] But that's it for today
[00:25:44] so please,
[00:25:45] any questions,
[00:25:46] reach out to me
[00:25:47] on LinkedIn,
[00:25:48] Twitter,
[00:25:49] Instagram,
[00:25:49] just at Neil C. Hughes.
[00:25:50] Other than that,
[00:25:51] I'll be back again tomorrow
[00:25:52] with another guest
[00:25:53] and another thought-provoking episode.
[00:25:55] Thank you for listening
[00:25:56] as always
[00:25:57] and hopefully
[00:25:58] I will get to speak
[00:25:59] with you all again tomorrow.
[00:26:00] Well,
[00:26:00] quitting time for me,
[00:26:01] so bye for now.

