3065: Securing AI Governance With Optiv: Ensuring Fairness Across Algorithms
Tech Talks Daily · October 23, 2024
41:49 · 33.5 MB

Are we truly harnessing the full potential of AI, or are we missing critical elements that could set us back? Join me as I explore a topic often overlooked yet vital for the ethical advancement of technology: AI bias.

On this episode of Tech Talks Daily, I'm joined by Jennifer Mahoney, Demand Delivery Manager of Data Governance, Privacy, and Protection at Optiv Security, Inc., who brings a wealth of knowledge on this subject.

Jennifer shares insights into the origins, manifestations, and impacts of AI bias, alongside strategies to mitigate its effects. She explains how biases in AI stem from human cognitive processes and societal stereotypes, influencing everything from facial recognition technologies to targeted advertising. We delve into various types of bias such as stereotyping, confirmation bias, and representation bias, examining the real-world repercussions that often remain unnoticed.

Jennifer also discusses the necessity of secure AI governance and the challenges organizations face in operationalizing security programs that integrate AI responsibly. From the pressures of rapid deployment to the need for diverse perspectives and thorough testing, she outlines what effective governance looks like in practice. Additionally, our conversation covers the evolving landscape of AI regulation, including the impact of the EU AI Act and the emergence of industry-specific guidelines.

As we navigate these complex issues, how can organizations balance innovation with ethical considerations to harness the full power of AI without falling prey to its pitfalls? What are the next steps for leaders looking to implement robust AI governance frameworks in their operations?

Join us as we explore these pressing questions and more, offering insights on creating a more equitable technological future.

[00:00:03] [SPEAKER_00]: Quick question for you all, how can we ensure that AI technologies that are transforming our world right now are both secure and fair?

[00:00:14] [SPEAKER_00]: Well today I want to learn more about a topic that doesn't get enough attention. Yep, I'm talking about AI bias.

[00:00:21] [SPEAKER_00]: And joining me today is my guest Jennifer Mahoney. She's the Demand Delivery Manager of Data Governance, Privacy and Protection at a company called Optiv Security.

[00:00:34] [SPEAKER_00]: And with her expertise, today we're going to explore the subtle yet significant ways that biases can creep into your AI systems.

[00:00:44] [SPEAKER_00]: And the challenges that organizations are facing right now in mitigating these risks.

[00:00:49] [SPEAKER_00]: So my guest today will take us through the many factors that contribute to AI bias from stereotyping to feedback loop bias.

[00:00:57] [SPEAKER_00]: And share some real world examples where AI bias has had a real impact.

[00:01:03] [SPEAKER_00]: And we'll also discuss the importance of secure AI governance and the strategies that organizations can adopt to prevent these issues.

[00:01:12] [SPEAKER_00]: And by that I mean including implementing human in the loop models and creating diverse development teams.

[00:01:19] [SPEAKER_00]: And very often, I think, we get distracted by the shiny new technology: let's get it implemented, let's get it in.

[00:01:26] [SPEAKER_00]: And think less about the human model. So I want to find out more about that.

[00:01:30] [SPEAKER_00]: And as the world moves forward with AI, how can we also balance rapid innovation with ethical considerations?

[00:01:39] [SPEAKER_00]: And what role will AI governance play in shaping that future too?

[00:01:43] [SPEAKER_00]: These are just a few of the things that I think will ensure our conversation is thought-provoking.

[00:01:49] [SPEAKER_00]: And it's also a timely reminder that as AI continues to transform industries worldwide, there's a whole heap of responsibilities that come with that.

[00:01:59] [SPEAKER_00]: But enough from me. Let's get my guest on now.

[00:02:03] [SPEAKER_00]: So a massive warm welcome to the show.

[00:02:06] [SPEAKER_00]: Can you tell everyone listening a little about who you are and what you do?

[00:02:10] [SPEAKER_01]: Yeah, thank you for having me. I'm Jennifer Mahoney.

[00:02:13] [SPEAKER_01]: Hi, I am a practice manager at Optiv Security, working in our data governance, privacy and protection practice, which includes AI strategy and governance work with our clients.

[00:02:25] [SPEAKER_01]: I've been with Optiv for about two and a half years now, and have a bit of a winding path to this place.

[00:02:31] [SPEAKER_01]: I started my career in the chemical regulatory compliance space.

[00:02:35] [SPEAKER_01]: I'm a chemist by training and started out working with employee health and safety issues, doing hazard communication and risk assessment of chemicals, writing the warning labels that you see on the products that you buy in the stores and things like that.

[00:02:50] [SPEAKER_01]: And the opportunity arose to transition into a privacy role about six years ago.

[00:02:55] [SPEAKER_01]: And then that has led me to where I am today.

[00:02:58] [SPEAKER_01]: So a little bit unusual compared to some of my colleagues, but I think it serves me well.

[00:03:05] [SPEAKER_00]: Absolutely love that.

[00:03:06] [SPEAKER_00]: And every day on this podcast, I try to explore and maybe demystify topics that people are talking about, not just in the tech industry, but in business in general.

[00:03:18] [SPEAKER_00]: And I think one of the big topics right now is understandably, AI.

[00:03:22] [SPEAKER_00]: We talk about the exciting opportunities around it.

[00:03:24] [SPEAKER_00]: We're also talking about the ethical use of AI.

[00:03:27] [SPEAKER_00]: But another topic that keeps coming up is around AI bias.

[00:03:32] [SPEAKER_00]: And it is becoming a significant concern across both the technological and business landscape.

[00:03:38] [SPEAKER_00]: But from your viewpoint, can you just explain the various factors that you see contributing to bias in AI systems?

[00:03:46] [SPEAKER_00]: Anything from stereotyping, confirmation bias, which we all see when we're searching for things online, and representation bias too.

[00:03:54] [SPEAKER_00]: What are you seeing here?

[00:03:57] [SPEAKER_01]: Yeah, exactly.

[00:03:58] [SPEAKER_01]: The concept of bias is such a big concern.

[00:04:01] [SPEAKER_01]: And the challenge is that bias is so embedded in the way we see the world, the way we have as humans been taught to think about things.

[00:04:12] [SPEAKER_01]: It's the way that we process information.

[00:04:14] [SPEAKER_01]: We kind of put things into boxes and take things as they come.

[00:04:17] [SPEAKER_01]: And the way that we consume information, consume the things we see in the world around us.

[00:04:22] [SPEAKER_01]: And you mentioned a couple types of bias and stereotyping.

[00:04:27] [SPEAKER_01]: That's the most common and most obviously recognized one, the one people can kind of relate to in our day to day.

[00:04:34] [SPEAKER_01]: Again, the way we process the world around us.

[00:04:37] [SPEAKER_01]: We have sayings that I've heard my teachers or my grandparents say that if you hear hoofbeats and you live in the US or you live in the UK, think horse, not zebra.

[00:04:49] [SPEAKER_01]: And if it walks like a duck and quacks like a duck, it must be a duck.

[00:04:52] [SPEAKER_01]: And the stereotyping aspect is to believe unfairly that things with a particular characteristic are the same and to treat them as such.

[00:05:05] [SPEAKER_01]: Right. And while this is a podcast, you can't see me.

[00:05:11] [SPEAKER_01]: I'm over six feet tall and I'm a female.

[00:05:15] [SPEAKER_01]: Can you think of a question I might get asked a lot, even at this stage of my life, well into my career?

[00:05:20] [SPEAKER_01]: I get asked, did you play basketball?

[00:05:24] [SPEAKER_01]: And I did.

[00:05:25] [SPEAKER_01]: Yeah, I did.

[00:05:26] [SPEAKER_01]: But so did my sister.

[00:05:28] [SPEAKER_01]: My sister is about five, six, never gets asked that question.

[00:05:32] [SPEAKER_01]: And so stereotyping is the easy one.

[00:05:35] [SPEAKER_01]: Tall women must have played basketball.

[00:05:38] [SPEAKER_01]: The raising awareness part, talking about bias, like I said, stereotyping is kind of the easy one that folks recognize.

[00:05:44] [SPEAKER_01]: Like, yes, I do these things.

[00:05:47] [SPEAKER_01]: Right.

[00:05:47] [SPEAKER_01]: I have asked Jen if she played basketball.

[00:05:50] [SPEAKER_01]: But there are many more types of bias that we see manifest in AI. And confirmation bias,

[00:05:59] [SPEAKER_01]: which you mentioned, we see manifest in our daily lives in the algorithms in our social media platforms and our news aggregator apps that feed us more of the things that we have consumed in the past.

[00:06:14] [SPEAKER_01]: Things we have liked, people we have read or followed and the Netflix recommendations.

[00:06:22] [SPEAKER_01]: You just finished binging this show here.

[00:06:25] [SPEAKER_01]: Five more just like this that we think you would like.

[00:06:28] [SPEAKER_01]: And other biases in representation.

[00:06:32] [SPEAKER_01]: I won't go into the details of all of these.

[00:06:35] [SPEAKER_01]: But take, for example, representation bias, cognitive and labeling bias, and aggregation bias. I mentioned I'm a chemist.

[00:06:46] [SPEAKER_01]: Right.

[00:06:46] [SPEAKER_01]: And I took a whole lot of biology and a lot of research classes in the course of that. And aggregation bias is where we de-identify information and aggregate it in a way that can hide important differences in our samples and our studies.

[00:07:02] [SPEAKER_01]: And then apply that downstream to use cases that can mask the details that we need in order to get appropriate outcomes.
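The aggregation effect described here can be made concrete with the classic kidney-stone numbers commonly used to illustrate Simpson's paradox: treatment A wins inside every subgroup, yet loses once the subgroups are pooled, because the case mix differs between treatments. A minimal sketch:

```python
# Classic Simpson's-paradox numbers (kidney-stone treatment study):
# treatment A has a higher success rate within each stone-size
# subgroup, yet a lower rate once the subgroups are aggregated.
data = {
    #         (a_ok, a_n, b_ok, b_n)
    "small": (81, 87, 234, 270),
    "large": (192, 263, 55, 80),
}

def rate(ok, n):
    return ok / n

for stratum, (s_a_ok, s_a_n, s_b_ok, s_b_n) in data.items():
    assert rate(s_a_ok, s_a_n) > rate(s_b_ok, s_b_n)  # A wins per subgroup

# Aggregate (de-identified) view: the comparison reverses.
a_ok = sum(v[0] for v in data.values())  # 273
a_n = sum(v[1] for v in data.values())   # 350
b_ok = sum(v[2] for v in data.values())  # 289
b_n = sum(v[3] for v in data.values())   # 350
print(f"pooled A: {rate(a_ok, a_n):.0%}  pooled B: {rate(b_ok, b_n):.0%}")
# pooled A: 78%  pooled B: 83% — B now looks better
```

This is exactly the downstream risk: a model trained only on the pooled rates would "learn" the opposite of what every subgroup shows.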

[00:07:13] [SPEAKER_01]: So, I think that's a lot of information about evaluation bias.

[00:07:14] [SPEAKER_01]: My husband is a psychologist and he studies work psychology.

[00:07:18] [SPEAKER_01]: He's an industrial organizational psychologist.

[00:07:21] [SPEAKER_01]: But psychologists have defined, I think it's about 180 or so human cognitive biases.

[00:07:32] [SPEAKER_01]: And these manifest daily in the way that we act, the way that we talk, the way that we think, the way that we perceive the world.

[00:07:38] [SPEAKER_01]: And then when we, as humans, collect data, train models, deploy AI, our biases are reflected in our product, our output.

[00:07:53] [SPEAKER_01]: And so, we see these in algorithmic biases that are reinforcing the things that we have brought to the table as humans.

[00:08:02] [SPEAKER_01]: So, we develop the model, we deploy the model, the use cases, the data that we have collected, the way that we have designed our studies.

[00:08:13] [SPEAKER_01]: Say an industry builds and deploys an AI model against, for example, analyzing workforce makeup or salary over time.

[00:08:24] [SPEAKER_01]: What is the time frame that we're looking at?

[00:08:27] [SPEAKER_01]: In the STEM fields, women's representation has been increasing in the last couple of decades, but

[00:08:36] [SPEAKER_01]: historically over time, it's been a very male-dominated field. Nursing was a male-dominated

[00:08:45] [SPEAKER_01]: field in the beginning. I think it was around in the mid-1800s is when things really flipped

[00:08:54] [SPEAKER_01]: to become a female-dominated field. And we still see that female domination while the numbers of

[00:09:05] [SPEAKER_01]: male nurses is increasing. Depending on the data that we're using, what is that representation?

[00:09:12] [SPEAKER_01]: And in what time frame are we looking at? What is the racial distribution of a sample? And what

[00:09:18] [SPEAKER_01]: are the use cases that we're applying our model against? There's a lot of things to think about.

[00:09:24] [SPEAKER_01]: And the first step is to recognize it, right? To recognize bias, and only then to start

[00:09:31] [SPEAKER_00]: to take steps to mitigate that. Such important points. And we've seen in the workplace throughout

[00:09:37] [SPEAKER_00]: time, hiring managers unwittingly channel their inner bias by hiring people who look and sound

[00:09:44] [SPEAKER_00]: like them, have the same personality, skin color and background. And if we look at our own online

[00:09:51] [SPEAKER_00]: interactions, you mentioned the great algorithms in Netflix and Amazon and things there. And we're

[00:09:57] [SPEAKER_00]: almost constructing a world from the familiar in which there's nothing to learn. And we're just

[00:10:02] [SPEAKER_00]: almost, it's almost invisible propaganda and indoctrinating ourselves with our own ideas. And

[00:10:08] [SPEAKER_00]: there's a lot of talk at the moment about filter bubbles and the damage that's doing and how it's

[00:10:12] [SPEAKER_00]: stopping us from personal growth, et cetera. But just to shine a light on this some more, are there any

[00:10:19] [SPEAKER_00]: other real-world examples of where AI bias is leading to unintended or damaging outcomes, but

[00:10:26] [SPEAKER_00]: particularly in the context of security and privacy? Because I know this is an area that you're

[00:10:31] [SPEAKER_01]: very passionate about too, right? Yeah. The easy example, as you mentioned,

[00:10:37] [SPEAKER_01]: is in talent acquisition and the algorithms that are screening resumes. And just from the start,

[00:10:45] [SPEAKER_01]: it's not even the individual hiring the person that looks like them, but they don't even see the

[00:10:50] [SPEAKER_01]: candidate that might be diverse because our algorithm has shunted it off into the ether before

[00:10:56] [SPEAKER_01]: it even gets to the human, right? But from a security and privacy perspective, privacy, certainly.

[00:11:02] [SPEAKER_01]: The primary example here is facial recognition technology. And the NIST, the U.S. National Institute

[00:11:12] [SPEAKER_01]: of Standards and Technology did a study that found higher false positives on facial recognition matching

[00:11:20] [SPEAKER_01]: on Black, Native, and Asian faces when attempting kind of that one-to-one identification of an image

[00:11:29] [SPEAKER_01]: and who is this individual. And on a study of kind of that one-to-many approach, higher false positives

[00:11:38] [SPEAKER_01]: for Black women in particular. And the significance of this is that, for example, Black women are then

[00:11:49] [SPEAKER_01]: now at higher risk or highest risk of being falsely accused of crime. And with facial recognition

[00:11:59] [SPEAKER_01]: technology, it is expanding. NFL stadiums here in the United States are using facial recognition

[00:12:07] [SPEAKER_01]: technology. There was a pilot project last year. This year, they're using it, I think, primarily

[00:12:13] [SPEAKER_01]: focused on workers entering the stadium after that broader pilot last year. And so we'll see where

[00:12:19] [SPEAKER_01]: that goes in the future. But airlines, I just returned from a trip to Europe last week and going through

[00:12:29] [SPEAKER_01]: customs, first of all, getting on the plane, going through customs. They are now using facial recognition

[00:12:35] [SPEAKER_01]: to match me to my passport and all that. And I actually took the opportunity to request manual

[00:12:42] [SPEAKER_01]: evaluation versus the computer, versus standing in front of the iPad. I asked a lot of questions and

[00:12:49] [SPEAKER_01]: the desk workers couldn't answer my questions in terms of how long is the information retained and

[00:12:55] [SPEAKER_01]: all that kind of stuff. So I imagine, well, I think I was probably the only person on my flight asking

[00:13:01] [SPEAKER_01]: those questions. But I'm a white female and don't have the same risk bias that other population groups

[00:13:13] [SPEAKER_01]: have experienced in their history, right? But it's still a risk to spread my digital footprint.

[00:13:22] [SPEAKER_01]: One of the challenges that we have in terms of recognition of bias is not necessarily evaluating

[00:13:32] [SPEAKER_01]: the thing that's in front of us. So from a security privacy perspective, it's easy to kind of see a

[00:13:39] [SPEAKER_01]: thing, whether it's an ad, a recommendation on Netflix, social media, clicking through that to kind of

[00:13:47] [SPEAKER_01]: get to a malicious activity behind it, phishing emails and things like that. But what is not present?

[00:13:55] [SPEAKER_01]: What's not there? What are we not seeing? And so when moving away from the privacy space,

[00:14:04] [SPEAKER_01]: an example that folks can kind of relate to in terms of bias recognition and the absence of things

[00:14:09] [SPEAKER_01]: is going back to the Black population, Black people are disproportionately suggested to have arrest

[00:14:19] [SPEAKER_01]: records. So they might see ads for bail bonds or payday loans, for example, and not see ads for certain

[00:14:30] [SPEAKER_01]: employment opportunities or housing assistance or opportunities. And so the challenge that we have as

[00:14:38] [SPEAKER_01]: humans and as a society is recognizing bias when it's present and also the absence of

[00:14:46] [SPEAKER_01]: information being presented because of our biases, right? Does that make sense?

[00:14:51] [SPEAKER_00]: Yeah, 100%. We've highlighted quite a few biases in the workplace and I suspect people listening outside

[00:14:57] [SPEAKER_00]: of the tech industry, they know all about them. And something else that's really a big topic we don't talk

[00:15:03] [SPEAKER_00]: about enough is black box AI. We don't know how AI reaches the conclusions that it does. These AI models

[00:15:10] [SPEAKER_00]: arrive at conclusions or decisions without providing any explanations as to even how they were reached.

[00:15:17] [SPEAKER_00]: It's going to be incredibly difficult to defend against this. So what are some of the best practices

[00:15:22] [SPEAKER_00]: that you'd recommend for organizations to prevent AI bias, particularly in early stages of data collection

[00:15:29] [SPEAKER_00]: and model development, which is where so many businesses are right now?

[00:15:34] [SPEAKER_01]: Absolutely. Transparency is, I think, going to become increasingly important in the way that AI

[00:15:41] [SPEAKER_01]: is governed. The first step to prevent bias is raising awareness: training everyone in

[00:15:50] [SPEAKER_01]: your organization, not just your data scientists, but the business and the folks who will be on the

[00:15:54] [SPEAKER_01]: output side of things as well, to learn about AI, learn about risk, including bias. And at some point,

[00:16:03] [SPEAKER_01]: the way that I think about things: at some point, I came across what was like a worksheet

[00:16:11] [SPEAKER_01]: from the California Department of Health, I think it was. And they simplified things in the approach to

[00:16:18] [SPEAKER_01]: bias and AI into five steps. And I know I'm going to mess up the details, but the first one

[00:16:24] [SPEAKER_01]: was acknowledge bias exists. We are human. And these biases that we deal with, and whether we acknowledge

[00:16:36] [SPEAKER_01]: them or not, have been part of what has kept us alive as a species for the last couple of hundred thousand

[00:16:43] [SPEAKER_01]: years, right? We all have biases and we have to admit it. First, that's step one. Step two,

[00:16:50] [SPEAKER_01]: self-reflection. Start with you. Think about the way you think. And if you're in the workforce,

[00:16:57] [SPEAKER_01]: you may have come across the opportunity for unconscious bias training. I've done it at several

[00:17:03] [SPEAKER_01]: points in my career as I change employers. Here's unconscious bias training. And there's a quote from

[00:17:11] [SPEAKER_01]: Adam Grant's book, Think Again, that says,

[00:17:15] [SPEAKER_01]: "Listen to ideas that make you think hard, not just opinions that make you feel good."

[00:17:22] [SPEAKER_01]: And that bubbles up for me when I'm talking about bias and thinking about conversations about how to

[00:17:28] [SPEAKER_01]: recognize bias: that it's the things that make us feel good we might skip over, because again,

[00:17:38] [SPEAKER_01]: confirmation bias, it's a thing we recognize. That doesn't mean that output, that thing is the right

[00:17:44] [SPEAKER_01]: output for everybody, right? And so to think not just about ourselves. Well, first of all, to think

[00:17:50] [SPEAKER_01]: about ourselves, but also put ourselves in other people's shoes. And third is constructive uncertainty.

[00:18:01] [SPEAKER_01]: Question things that we might have put exclamation points after. The things we think we are certain of,

[00:18:09] [SPEAKER_01]: question those and keep taking another backward step to say, well, what about the thing that led me to

[00:18:17] [SPEAKER_01]: that conclusion? The thing that led me to that. And so when we're developing AI models and training models,

[00:18:22] [SPEAKER_01]: it's the algorithms in the models. It's the data that we're training. It's the study that gathered the

[00:18:28] [SPEAKER_01]: data. It's the hypothesis that drove the study. And right. So we keep stepping backwards and keep

[00:18:35] [SPEAKER_01]: questioning and keep thinking. And there's an approach to process improvement, lean methodology, about

[00:18:44] [SPEAKER_01]: asking why five times, right? So question things we thought we were sure of. And when we have a good team

[00:18:57] [SPEAKER_01]: governing our AI, it's a diverse team that is not just our data scientists that know the thing and know

[00:19:06] [SPEAKER_01]: how things are supposed to work. It's people asking, why do we do this? Why do we do it this way?

[00:19:10] [SPEAKER_01]: And why didn't we do something else? Let's see where I'm at. Number four: be comfortable with being

[00:19:17] [SPEAKER_01]: uncomfortable. When we are asking why, when we are challenging people's deep-seated understanding

[00:19:24] [SPEAKER_01]: in how we came to a conclusion, we often get to kind of the things we think we know about ourselves

[00:19:31] [SPEAKER_01]: and the way that we see the world, right? So if we are surrounded with people who think the same as us

[00:19:38] [SPEAKER_01]: and have had the same experiences of us, what is there for us to learn? So we want diverse voices

[00:19:45] [SPEAKER_01]: being part of the conversation. And the fifth thing is learn, right? Learn about people consciously,

[00:19:51] [SPEAKER_01]: try to put yourself in new opportunities, new situations, new environments to remove your

[00:20:00] [SPEAKER_01]: stereotypes, eliminate your biases as much as we can, accept feedback, give feedback constructively

[00:20:08] [SPEAKER_01]: on the topic that's in front of us. And so we take these things and apply them to our development

[00:20:15] [SPEAKER_01]: and use of AI. And that leads us to what feedback mechanism are we using to learn? So we think

[00:20:24] [SPEAKER_01]: about a human in the loop, which is the kind of primary way that we think about humans being

[00:20:30] [SPEAKER_01]: involved from every step of the life cycle of AI from data selection, data cleansing, model

[00:20:39] [SPEAKER_01]: training, testing, model deployment and use, and at every opportunity to gather feedback, not just from

[00:20:47] [SPEAKER_01]: the experts, not just from the person who is very deep in the knowledge on how things work or are

[00:20:54] [SPEAKER_01]: supposed to work, but people who can come in and provide fresh eyes. Oftentimes in process improvement,

[00:21:02] [SPEAKER_01]: my newest hire is the person who takes us the most forward in our efforts to improve because they're

[00:21:08] [SPEAKER_01]: like, why? Well, that's just the way we've always done it, but why? Right? And so you need SME

[00:21:16] [SPEAKER_01]: knowledge, you need the subject matter experts, but having fresh eyes to be able to question why and give

[00:21:23] [SPEAKER_01]: feedback and having those experts kind of take that feedback at every stage of the life cycle is really

[00:21:30] [SPEAKER_01]: important to identifying and then mitigating whether it's bias, security, risk, or whatever else.
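The human-in-the-loop pattern described above, with humans able to intervene and feed corrections back at every stage, can be sketched as routing low-confidence outputs to a reviewer and logging every correction as retraining fuel. The threshold, model, and reviewer here are illustrative stand-ins, not any particular product's API:

```python
from dataclasses import dataclass, field

@dataclass
class HumanInTheLoop:
    """Route uncertain model outputs to a human reviewer and keep
    the feedback so the model can later be retrained on it."""
    model: callable                 # returns (label, confidence)
    reviewer: callable              # returns a corrected label
    threshold: float = 0.8
    feedback_log: list = field(default_factory=list)

    def predict(self, item):
        label, confidence = self.model(item)
        if confidence < self.threshold:
            label = self.reviewer(item, label)       # human decides
            self.feedback_log.append((item, label))  # fuel for retraining
        return label

# Illustrative stand-ins for a real model and a real reviewer.
def toy_model(item):
    return ("spam", 0.95) if "win" in item else ("ham", 0.55)

def toy_reviewer(item, suggested):
    return "ham"  # the human confirms or corrects the low-confidence call

hitl = HumanInTheLoop(model=toy_model, reviewer=toy_reviewer)
print(hitl.predict("win a prize"))    # confident: no review needed
print(hitl.predict("lunch at noon"))  # uncertain: routed to the human
print(len(hitl.feedback_log))         # one item queued for retraining
```

The point Jennifer makes about acting on feedback lives in `feedback_log`: a thumbs-up/thumbs-down that nobody reads is just a list that grows; someone has to consume it and retrain.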

[00:21:38] [SPEAKER_00]: And I think every business and brand will have a diverse range of customers around the world and

[00:21:45] [SPEAKER_00]: equally every problem in every organization will need different ways of thinking to solve the problems

[00:21:51] [SPEAKER_00]: that they have there. So how important do you think diversity of thought is in the AI governance process?

[00:21:59] [SPEAKER_00]: And are there any steps that organizations can take to ensure this wide range of perspectives are

[00:22:05] [SPEAKER_00]: considered and not just that loudest voice in the room that we probably remember? Hopefully that's

[00:22:10] [SPEAKER_00]: gone now. I'm not in those meeting rooms anymore, but how do we avoid that?

[00:22:16] [SPEAKER_01]: There are often loud voices in the room. The squeaky wheel gets the grease, right?

[00:22:20] [SPEAKER_01]: Yeah.

[00:22:21] [SPEAKER_01]: But no, as I've said, it's having diversity of thought, having the business represented across the organization,

[00:22:30] [SPEAKER_01]: so that your AI governance function as a company is not just your AI experts and your data scientists.

[00:22:40] [SPEAKER_01]: It should be your business functions across the organization from customer service,

[00:22:48] [SPEAKER_01]: finance, privacy, risk, security, marketing, HR, etc., and having a good representation

[00:22:59] [SPEAKER_01]: across the organization to identify risk in how AI use and deployment can affect their little

[00:23:07] [SPEAKER_01]: center of the world. But also having those voices be a part of things from the start when you're

[00:23:14] [SPEAKER_01]: building your AI governance, because the business ultimately is going to own the risk. Those of us

[00:23:22] [SPEAKER_01]: on the corporate side of things in a privacy, security, risk, or legal function, our opinions can be

[00:23:31] [SPEAKER_01]: overridden by the needs of the business. And whether it's the bottom line, whether it's customer satisfaction,

[00:23:37] [SPEAKER_01]: whether it is public perception and reputation driving the decisions, business owns risk. And when it comes

[00:23:47] [SPEAKER_01]: to AI use and deployment, having the business be a part of the governance from the start, first of all,

[00:23:56] [SPEAKER_01]: introduces everyone to the folks who can provide expert guidance, introduces the experts to the

[00:24:02] [SPEAKER_01]: individuals who are going to be using the output of whatever the thing is. And having varying levels

[00:24:09] [SPEAKER_01]: of the technical expertise helps everyone to ask why and to be comfortable with asking why and with

[00:24:17] [SPEAKER_01]: learning from one another, making sure we understand why are we doing things? Why are we doing it this

[00:24:23] [SPEAKER_01]: way? Why are we comfortable with our outcome? And what is the value, and why is that value important to the

[00:24:30] [SPEAKER_01]: business? It needs to be from a governance perspective across the organization and have representation

[00:24:36] [SPEAKER_00]: across the organization. And I'm curious from what you've seen here, any examples you can share or

[00:24:42] [SPEAKER_00]: insights you can share around how this human in the loop governance model can help mitigate AI bias

[00:24:49] [SPEAKER_00]: and improve that overall security? It's not all doom and gloom. There's a lot of positive change that can

[00:24:55] [SPEAKER_01]: be implemented here and make a real difference, right? Absolutely. So benefits of human in the loop.

[00:25:02] [SPEAKER_01]: First of all, we're human. So we're going to think about impact, think strategically about the why,

[00:25:12] [SPEAKER_01]: bring emotion and human centric thinking to the way that we evaluate the data that we're looking at,

[00:25:21] [SPEAKER_01]: the outputs that we receive from the AI system and think about, is this good and right? And does this

[00:25:28] [SPEAKER_01]: feel good? What's the gut telling us about what we are experiencing? And we can learn from our failures

[00:25:40] [SPEAKER_01]: as humans. Without feedback, an AI algorithm doesn't know it failed. So having humans in the loop to

[00:25:49] [SPEAKER_01]: recognize problems and provide the opportunity for redirection, retraining, cleaning up our data,

[00:25:58] [SPEAKER_01]: retraining the algorithm, whatever that is, that gives us the opportunity to, first of all,

[00:26:05] [SPEAKER_01]: identify the issue and then act on it. But there are risks too. Yeah.

[00:26:10] [SPEAKER_01]: I had mentioned the first benefit is, first of all, we're human. That's our biggest risk in this as well.

[00:26:16] [SPEAKER_01]: We're human. We make mistakes. We are emotional. We get tired. We're expensive. You have to train us.

[00:26:27] [SPEAKER_01]: You have to feed us, right? And those positives and negatives are very

[00:26:36] [SPEAKER_01]: closely related, two sides of the same coin, right? From a security perspective,

[00:26:42] [SPEAKER_01]: humans are our biggest strength. From an organizational perspective, our human capital is our greatest

[00:26:49] [SPEAKER_01]: resource. But they're also our biggest weakness. It just takes one click on a phishing email or one

[00:26:58] [SPEAKER_01]: wrong name on an email going out into the wild to create a problem. And from the AI risk mitigation,

[00:27:10] [SPEAKER_01]: bias mitigation, the opportunity for humans to identify the problem and give feedback so that

[00:27:18] [SPEAKER_01]: the issue can be addressed is kind of the first fail safe, right? But to actually do something with

[00:27:27] [SPEAKER_01]: that feedback, whether you see an output and it's like a thumbs up, thumbs down, somebody needs to be

[00:27:32] [SPEAKER_01]: looking at that feedback in order to act on it and make changes in a timely manner, right? So we've got

[00:27:40] [SPEAKER_00]: to take that next step as well. And there is so much excitement around AI at the moment and adoption

[00:27:46] [SPEAKER_00]: is increasing all around the world. And as organizations continue to operationalize their security programs

[00:27:53] [SPEAKER_00]: with AI, are there any other key challenges that you see them facing in balancing innovation and that

[00:28:00] [SPEAKER_00]: need to change with ethical considerations like bias prevention because it is quite a delicate balancing act,

[00:28:07] [SPEAKER_00]: isn't it?

[00:28:08] [SPEAKER_01]: And it's no different from any other challenge that we face as an organization. There are ethical

[00:28:16] [SPEAKER_01]: considerations. There's the idea that AI is going to take over my job. But what this all comes down to

[00:28:22] [SPEAKER_01]: is the same challenge as with anything: time, money, and resources, right? Like anything else. So pressure

[00:28:34] [SPEAKER_01]: from the business doesn't always allow us time to do things the way we would ideally like to do them.

[00:28:42] [SPEAKER_01]: So we go through proof of concept, minimum viable product, and run with that, right? And having

[00:28:52] [SPEAKER_01]: all of the ideal security measures, all the training, all of the ideal conditions for AI to be secure,

[00:29:02] [SPEAKER_01]: to do AI responsibly, without the impact of bias, is impossible, right? And even with all the security

[00:29:15] [SPEAKER_01]: measures in place, our personnel resources that we depend on to provide that human in the loop,

[00:29:20] [SPEAKER_01]: monitoring and feedback are probably wearing other hats and they have other jobs. And so distraction

[00:29:29] [SPEAKER_01]: from being able to fully participate or carving out time in order to do the testing or provide the

[00:29:38] [SPEAKER_01]: feedback or evaluate the feedback is one of our greatest challenges, right? But that's no different

[00:29:44] [SPEAKER_01]: from everything else that we face as an organization.

[00:29:48] [SPEAKER_00]: And I've got to say, as an ex-IT applications guy and former change manager, I've got to ask,

[00:29:55] [SPEAKER_00]: what role do you see things like ongoing testing and validation playing in maintaining the integrity of

[00:30:02] [SPEAKER_00]: these AI systems? And how can organizations better implement these practices? And one of the big reasons

[00:30:08] [SPEAKER_00]: I say that is testing and change management has always been the belts and braces of any IT strategy.

[00:30:15] [SPEAKER_00]: But even now, almost on a daily or weekly basis, I'm seeing more and more businesses suffer

[00:30:22] [SPEAKER_00]: outages because something didn't go through the right change procedure and a software update

[00:30:27] [SPEAKER_00]: took down half of the country. There was a big one in the US; we won't mention their name,

[00:30:31] [SPEAKER_00]: just a couple of days ago. So how do you see that playing out?

[00:30:37] [SPEAKER_01]: Yeah. Testing. I'm a consultant, right? And so when we are providing a statement of work for

[00:30:45] [SPEAKER_01]: a solution to our client and they don't like the price, testing is the first thing that gets cut,

[00:30:50] [SPEAKER_01]: right? I want the output; I don't want to take the time to do the testing. But first of all,

[00:30:56] [SPEAKER_01]: to answer your question, testing and validation in maintaining the integrity of AI systems is

[00:31:03] [SPEAKER_01]: critical. How we do that, the things we need to test for, are broader than traditional security

[00:31:09] [SPEAKER_01]: practices, right? The risks and the value that AI systems introduce are broader than just a security

[00:31:20] [SPEAKER_01]: problem. It's an entire organization problem. Bias is not a security problem to fix, right?

[00:31:26] [SPEAKER_01]: It creates security problems and creates security risks, but it's bigger than that and not a problem

[00:31:33] [SPEAKER_01]: that security alone can address. Testing is critical. In some sectors, regulations are already

[00:31:46] [SPEAKER_01]: starting to govern the use of AI against personal data, and in certain industries, certain testing is already

[00:31:52] [SPEAKER_01]: mandated: stress testing your models, evaluating for drift, in particular to mitigate bias,

[00:32:00] [SPEAKER_01]: to not perpetuate bias; once something has been identified, returning an error in your output instead of

[00:32:07] [SPEAKER_01]: giving the biased or faulty answer. Things like that can help improve the integrity of the output that we

[00:32:16] [SPEAKER_01]: receive from AI systems. And we've seen that in the commercial models, the commercial

[00:32:24] [SPEAKER_01]: platforms that are out there, where the answers we got from certain inputs and prompts

[00:32:32] [SPEAKER_01]: three weeks ago or four months ago or a year ago are different today

[00:32:37] [SPEAKER_01]: because they've been retrained. They've had new information introduced into them. And so being

[00:32:43] [SPEAKER_01]: able to start to recognize those things is critical as an output of the testing that we do over the

[00:32:50] [SPEAKER_01]: life cycle of the system.
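Jennifer's guardrail idea, returning an error once a biased output is identified rather than serving the faulty answer, and watching the rate of flagged outputs for drift, can be sketched roughly. This is a minimal, hypothetical Python illustration, not Optiv's implementation: the keyword blocklist stands in for whatever trained bias classifier or fairness metric a real system would use, and all names here (`BLOCKLIST`, `bias_check`, `guarded_response`, `flag_rate`) are invented for the sketch.

```python
# Hypothetical sketch: a keyword screen stands in for a real bias detector.
BLOCKLIST = {"women can't", "men are better"}

def bias_check(text: str) -> bool:
    """Return True if the output trips the (stand-in) bias screen."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

def guarded_response(model_output: str) -> dict:
    """Pass a model output through, or replace it with an error
    instead of giving the biased or faulty answer."""
    if bias_check(model_output):
        return {"ok": False, "error": "output withheld: failed bias screen"}
    return {"ok": True, "answer": model_output}

def flag_rate(recent_outputs: list[str]) -> float:
    """Share of recent outputs tripping the screen; a rising rate over
    time is one crude signal of model drift worth investigating."""
    if not recent_outputs:
        return 0.0
    return sum(bias_check(o) for o in recent_outputs) / len(recent_outputs)
```

The human in the loop still matters here: the error path and the drift rate only create work items, and someone has to review the flags and retrain or roll back the model.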

[00:32:53] [SPEAKER_00]: 100% with you. And as you said, it's not one and done. So looking ahead, I mean, how do you see the

[00:32:59] [SPEAKER_00]: field of AI governance evolving, especially in terms of ensuring that security and ethical considerations

[00:33:05] [SPEAKER_00]: are embedded into the AI development from the ground up because we are literally months away from 2025 now?

[00:33:12] [SPEAKER_00]: I'm curious how you see this evolving in the months ahead.

[00:33:16] [SPEAKER_01]: Yeah. First of all, this year has flown. I can't believe it's already October and almost 2025.

[00:33:22] [SPEAKER_01]: But when I first got into the chemical regulatory space, it was just at the moment where the European

[00:33:31] [SPEAKER_01]: Union passed their classification labeling and packaging regulation. So for anyone in your audience

[00:33:37] [SPEAKER_01]: that's familiar, right, the CLP regulation was based on UN recommendations on how hazards are classified

[00:33:44] [SPEAKER_01]: and communicated to individuals so that there's more consistency. And when you see a certain symbol or you

[00:33:52] [SPEAKER_01]: see a warning message on a package, it means the same level of care should be taken, whether you're

[00:34:01] [SPEAKER_01]: in the European Union or in the United States or anywhere else in the world, because it comes from the UN.

[00:34:06] [SPEAKER_01]: And so the EU was kind of first. And then we saw other countries around the world following suit,

[00:34:12] [SPEAKER_01]: revising and updating their legislation to align. And we're at that same moment for AI

[00:34:18] [SPEAKER_01]: where the EU AI Act passed earlier this year, came into effect earlier this year. And we have started

[00:34:26] [SPEAKER_01]: to see highly regulated sectors first in the U.S. at least, in the insurance industry, the utility

[00:34:33] [SPEAKER_01]: industry, and in government, there are already regulations about AI use, especially as AI is applied

[00:34:40] [SPEAKER_01]: to personal information, from a privacy perspective. Those broader,

[00:34:46] [SPEAKER_01]: national-level things just take time, given the rulemaking process, but we're starting to see broader

[00:34:53] [SPEAKER_01]: efforts from a regulatory perspective. In the meantime, like I said, those things take time, so we will continue to see

[00:35:03] [SPEAKER_01]: industry guidance evolve. Already, there are hundreds of frameworks out there on how to govern AI.

[00:35:14] [SPEAKER_01]: And we will see that guidance evolve, mature, continue in the interim over the next several months and years.

[00:35:23] [SPEAKER_01]: And we will see also organizations acting independently, putting forth their own statements around AI use,

[00:35:34] [SPEAKER_01]: transparency, raising the bar for themselves. But that also raises the bar for their

[00:35:39] [SPEAKER_01]: competitors to keep pace in the way their customers perceive them,

[00:35:50] [SPEAKER_01]: in the areas of AI governance, transparency, and responsible use of AI. So kind of three approaches

[00:35:56] [SPEAKER_01]: there, from regulatory, industry, and individual organizational perspectives. I think that individuals

[00:36:03] [SPEAKER_01]: create pressure with feedback on the acceptance or pushback around use of AI. And as we continue to

[00:36:13] [SPEAKER_01]: learn, as all of us continue to learn, things can only be made better as our

[00:36:23] [SPEAKER_01]: brains, I think, kind of catch up to the value and opportunity that AI brings, but also the risk,

[00:36:32] [SPEAKER_01]: including bias, that is present in front of us as well. So I think we'll see things evolving on

[00:36:38] [SPEAKER_01]: multiple fronts here over the next months and years.

[00:36:41] [SPEAKER_00]: So much food for thought from your insights today. I think that's a powerful moment to finish on. But

[00:36:47] [SPEAKER_00]: before I do let you go, I'm conscious we've been incredibly forward focused today talking about AI

[00:36:52] [SPEAKER_00]: and indeed your insights. And none of us are able to achieve any degree of success without a little

[00:36:58] [SPEAKER_00]: help along the way though. So I've got to ask, is there a particular person that maybe you're grateful

[00:37:03] [SPEAKER_00]: towards, who helped get you where you are, maybe saw something in you, invested some time in you,

[00:37:08] [SPEAKER_00]: that we can give a little shout out to today? Who would that person be?

[00:37:13] [SPEAKER_01]: Well, I think too many to name. If I was going to name one person who has kind of encouraged me to be

[00:37:21] [SPEAKER_01]: my best self, that person would be my grandma. And my family knows, my family knows, right?

[00:37:28] [SPEAKER_01]: And just very encouraging for me to do my best. And whether that day was a success or a failure,

[00:37:36] [SPEAKER_01]: didn't matter, didn't matter. Grandma's love, right? But from a professional perspective,

[00:37:42] [SPEAKER_01]: for anyone listening, I hope you have, let me speak directly to you for a moment. I hope you

[00:37:48] [SPEAKER_01]: have a mentor, at least one, whether it is a formal relationship or an informal relationship

[00:37:54] [SPEAKER_01]: that from a professional perspective is firmly in your corner. My mentors know who they are. Thank

[00:38:00] [SPEAKER_01]: you. And sometimes you outgrow your mentor. That's okay. Right? And those relationships need to grow

[00:38:08] [SPEAKER_01]: with you over the course of your career. I have been very lucky to work for amazing people, to work

[00:38:16] [SPEAKER_01]: with amazing people smarter than me, even if there have been moments in my career that I haven't listened

[00:38:23] [SPEAKER_01]: or learned from them. Sometimes I need to learn things the hard way for myself. And so apologies

[00:38:30] [SPEAKER_01]: for that. But the team I'm working with at Optiv right now, especially if I can mention some of them

[00:38:36] [SPEAKER_01]: by name, Randy Larrier, Brian Golembeck, Helvin Walker, Tiffany Sjogren, and Derek Schwent, we are doing

[00:38:43] [SPEAKER_01]: great things. And I am honored to be working alongside you. And I won't mention them by name, but my family,

[00:38:49] [SPEAKER_01]: my husband, who encourage me and have to put up with me every single day. I have to thank them as well.

[00:38:56] [SPEAKER_00]: Awesome. Well, a great answer. And a big shout out to everyone you mentioned. I think it's so important

[00:39:01] [SPEAKER_00]: to recognize the people whose shoulders we're standing on; they lift us up. So a big thank you for sharing

[00:39:08] [SPEAKER_00]: that today. And for anyone listening wanting to connect with you or find out more information about Optiv,

[00:39:13] [SPEAKER_00]: that maybe we should have spent a little bit more time talking about as well. But anyone wanting to check

[00:39:17] [SPEAKER_00]: any of that out, where would you like to point everyone?

[00:39:23] [SPEAKER_01]: The easiest thing: you can email us at ai@optiv.com or visit optiv.com/ai.

[SPEAKER_00]: Awesome. Well, we covered so much in our conversation

[00:39:34] [SPEAKER_00]: today from factors contributing to bias in AI, including stereotyping, confirmation bias,

[00:39:41] [SPEAKER_00]: labeling bias, representation bias, cultural bias. There's a long list there. And I think

[00:39:46] [SPEAKER_00]: just talking about it in the open, and talking about retaining diversity of thought in model development

[00:39:52] [SPEAKER_00]: and AI governance processes is so important right now. There's a lot of businesses going

[00:39:58] [SPEAKER_00]: straight in, not wanting to get left behind. So this piece of work is so important. And also,

[00:40:03] [SPEAKER_00]: we're talking about the technology a lot, but implementing a human in the loop governance model,

[00:40:09] [SPEAKER_00]: that's where the magic happens too. But thank you for shining a light on this today. Really

[00:40:13] [SPEAKER_00]: appreciate your time.

[SPEAKER_01]: Thanks for having me, Neil.

[SPEAKER_00]: And that brings us to the end of today's episode

[00:40:19] [SPEAKER_00]: with Jennifer Mahoney from Optiv Security. I think it's clear that the conversation around AI bias,

[00:40:25] [SPEAKER_00]: it needs to evolve alongside the technology itself. From understanding the origins of bias to implementing

[00:40:33] [SPEAKER_00]: those strong governance frameworks, Jennifer provided valuable insights today into exactly how

[00:40:39] [SPEAKER_00]: organizations can approach AI development with both security and fairness in mind.

[00:40:47] [SPEAKER_00]: And as we move forward in this AI-driven era, how do you think businesses should prioritize ethical AI

[00:40:55] [SPEAKER_00]: governance? Should there be more regulation? Or should companies lead the way with their own

[00:41:01] [SPEAKER_00]: frameworks? Something that's impacting every business out there right now. So I'd love to hear your thoughts

[00:41:07] [SPEAKER_00]: on this critical issue. Until next time, let's keep pushing the boundaries of technology while ensuring

[00:41:13] [SPEAKER_00]: that fairness and security remain at the forefront. As always, email me at techblogwriter@outlook.com,

[00:41:21] [SPEAKER_00]: or find me on X, LinkedIn, and Instagram, just @NeilCHughes. Love to hear your thoughts on this one very popular topic.

[00:41:27] [SPEAKER_00]: Other than that, I'll return again tomorrow with a different subject and another guest from a different

[00:41:33] [SPEAKER_00]: industry. But thank you for listening today. And I will speak with you all again bright and early

[00:41:38] [SPEAKER_00]: tomorrow. Bye for now.