3055: Exploring AI Innovation and Ethics at Adobe MAX
Tech Talks Daily | October 14, 2024
28:36 | 22.91 MB


What does it take to ensure that cutting-edge AI innovations are ethically sound? Today, live from Adobe MAX, I sit down with Grace Yee, Senior Director of Ethical Innovation at Adobe, to explore the intersection of ethics and AI in real time.

As Adobe unveils its latest advancements at the event, Grace shares how the company's ethical AI program has evolved to meet the challenges posed by generative AI, including Adobe Firefly.

Grace has been spearheading Adobe's AI ethics program since its inception, and she offers exclusive insights into the global AI ethics governance structure she leads. This framework ensures that every AI feature Adobe develops—whether for creative tools or enterprise solutions—adheres to the company's principles of accountability, responsibility, and transparency. She walks us through the ethics review process, explaining how Adobe evaluates AI products for risks like bias and stereotypes, and collaborates with product teams to ensure innovations align with ethical standards before launch.

Grace highlights the behind-the-scenes work her team does to support new products. From collaborating with native speakers on multilingual support for Adobe Firefly to running risk discovery exercises for high-risk features, her team ensures that innovation is always balanced with responsibility.

As AI becomes more integrated into everyday experiences, how can we continue to innovate while keeping ethics at the forefront? Grace's perspective offers a window into how Adobe tackles this challenge head-on.

This week Adobe MAX will explore the future of AI, the role of ethics in tech innovation, and what it means for companies and creators alike. How do you think ethical frameworks will shape the next wave of AI development? Tune in and share your thoughts!

[00:00:04] Welcome back to The Tech Talks Daily Podcast. A lot of excitement in the air today at the Adobe MAX Creativity Conference, where creatives are exploring the latest trends and tools and learning from each other in the spirit of inspiration and collaboration. That's taking place at the Miami Beach Convention Center.

[00:00:24] And while the world is getting excited about that, we're talking about artificial intelligence and its impact, not just on the Adobe Creative Suite, but on just about everything that we do.

[00:00:35] I have a quick question for you all. How is your company ensuring that your AI technologies are developed responsibly and ethically, especially in this fast evolving world of generative AI?

[00:00:48] Well, today I'm very fortunate because I'm going to be joined by Grace Yee. She is the Senior Director of Ethical Innovation at Adobe, and she's going to be shedding light on this critical question.

[00:01:00] And Grace has been at the forefront of building Adobe's AI ethics program, guiding the company's commitment to accountability, responsibility and transparency throughout the development and deployment of Adobe's AI features, including Adobe's generative AI model, Firefly.

[00:01:19] So in today's episode, we're going to explore the journey of building that ethical innovation team and how the program has evolved to meet the challenges of AI innovation.

[00:01:28] I'm also going to ask my guest to share insights into Adobe's high-risk, low-risk approach, balancing the excitement of new technologies with the need to protect users from potential harms.

[00:01:40] And as we kick off the Adobe MAX conference, I also want to try and get a behind-the-scenes look at the collaborative efforts between the ethical innovation team and product teams, and how they ensure AI features are ready for launch.

[00:01:54] So if you've ever wondered how Adobe maintains its ethical standards in a rapidly advancing field, you're going to love this one.

[00:02:02] But enough scene setting from me. Let's dive in straight away and get Grace onto the podcast now.

[00:02:08] So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do?

[00:02:16] So my name is Grace Yee and I lead ethical innovation here at Adobe.

[00:02:22] And I guess before I kind of start into that, I think this is my 20th year at Adobe, which feels strange to say.

[00:02:30] And I think it's probably unusual for someone to be at one company for such a long time.

[00:02:36] But I have to say I've had an amazing amount of opportunity here at Adobe to work in a variety of different roles and organizations, including on our AI technology, Adobe Sensei.

[00:02:50] And through my journey, I am now, as I said, leading the ethical innovation team here at Adobe.

[00:02:56] And my team manages the review processes for accessibility, as well as AI ethics, to mitigate harm and bias and increase usability in our products for everyone.

[00:03:08] Obviously, we don't do this alone.

[00:03:10] We collaborate with amazing teams across marketing and product and legal and design to ensure that our AI technologies align with our AI ethics principles.

[00:03:22] Well, it's a real pleasure to have you on the podcast.

[00:03:25] We must have been in the same room at some point, because I've got something like six or seven Adobe Summits under my belt from Vegas, seeing everything evolve in the Adobe Creative Suite, from that focus on experiences a long time ago to the move into artificial intelligence.

[00:03:44] And of course, we've all seen AI evolve.

[00:03:47] And we talk a lot about it being a co-pilot and augmenting human creativity rather than replacing it.

[00:03:54] But what we don't always talk about enough is the AI ethics around that.

[00:04:00] And can you share your journey of building Adobe's AI ethics program right from its inception, and how the team's charter has evolved with the rise of generative AI, across technologies like Sensei and Firefly?

[00:04:15] So we started on this journey about five years ago.

[00:04:19] And obviously, at that time, generative AI wasn't even on the horizon.

[00:04:23] But we knew that we wanted to create something that could grow and change with our innovations.

[00:04:30] And I think even before we started working on the AI ethics principles, which serve as the foundation for our entire AI ethics governance process and program,

[00:04:43] we said that we wanted our principles to be simple, to be concise, and, most of all, practical and relevant.

[00:04:51] And so that was our high-level mission before we even went into the principles themselves.

[00:04:58] And because we wanted them to be simple, concise, practical, and relevant,

[00:05:04] we pulled together a cross-functional team of Adobe employees with diverse gender, racial and professional backgrounds.

[00:05:12] We included people from our research teams, our product teams, our legal teams, and our marketing teams.

[00:05:20] And together, this group arrived at three simple principles that we abide by when we develop and deploy our AI features.

[00:05:28] It's accountability, responsibility, and transparency.

[00:05:32] And accountability simply means that we have mechanisms in place to receive and respond to feedback or concerns.

[00:05:45] Responsibility means that we have systems and processes in place to ensure that our innovations undergo thoughtful evaluation and careful due diligence.

[00:05:55] And then transparency means that we're open about how we use AI and we want to make sure we bring our customers along in that journey.

[00:06:05] So these simple and very practical principles allowed us to ensure that we're evaluating how AI technology impacts humans.

[00:06:17] And from there, we were able to put processes in place for training, testing, and reviewing products.

[00:06:23] This is the framework that we used when we were building what I guess people are calling classical AI.

[00:06:31] And it's actually the same framework that we use to build generative AI products.

[00:06:37] And so our goal is really simple, but it's really important because we want to make sure that our AI enhances the human experience.

[00:06:46] We prioritize mitigating harmful bias, promoting inclusivity,

[00:06:50] and we're also expanding our focus to ensure that accessibility testing is happening so that we are testing and mitigating issues before release.

[00:07:02] By doing this, we're making sure that our products truly reflect creativity for all and aligning our innovations with our mission to serve everyone.

[00:07:13] And you mentioned there, the three practical principles, accountability, responsibility, and transparency.

[00:07:20] It's so important when dealing with AI.

[00:07:23] And if we zoom out for a moment, how do you at Adobe ensure that your AI products, particularly those involving generative AI,

[00:07:31] align with those principles?

[00:07:35] So we use those principles to develop our AI ethics review process.

[00:07:41] And all new AI features before they go to market to our customers go through this review process.

[00:07:49] The first thing that happens in the review process is that product teams come and fill out an AI ethics assessment,

[00:07:57] or what I think the industry calls an AI impact assessment, to ensure that our AI technologies are developed and deployed in a responsible and ethical manner,

[00:08:07] while also taking into account the risks and impacts to our users.

[00:08:11] So it's a multi-part assessment that's really designed to identify features and products that can perpetuate harmful bias and stereotypes.

[00:08:20] And so it's a high-risk, low-risk approach that we use so that we can focus our team and the product teams

[00:08:30] on those features that have the highest ethical impact to our customers without slowing down the pace of innovation.

[00:08:38] And so that has really been the core of our mission: to ensure that our technology meets the needs of our creators

[00:08:45] while mitigating the harmful bias that can come from the use of AI technologies.

[00:08:54] And on behalf of every business leader listening around the world who would like to follow your lead when it comes to ethics in AI,

[00:09:03] I'm curious, in your role as Senior Director of Ethical Innovation,

[00:09:07] what are some of the key challenges that you've had to face in establishing a global AI ethics governance structure?

[00:09:15] And how have you addressed some of those challenges too?

[00:09:18] It must be a question you've got a lot, and I'm sure it hasn't all been plain sailing.

[00:09:22] Is there anything you can share around there?

[00:09:24] Yes.

[00:09:26] It's been challenging, but it's also been so interesting as well.

[00:09:31] And it's really about scaling our process to meet the pace of innovation that's happening at Adobe.

[00:09:38] And so I think when people think about governance, it's not just about approving or not approving a feature.

[00:09:44] It's really about working alongside the product teams

[00:09:51] to really understand how the AI feature works and what it's intended to do.

[00:09:57] And then also the guardrails that are being developed.

[00:10:02] This really helps us scale the process and allow innovation to flourish because it allows us to adapt the assessment that I talked about in the previous question,

[00:10:13] and then the process, so that we understand the next novel innovation that's coming from our research and product teams.

[00:10:24] This partnership that we have also encourages every product team to think about how they can mitigate harm and bias as they begin product development.

[00:10:36] And now we have product teams coming to us before they've even started the assessment process, to talk about the risks and the solutions as they begin development.

[00:10:50] And this proactive approach mitigates potential harms before they affect our users or their communities.

[00:11:00] And it also helps us with addressing the scaling issues.

[00:11:04] And of course, Adobe is an incredibly large enterprise with a global audience.

[00:11:09] What are the specific processes or maybe even tools that you've developed at Adobe that help you maintain ethical standards in AI development?

[00:11:20] And how do those resources support teams right across the company that are working all around the world?

[00:11:27] It really goes back to our ethics assessment that we have.

[00:11:32] It gives teams a guide about what our ethical standards are as they develop the AI-powered features and tools.

[00:11:43] And it's an iterative process.

[00:11:46] You know, the assessment that we put together five years ago looks very different than the assessment that we have today.

[00:11:52] Because of this close collaboration we have with the product teams and the learnings that we're getting, we're evolving the assessment, we're evolving the review process to meet the needs of the teams, to meet the pace of innovation, but doing it responsibly for our customers.

[00:12:11] This also ensures that the evaluation, the process, and the assessment all remain relevant and effective as we look at emerging ethical concerns.

[00:12:24] And this goes back to the mission that we set forth at the beginning that we wanted our program and the principles to be practical, relevant, and simple.

[00:12:35] Adobe is known as a company that helps creative teams collaborate seamlessly.

[00:12:40] But I'm curious, if we look behind the curtain, how does your team collaborate with your product teams to ensure that AI-driven products are indeed ethically sound, ready for launch, especially in such a fast-paced environment leading up to events?

[00:12:55] I mean, as we're recording this today at Adobe MAX, for example.

[00:12:59] Yes. So it really starts with this process that we call a risk discovery.

[00:13:06] So we sit down with the product teams to understand the AI feature, understand what it's intended to do, and then we talk to the product teams about the potential impacts.

[00:13:17] We think about it in terms of perpetuating harmful stereotypes or generating hate content or inadvertently causing harm.

[00:13:25] And so this risk discovery exercise that we go through with the product teams will help inform what kind of testing needs to be done and what the feedback mechanisms should look like.

[00:13:40] And then we start to think about putting together a test design for that particular feature.

[00:13:46] We help them think about what are the areas they should be focusing on.

[00:13:51] And we may sometimes suggest mitigations.

[00:13:55] And then we also think about how do you measure the prevalence of that risk so that we can ensure that it meets our internal benchmarks before it gets released.

[00:14:06] And so this is a really important process that we go through with the product team before the feature is deployed into production.

[00:14:12] But we also recognize that we're never going to get to this place where it's like zero harm.

[00:14:18] And that's where the feedback mechanism that we have in our products is so important.

[00:14:25] We really need the feedback from our users and our community as they're using our AI features to come back to us so that we understand what they're seeing as they're using the features.

[00:14:39] We're taking those learnings and then we're also feeding those learnings into the features to make them better over time.

[00:14:48] And of course, what we're talking about here, the pace of technological change impacts everyone from creatives, techies, leadership teams and the boardroom suite as well.

[00:14:58] So can you tell me a bit more about the importance of training and ongoing education in AI ethics, and maybe as well how Adobe ensures that all employees involved in AI development are aligned with your company's ethical standards?

[00:15:15] Because it needs everybody involved here, right from the ground up.

[00:15:19] I completely agree.

[00:15:21] It is one of those things where everybody needs to be involved and everybody can make an impact.

[00:15:29] And it goes back to the fact that I'm really a practical person.

[00:15:33] And so with everything I do, I try to make it as practical as possible.

[00:15:36] And I talked about the risk discovery exercise.

[00:15:40] And that's really the practical education that we provide to the product teams.

[00:15:44] So we can sit down and talk through the potential areas of concern they have to think about for their particular feature.

[00:15:55] And it's different for every single feature.

[00:15:57] So I think that's where the work is nuanced and challenging.

[00:16:02] And I think all product teams would love to have a checklist and say, if I complete this checklist, then I'll be able to develop and deploy a safe AI feature.

[00:16:12] But unfortunately, when it comes to harmful bias, there is no checklist.

[00:16:19] And so that's why we believe that having a diverse perspective is really important.

[00:16:27] And that's why we believe that our diverse employees are an important asset to how we can improve our products before they're released to our customers.

[00:16:38] At Adobe, we use employee betas.

[00:16:42] And we also have conversations with teams in different regions to help make our products better before they're released to our customers.

[00:16:52] So in addition to working closely with the product teams, my team also works closely with our trust and safety team, our legal team, our product equity team, as well as our internationalization teams to help account for possible issues, helping them to monitor feedback and develop mitigations.

[00:17:14] We bring them into the process so they can see how important it is firsthand to understand why we have these standards in place.

[00:17:21] For example, last summer, Adobe Firefly added multilingual support for our users globally.

[00:17:29] And we worked really closely with our internationalization team to make sure that native speakers were involved in that process so that they could help expand our terminology to cover country-specific terms and connotations.

[00:17:44] And a few years ago, generative AI blindsided so many businesses.

[00:17:49] Then we saw the solutions.

[00:17:50] Then we saw somewhat of a fear, or cautiousness, around it, and questions about how we're going to manage our business data.

[00:17:58] And I think we've overcome a lot of those hurdles.

[00:18:00] Now we're starting to go through the implementation phase and getting used to using it.

[00:18:05] But as generative AI continues to evolve in workplaces around the world, are there any future trends that you anticipate in the field of AI ethics?

[00:18:14] And how are you at Adobe preparing to navigate some of these changes?

[00:18:19] I think AI is going to grow and become increasingly integrated in everyday experiences.

[00:18:27] Like we're already seeing that happen.

[00:18:28] And I think I'm excited to have that happen so there's more time for human creativity.

[00:18:36] But I think as AI becomes more and more ingrained in our daily life, the AI ethics and this high-risk, low-risk framework that we have becomes even more critical.

[00:18:49] Because when we look at an AI technology or an AI feature, we really need to go back and think about how humans are using this particular feature or product.

[00:19:01] How is it benefiting them?

[00:19:04] But then also think about like what are the risks?

[00:19:06] We have to think about it from an intentional perspective: there are bad actors out there who are going to intentionally try to use the AI feature to generate harmful content.

[00:19:17] But we also want to make sure that, from an unintentional perspective, if someone types in something very safe, the AI doesn't generate something harmful.

[00:19:30] I think there's a lot of pressure on every business around the world at the moment to stay up to speed with this and get that balance right.

[00:19:37] But I would imagine there's almost an additional pressure on Adobe because you are one of the most innovative and creative companies in the entire world.

[00:19:46] A lot of people are looking to you to lead the way.

[00:19:49] So how do you at Adobe balance innovation with ethical considerations, particularly when launching new AI features?

[00:19:56] And what kind of role do you see your team playing in shaping that future of AI at Adobe?

[00:20:01] There's a lot of pressure, I would imagine, right?

[00:20:03] Yeah, there is a lot of pressure.

[00:20:06] And it is really challenging to balance innovation with ethical considerations.

[00:20:12] And also it's the pace, right?

[00:20:14] It's changing so fast and there's so many amazing innovations coming from everywhere.

[00:20:22] So I think it goes back to our framework, and it goes back to people.

[00:20:29] So when we look at any feature, we always think about the people, like who's using it?

[00:20:38] How are they using it?

[00:20:39] What are the potential outputs of that feature?

[00:20:43] We also think about it, like I said, in terms of are they using it intentionally to do something harmful?

[00:20:51] Or is it something where the model or the feature unintentionally generates something harmful?

[00:20:57] And so again, we start with our high-risk, low-risk approach first to balance the innovation with the ethical considerations.

[00:21:08] So for example, on the low-risk side, there's a feature in Adobe Express that will look at a document template and recommend a set of fonts.

[00:21:19] So when we look at that, the risk for that feature is minimal.

[00:21:23] So we'll tailor our review process and our evaluation accordingly.

[00:21:29] But for a high-risk product or feature like Adobe Firefly, which is a text-to-image feature,

[00:21:37] we know that that particular feature needs to go through our more rigorous evaluation process to ensure that there are guardrails in place to prevent it from generating anything harmful.

[00:21:53] But I think it's also about understanding the feature, how the feature is using the particular technology,

[00:22:00] how the technology is put together, and what guardrails are in place.

[00:22:06] And also, for example, if we have a feature that's using Adobe Firefly to power its technology and it has all the guardrails in place,

[00:22:15] then we could approve it without a rigorous review, because we understand how Adobe Firefly is built,

[00:22:22] what the guardrails are, and what the guardrails are mitigating against.

[00:22:26] But if another feature is using Adobe Firefly with all of its guardrails in place,

[00:22:32] but then it adds in maybe another model to allow it to accept shapes like a triangle, a circle, a rectangle as inputs,

[00:22:43] and then it remixes the output to generate 3D objects,

[00:22:47] then maybe we need to take a closer look because that changes how that particular feature is using Adobe Firefly.

[00:22:55] And I'm so glad you mentioned that it's all about going back to people.

[00:23:00] And I'm very conscious that in our conversation today, we've focused heavily on AI, the technology,

[00:23:04] and looking towards the future and how this is going to evolve.

[00:23:08] But if I ask you to go back in time for a moment,

[00:23:11] because I think none of us are able to achieve any degree of success without a little help along the way.

[00:23:16] So I'm curious, is there a particular person that you're grateful towards,

[00:23:20] who maybe helped get you where you are, saw something in you, invested a little time in you,

[00:23:24] so that we can bring it back to people and give them a little shout-out today?

[00:23:28] Who would that be?

[00:23:30] You know, I think there's a saying

[00:23:32] that it takes a village to raise a child.

[00:23:35] So I can say it took a village to get me to where I am today.

[00:23:39] At Adobe, I've had some amazing opportunities to be mentored and sponsored by great Adobe leaders,

[00:23:48] many of whom have since retired from Adobe and many of whom are still here.

[00:23:53] But in addition to these great Adobe leaders,

[00:23:57] I also need to give credit to my parents who started the village.

[00:24:01] My mother taught me about how important it is to live a life of experiences, to try new things.

[00:24:08] And my father taught me about the importance of trust.

[00:24:12] And the one thing that you have to know about my father is that trust is the most important thing to him.

[00:24:20] For my father, trust is really about reliability, honesty, and integrity.

[00:24:25] And if you violate that trust with him, like you will never get it back from him.

[00:24:29] So that was instilled in me as a child.

[00:24:34] And it really is a big part of who I am today.

[00:24:38] And so for me, Adobe has been this amazing combination of having the opportunity to work

[00:24:48] in so many different organizations and so many different roles.

[00:24:51] So that's where leading a life of experiences and trying new things comes in.

[00:24:56] But then it's also this company where trust is at the center of everything we do.

[00:25:02] And so I think it's those two things that have really gotten me where I am today.

[00:25:09] Wow, what an incredible story.

[00:25:11] I absolutely love that.

[00:25:12] Incredibly powerful.

[00:25:13] But anyone listening that just wants to find out more information about anything we talked about today,

[00:25:19] AI ethics is a huge topic with business leaders listening around the world right now.

[00:25:24] So anywhere in particular you would like to point people listening to find out more information,

[00:25:28] speak with your team, or just dig a little bit deeper on the topic,

[00:25:32] where would you like to point them?

[00:25:33] We have an amazing AI ethics page on Adobe.com.

[00:25:38] So I would say start there.

[00:25:40] There's a lot of information about what we do and how we do it.

[00:25:45] So it's a great place to get some resources.

[00:25:50] Excellent.

[00:25:51] Well, I'll have links to everything so people can find that nice and easily.

[00:25:55] And there was so much I loved about our conversation today,

[00:25:58] listening to your insights from spearheading the ethics program since its inception

[00:26:04] and discussing the process of building the ethical innovation team,

[00:26:08] including how your team's charter has evolved since the rise of generative AI.

[00:26:13] And also those three core principles there, accountability, responsibility, and transparency.

[00:26:19] But I think it was just a perfect moment to end on there,

[00:26:22] hearing how the foundation of your origin story, and what put you on this path,

[00:26:26] was the lesson in trust

[00:26:29] that your dad taught you.

[00:26:32] Incredibly powerful moment to end on,

[00:26:33] but just thank you for sharing your story with me today.

[00:26:36] Thank you for taking the time, Neil.

[00:26:40] And that brings us to the end of our conversation with Grace Yee from Adobe.

[00:26:44] For me, it's been fascinating to learn about the thoughtful,

[00:26:48] deliberate processes behind Adobe's AI ethics program.

[00:26:51] We hear a lot about the shiny tools.

[00:26:53] And for me, it was reassuring to take a look under the hood

[00:26:57] and learn about how the company is ensuring its generative AI models

[00:27:01] live up to those high standards of responsibility and transparency.

[00:27:05] And Grace's insights into the importance of diverse perspectives,

[00:27:09] ongoing reviews, and close collaboration with product teams

[00:27:13] give us all a glimpse into the rigorous effort that it takes

[00:27:16] to create an AI that benefits everyone.

[00:27:20] And as we prepare for life in 2025,

[00:27:24] AI is going to continue to evolve.

[00:27:26] We know this for certain.

[00:27:27] But how do you see the role of ethics in shaping its future?

[00:27:31] Should companies be doing more to ensure that AI technologies are developed responsibly?

[00:27:37] Or is regulation the key?

[00:27:39] I'd love to hear your thoughts on this.

[00:27:42] Please email me at techblogwriter@outlook.com.

[00:27:45] You can connect with me on LinkedIn, Twitter, Instagram, just at Neil C. Hughes.

[00:27:49] Nice and easy to find there.

[00:27:51] I'd love to continue this conversation with each and every one of you.

[00:27:54] So let me know.

[00:27:55] But until next time, keep exploring how technology and ethics intersect in the world of innovation.

[00:28:02] If you don't subscribe already, please, wherever you listen to the podcast, hit that subscribe button.

[00:28:08] I've got a different guest every single day where we explore a different industry

[00:28:12] and the impacts of technology on it, whether that be revolutionary or evolutionary.

[00:28:17] And yes, you are all cordially invited to join me tomorrow.

[00:28:20] So thank you for listening today.

[00:28:23] And hopefully I will speak with you all bright and early tomorrow.

[00:28:26] Bye for now.