What does responsible AI really look like when it moves beyond policy papers and starts shaping who gets to build, create, and lead in the next phase of the digital economy?
In this conversation recorded during AWS re:Invent, I'm joined by Diya Wynn, Principal for Responsible AI and Global AI Public Policy at Amazon Web Services. With more than 25 years of experience spanning the internet, e-commerce, mobile, cloud, and artificial intelligence, Diya brings a grounded and deeply human perspective to a topic that is often reduced to technical debates or regulatory headlines.
Our discussion centers on trust as the real foundation for AI adoption. Diya explains why responsible AI is not about slowing innovation, but about making sure innovation reaches more people in meaningful ways. We talk about how standards and legislation can shape better outcomes when they are informed by real-world capabilities, and why education and skills development will matter just as much as model performance in the years ahead.

We also explore how generative AI is changing access for underrepresented founders and creators. Drawing on examples from AWS programs, including work with accelerators, community organizations, and educational partners, Diya shares how tools like Amazon Bedrock and Amazon Q are lowering technical barriers so ideas can move faster from concept to execution.
The conversation touches on why access without trust falls short, and why transparency, fairness, and diverse perspectives have to be part of how AI systems are designed and deployed.
There's an honest look at the tension many leaders feel right now. AI promises efficiency and scale, but it also raises valid concerns around bias, accountability, and long-term impact. Diya doesn't shy away from those concerns. Instead, she explains how responsible AI practices inside AWS aim to address them through testing, documentation, and people-centered design, while still giving organizations the confidence to move forward.
This episode is as much about the future of work and opportunity as it is about technology. It asks who gets to participate, who gets to benefit, and how today's decisions will shape tomorrow's innovation economy. As generative AI becomes part of everyday business life, how do we make sure responsibility, access, and trust grow alongside it, and what role do we each play in shaping that future?
Useful Links
Tech Talks Daily is sponsored by Denodo
[00:00:04] Have you ever wondered what responsible AI looks like? We've all heard the buzzword. But when you move past the slogans and start dealing with real-world risks, policies and shifting expectations, things are often quite complex. And these are a few things that I was pondering as I sat down with someone who has helped shape how governments and global enterprises
[00:00:27] think about the guardrails that will define this next chapter. So in a year where AI has moved from theory to boardroom priority, I think today's conversation will hopefully offer a rare look at the work happening behind the scenes to make sure innovation stays safe and accessible. And my guest today is a responsible AI lead in public policy at AWS, and she spends her
[00:00:54] time working with executives, engineers and policy makers all around the world and helps them understand where AI presents opportunities, but equally where caution is required. And her work spans early machine learning practices, the rise of generative AI, and indeed the growing pressure for standards that balance innovation with protection. And she also brings a human lens to this field,
[00:01:23] emphasizing education, inclusion, and the importance of access for entrepreneurs and indeed entire communities that have often been sidelined in previous waves of technology. But one of the things that struck me in our conversation today is just how much responsible AI has matured. It's no longer an afterthought or a marketing term; it's becoming a foundation of trust,
[00:01:48] a practical framework for design and a way for organizations to build confidence in systems that influence our daily work. So how do you turn values into workflows or transform policy into something that engineers can actually use? Before we go into today's episode, I just want to give a quick shout out to my good friends at Denodo. Having spent five days at the AWS re:Invent conference,
[00:02:17] one of my biggest takeaways is that AI is moving fast, but success remains elusive. Now most projects fail not because of AI, but because the data foundation isn't ready. And that is one of the many reasons why I joined up with my good friends at Denodo. Because with Denodo's logical data management platform, combined with AWS's
[00:02:41] cloud capabilities, teams can unlock AI-ready, governed data without unnecessary replication. So what does that mean? It means you can optimize your lakehouse, accelerate agentic AI, and build data products that make self-service real. So together, Denodo and AWS help organizations turn AI projects from hype into measurable outcomes.
[00:03:06] Ultimately, faster, smarter, and with full governance. So discover how you and your team can benefit by going to denodo.com/aws. But now let's get today's guest on. So a massive warm welcome to the podcast. Can you tell everyone listening a little about who you are and what you do? Sure. Thank you so much for having me. I'm Diya Wynn. I am a responsible AI lead in public policy here at AWS.
[00:03:35] And essentially, that means that I get to advocate for safe and responsible use of artificial intelligence, both in engagement with our executives and also with those creating legislation and policy, here in the US and abroad. Well, it's a pleasure to have you join me. It's incredibly busy here, isn't it? I mean, you've been at
[00:04:02] re:Invent all week, speaking with builders, policymakers and global companies. I'm curious, from all the conversations you've had, what stood out to you most about where responsible AI is heading next? Because I bet a lot of people are coming to you and asking for your opinion on things. And maybe equally, you're trying to sound out a few ideas yourself. But what kind of conversations are you having? It indeed is a busy week, but that's very much the nature of re:Invent. It's a busy time for us.
[00:04:30] And it's a great time as well, because we get to share quite a bit about the services that we have, as well as the new things being released. One of the things I've had an opportunity to talk quite a bit about is our new Well-Architected Responsible AI Lens. This was a recent release that provides guidance to our customers. It gives them a framework to unpack where
[00:04:59] there are potential areas of risk, and it also provides an assessment. It produces a report, along with best practice guidance that helps them address those areas of risk, and some of that guidance points to actual services we have as well. It's been quite an interesting new release. It's not a product or a service in the same way, but it's extremely useful to customers as they think about building: it makes concrete where risk can occur and the kinds of questions they need to ask to unpack it.
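For listeners who want to go hands-on with the lens, here is a minimal sketch using the AWS Well-Architected Tool's boto3 API. The episode doesn't give the lens's exact alias, so the code lists the available lenses first; the workload details are placeholders:

```python
import boto3

# The Well-Architected Tool exposes lenses and workload reviews via this client.
wa = boto3.client("wellarchitected")

# List available lenses to find the alias of the Responsible AI Lens
# (the exact alias isn't stated in the episode, so look it up first).
for lens in wa.list_lenses()["LensSummaries"]:
    print(lens.get("LensAlias", lens.get("LensArn")), "-", lens["LensName"])

# Hypothetical workload; attach the lens alias discovered above to start
# working through its risk-assessment questions in the console.
workload = wa.create_workload(
    WorkloadName="genai-customer-assistant",        # placeholder name
    Description="Customer-facing generative AI assistant",
    Environment="PREPRODUCTION",
    ReviewOwner="ml-platform-team@example.com",     # placeholder owner
    Lenses=["wellarchitected"],                     # plus the Responsible AI Lens alias
    AwsRegions=["us-east-1"],
)
print("Workload created:", workload["WorkloadId"])
```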
[00:05:28] And that's one of the reasons I was excited to speak with you today, because I'm an ex-IT guy, and at every tech conference I go to, everyone talks about responsible AI almost as an afterthought or a buzzword. But you've helped build the responsible AI practice inside AWS. So when I hear words like framework, you're talking my kind of language.
[00:05:56] So when you look back at the early days though, what problem were you trying to solve? And how has that mission evolved as AI has moved into the mainstream? Because you've probably seen so many big changes. Well, one of the biggest things I'd say is that we were doing this before generative AI. So everyone is talking about generative AI. And of course, in certain environments, they talk about it like AI is brand new. Well, back in 2020,
[00:06:22] we were primarily talking about machine learning and computer vision. And at that time, we didn't have the NIST AI Risk Management Framework and a lot of the other established standards like ISO's. So we were essentially developing our own framework and our own approach to addressing those risks and engaging with our customers. And the other thing that was really different about
[00:06:46] that time was that we, in many cases, were working to convince customers that they needed to pay attention to this; that there was an opportunity for them to uncover areas of concern they might not have explored, and to think really strategically about how they would implement AI. Now, people understand that this is something we need to pay attention to. And we're much further along the journey
[00:07:15] as a company as well, with a number of services that help our customers get there. And policymakers right now, they're racing to understand the technology while businesses want more clarity on what good AI looks like. There's a lot of talk around black box AI, for example, where you don't know what's going on inside. So how do you help these two groups meet somewhere in the middle, so regulation can support innovation rather than slowing it down?
[00:07:41] Because it's always a tricky balance, isn't it? Well, one of the first things is just educating them on what the technology is and how to think about it. I use this as a simple way of putting it, perhaps an oversimplification, but there really is truth in it: AI is a very robust pattern-matching, or pattern-identifying, system that leverages a lot of data. And so when you
[00:08:08] understand that it's identifying patterns in data to drive how it predicts or how it generates new content, some of the mystery around the technology disappears. Being able to simply explain the technology is a good way to start, because we haven't elected policymakers for their technical knowledge. So we actually have an
[00:08:37] opportunity to bridge the gap, helping them understand what's possible with the technology and then where there can be considerations for legislation to protect our citizens and consumers.
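Her pattern-matching framing is worth making concrete. As a toy illustration only (real models are vastly larger, but the principle is the same), here is a tiny Python bigram model that learns which word follows which and then generates new text from those learned patterns:

```python
import random
from collections import defaultdict

# Toy training data: the "patterns" are simply which word follows which.
corpus = ("the model finds patterns in data and "
          "the model uses patterns to generate text").split()

following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate new content by repeatedly predicting a plausible next word.
word, output = "the", ["the"]
for _ in range(8):
    if word not in following:  # no observed pattern to continue from
        break
    word = random.choice(following[word])
    output.append(word)

print(" ".join(output))  # e.g. "the model uses patterns to generate text"
```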
[00:09:06] And another word that keeps coming up is trust. That's a word I hear a lot in enterprise AI conversations this year. So in your view, what does trust look like in practice when organizations start using generative models at scale inside an organization? Responsible AI is the foundation of trust. That's what I say to customers, and I use that phrase a lot when I'm talking to policymakers as well. So it's having a set of tenets, a belief structure, values around
[00:09:30] responsible AI, and defining that as a strategy that governs the way you design, build, deploy and use the technology. And when they do that, that helps to engender trust. Trust because your customer knows that this was created in a way that's actually going to benefit them, that they get the desired outcome, and that there aren't harms that come as
[00:09:55] a result of that. But that also helps to build trust in the technology. People fear what they don't understand, or what they believe is not going to work well for them. But when they see that in the products that we or our customers produce, that's going to help build trust. And then that also removes some of the fear or concern that
[00:10:19] our legislators have as well, right? Because oftentimes they are legislating, or trying to legislate, in response to the fear. And so when we build responsibly, when we think about all of the stakeholders that should be considered or are impacted by the technology, when we consider approaches to make sure that the responses are truthful, that we have robustness in our models or in the applications,
[00:10:49] and that we're doing things to preserve privacy, those things matter in building trust. And that's really at the core of responsible AI. And when doing a little research on you, I know another topic that you're passionate about and speak a lot about is access. So how do you think generative AI is maybe changing the path for entrepreneurs who have historically been left out because they don't have the tools, the subscriptions, or the networks to compete? Because it's more important than ever, isn't it? People can now create things. And that is
[00:11:19] one of the ways the playing field can be leveled, right? It brings some equity in terms of access, or creates the potential for access, with the technology. And so I do think it is removing barriers, right? It can remove
[00:11:42] barriers to access and allow people to engage with the technology and create where they may not have had the resources to do so before. Yeah, 100% with you. And from an AWS standpoint, you've been investing in programs with HBCUs, accelerators and community partners. I'm curious, from what you're seeing,
[00:12:08] what kind of impact have you seen when mentorship, funding and generative AI tools all come together for founders? Have you seen it all come together anywhere? Sure. You mentioned a few things that I'd really like to highlight, because I think it is an area of strength that we've committed to at Amazon and AWS.
[00:12:31] A part of the responsible AI strategy that I made reference to was our investment in people, being people-centered, and this focus on educating, equipping and partnership around responsible AI. So that is realized in the kinds of programs that you described:
[00:12:55] the HBCU AI/ML educators program that is providing technology and AI education to these institutions, so that they can then translate that into the kinds of courses and content being provided for students. And that gives them a foundation in this new area of technology that helps prepare them for being members of the workforce tomorrow that we all are looking to hire.
[00:13:23] Right. So that's an example. Or just being able to provide some of our low-cost or even free education programming, with what's in our Skill Builder, or investments in what we now call AI future ready, which is our investment and commitment to educating and equipping people for the workforce of the future. And we've doubled down on our investment
[00:13:51] around that. All of these things are ways in which we're providing access, and it's part of our commitment to do so: to drive a level of adoption, but also to increase the level of opportunity that people have to engage. And for business leaders that could be listening all over the world, they often struggle with the practical side of responsible AI. So when a company asks,
[00:14:18] hey, where do I begin? What are the first steps that you usually advise before they even touch a model? Well, I'm going to make reference to the Responsible AI Lens again, because I think this is a great resource that people can have a look at and use as a starting point. With this release, it's not only integrated into the AWS Well-Architected console,
[00:14:46] but it's also a document that people can download. It's roughly 60 pages, so there's a lot of content there. But if people really wanted to figure out where to start, this is a set of questions that starts at the beginning, right? When you're thinking about designing the application, or the AI use case that you now have, you have some framing to be able to leverage. Now, before we had that, I would say to organizations to think about
[00:15:16] what responsible AI means to them. Oftentimes we would come up with a strategy, or companies might think of a set of principles, or for us, we call them dimensions: the areas of focus that we want to pay attention to in responsible AI. And then the second part of that is having the leadership alignment and commitment, and educating the organization
[00:15:41] around that. And that's important because we want to drive this sort of culture of understanding and responsibility around it. So those are often a couple of steps just to start, right? Defining what it is for the company and how we're going to commit to that, and then equipping and educating people so that they can be a part of making that real in the organization. And from the outside looking in, your work seems to intersect with education and also the future of
[00:16:09] work, which is so important now, especially with a lot of entry-level roles disappearing. So for the younger people listening, and even people in mid-career, what skills should they be focusing on as AI becomes part of our everyday workflows? It goes without saying that they have to have some AI literacy, right? I'm old enough to remember when we started to make computer literacy and internet literacy a core part of the criteria
[00:16:36] for individuals that were looking for work. And it's becoming the same: people expect that you have some understanding and ability to navigate with AI. That means you understand how to create a prompt and why that's important to get the output that you want, for instance. And since many of our jobs now, and the work that we do, involve using AI tools, that kind of familiarity is important. But then the next thing that I think is really important, especially for young
[00:17:04] people, is critical thinking and problem solving. I can't overemphasize how important that is, because we can have the systems or the technology do certain things, but we still need people to be able to look at the context, understand the conditions, and think strategically about how and when we apply AI, for instance, or how we look at our own respective disciplines
[00:17:30] and figure out ways to leverage the technology to solve a problem in that area. That's critical thinking and problem solving. We can't abandon that. And one of the reasons we can't is because AI is not perfect, right? So we still need our own judgment and intellect to bring to bear when we're using these tools.
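As a concrete illustration of the prompt literacy she describes, here is a minimal sketch using Amazon Bedrock's Converse API via boto3. The region and model ID are placeholders; any model enabled in your account would work. The point is how much the specificity of the prompt shapes the output:

```python
import boto3

# Placeholder region and model ID; substitute what's enabled in your account.
client = boto3.client("bedrock-runtime", region_name="us-east-1")
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def ask(prompt: str) -> str:
    """Send a single-turn prompt to a Bedrock model and return the reply text."""
    response = client.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 300, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

# A vague prompt leaves audience, length, and format for the model to guess.
print(ask("Tell me about responsible AI."))

# A specific prompt states audience, scope, and output shape up front.
print(ask("In three bullet points for a non-technical executive, summarize what "
          "responsible AI means in practice for a customer-facing chatbot."))
```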
[00:17:56] Such an important point. And going back to the access you were talking about a few moments ago: communities that have previously felt excluded from earlier waves of technology. One of the things that seems to be happening is they're stepping forward now with new ideas powered by generative AI, which is a great thing. So what stories have stayed with you that show this shift in action? Because I just love the fact that anybody with any idea can bring it to life now. I'm curious if you've seen anything like that. I've heard of so many creative things
[00:18:25] that people are using AI for. You know what, I don't know if you've had an opportunity to check this out, but on the expo floor there's an area they call the builders area, and it's internal developers, employees from Amazon and AWS, who have come up with very interesting ideas, and they're
[00:18:52] showcased right in the middle of re:Invent. These are people that thought of something very creative on their own time and have built technology using our services. One of the things that I saw was someone who has devised a way to provide coaching on your golf swing. And so,
[00:19:20] it's really interesting, right? They were even talking about the next set of features they would incorporate, perhaps even virtual reality: you would take a swing, and it would capture the way in which you swing with computer vision, to be able to provide feedback to improve it. And I was like, wow, I'm sure there are so many executives running around that would love that and
[00:19:46] will want to be able to perfect their swing. I am terrible at golf, so it was really interesting for me to have a look at. There was someone down there that was also allowing you to custom-create your own comic book. I did that, because I have sons and, you know, comic books and video games are really cool. So those are the really fun ideas, perhaps not as useful, but I've also seen folks doing some really
[00:20:12] creative things in the space of healthcare, and others really wanting to provide new ways to address some of the challenges, right? We have a customer that was looking at being able to predict the likelihood of an individual having heart disease so that we could provide preventative care.
[00:20:38] That is, I guess, a novel way to look at AI, but those are the kinds of use cases that are becoming much more practical, and we can see ways of curing diseases, or things that we have all become accustomed to living with or perhaps dying from, that perhaps we won't have to in the future. And that's one of the areas that really excites me about what is possible and what might come in the future with the technology.
[00:21:05] And obviously we're recording this at the last big tech event of the year, so we've got one eye on 2026. As you look ahead to the next chapter, what gives you confidence that responsible AI can be both a safeguard and a catalyst for creativity, inclusion, and long-term economic opportunity? What excites you about 2026 and beyond? What excites me? I thought you were going down the path of asking me to predict something. And I
[00:21:31] always say, how can we predict? We never know what new shifts and changes the technology is going to take. But one of the things I think is exciting is, in part, related to what I shared with you about what's happening with generative AI and how that's accelerated our focus on responsible AI. I think that more companies are looking for that. We are
[00:21:55] releasing more services around those capabilities, whether it's guardrails or model evaluation, so that we can help people unpack those areas of risk and address them. I think about the possibility that we actually see the integration of responsible AI with AI, so that there's no distinction, right? It's just the way in which we build, and everybody
[00:22:20] has a better understanding of it being our shared responsibility to deliver technology that works well. Of course, that's going to achieve your objective from a business perspective, but it also works well for all of the stakeholders, all of our customers, and hopefully for all of society.
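For anyone wondering what "guardrails" look like in practice, here is a minimal sketch of Amazon Bedrock's ApplyGuardrail API via boto3. It assumes a guardrail has already been created in your account; the identifier below is hypothetical:

```python
import boto3

runtime = boto3.client("bedrock-runtime")

# Hypothetical guardrail ID and version; use one created in your own account.
GUARDRAIL_ID = "my-guardrail-id"
GUARDRAIL_VERSION = "1"

def passes_guardrail(user_text: str) -> bool:
    """Check user input against the guardrail before it reaches any model."""
    result = runtime.apply_guardrail(
        guardrailIdentifier=GUARDRAIL_ID,
        guardrailVersion=GUARDRAIL_VERSION,
        source="INPUT",  # run again with source="OUTPUT" on model replies
        content=[{"text": {"text": user_text}}],
    )
    if result["action"] == "GUARDRAIL_INTERVENED":
        # The guardrail returns its configured blocked-content message.
        print("Guardrail intervened:", result["outputs"][0]["text"])
        return False
    return True

print(passes_guardrail("How do I reset my account password?"))
```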
[00:22:48] And finally, when you leave the conference, you've had so many conversations, so many interviews. When you're on that plane ride home and finally just take a moment to sit there and think about everything you've seen and heard, what are you going to be reflecting on most from this year's event? It's probably the conversations that I've had with startups. We had an event yesterday. It's one of the things that I get to do on the team: from a policy perspective, bringing the voice and perspective of our startups and some of our smaller businesses into policy
[00:23:11] conversations. And we had a session where we were unpacking the opportunity to partner with those companies, to bring in their voice and perspective, because we believe that policymakers should hear from them as well. There was a group from India; we actually had a number of small startups that came, founders that were in the room, just talking about
[00:23:38] some of their innovation and the opportunity, but they were really interested in, and appreciative of, AWS thinking about and trying to figure out ways to bring their voice to the table. So when you talked about not having access, or how we create opportunities for access, that's another example. I can be a part of that, but we also are investing in ways,
[00:24:04] at a company level, where we are actually creating access and opportunity, not just for businesses in India; we had some from other areas and regions of APJ, and in the UK, a really great startup that I spoke with there, as well as right here in the US. And that's sort of core to some of the reason why I have a commitment to responsible AI:
[00:24:29] the idea that we can use the technology in ways that, yes, help drive business, and that we could see monetary benefit from, but where we also can really see some good and create opportunity for others. So I think that's what I'll reflect on. Wow. I think that is an incredibly powerful moment to end on for everybody listening. I'll include links to the 60-page document that you mentioned so people can check that out, plus your social channels and some of your work so people can get involved there as well. But more than anything, just thank you for joining me.
[00:24:58] Thank you so much for having me. I think speaking with my guest today offered a timely reminder that responsible AI is less about slowing innovation and more about shaping it so people can trust the outcomes. And her perspective shows just how far the field has come, from early conversations about machine learning risks to today's structured frameworks, model evaluations, and indeed education programs, all equipping
[00:25:24] people at every stage of their career. And the part that stayed with me most was her belief that responsible AI only becomes meaningful when it is woven into everyday decisions rather than treated as a separate concern. And yeah, culture matters, leadership matters, and understanding how to question a system is equally as important as knowing how to build one.
[00:25:52] And that shift can shape not only how organizations adopt AI, but how future workers can prepare for a world where judgment, problem solving, and clarity will be equally as valuable as technical skill. So if you've listened to this conversation and found yourself thinking about your own approach to AI, I'd love to hear your thoughts on anything you took away from our conversation today.
[00:26:17] What part of responsible AI feels most important to you right now? And where do you think the biggest gaps still remain? As always, send me an audio message over at techtalksnetwork.com. Equally, you can email me at techblogwriter@outlook.com, or send me a DM on LinkedIn, X or Instagram, just at Neil C. Hughes. But that is it for today. So thank you for listening as always. I'll speak with you again tomorrow.
[00:26:45] But keep those messages coming in and I'll meet you here, same time, same place tomorrow. Bye for now.

