What does it take to truly harness the power of AI in a way that augments rather than replaces human capabilities? In this episode of the Tech Talks Daily Podcast, I sit down with Ash Gawthorp, Chief Academy Officer at Ten10, to explore the critical intersection of AI and human-centric innovation. With a backdrop of PwC's 2024 Global CEO Survey revealing that 69% of CEOs anticipate the need for their workforce to develop new skills due to AI, Ash sheds light on how Ten10 is pioneering comprehensive training programs that equip employees with both technical expertise and essential soft skills.
Ash delves into how Ten10's training initiatives demystify AI, emphasizing its mathematical foundations rather than presenting it as a "magic" solution. By providing hands-on experiences that foster confidence and understanding, Ten10 empowers employees to evaluate, pilot, and implement AI tools securely and effectively. Ash also discusses the importance of holistic development, blending technical proficiency with communication, collaboration, and adaptability, ensuring that workers are prepared to navigate the evolving landscape of AI with confidence.
How can organizations foster a culture of continuous learning and ensure their teams are ready to drive meaningful innovation with AI? Join us as we explore these themes and more. Tune in, and don't forget to share your thoughts—how do you see AI shaping the future of work in your organization?
[00:00:01] [SPEAKER_00]: Are you curious about the future of working in an AI-driven world?
[00:00:07] [SPEAKER_00]: Well, according to PwC's 2024 Global CEO Survey, 69% of CEOs anticipate that AI will
[00:00:15] [SPEAKER_00]: require most of their workforce to develop new skills.
[00:00:19] [SPEAKER_00]: But what does this mean for both the employees and the organisations alike?
[00:00:24] [SPEAKER_00]: Today I'm joined by Ash Gawthorp, Chief Academy Officer at a company called Ten10, and he's
[00:00:31] [SPEAKER_00]: going to be sharing his insights on how AI can be harnessed to drive human-centric innovation.
[00:00:38] [SPEAKER_00]: And I want to explore how AI can augment human capabilities, the importance of comprehensive
[00:00:43] [SPEAKER_00]: training programs ensuring nobody gets left behind and also why demystifying AI is crucial
[00:00:50] [SPEAKER_00]: for organisational success.
[00:00:52] [SPEAKER_00]: Yes.
[00:00:53] [SPEAKER_00]: So, big question.
[00:00:54] [SPEAKER_00]: How can we prepare our workforce to thrive in this evolving landscape, and bring people in
[00:01:00] [SPEAKER_00]: from a variety of backgrounds into the tech industry?
[00:01:03] [SPEAKER_00]: These are just a few of the things we're going to discuss today.
[00:01:06] [SPEAKER_00]: Reaching listeners in 165 countries every day is testament to the unwavering support
[00:01:12] [SPEAKER_00]: of you, my listeners, and our sponsors, without whom this podcast simply wouldn't be possible.
[00:01:18] [SPEAKER_00]: And it also gives me a chance to talk about the fact that legacy DRM has failed to securely
[00:01:23] [SPEAKER_00]: enable external collaboration, especially on sensitive files, and how every organisation faces
[00:01:29] [SPEAKER_00]: this risk-trust contradiction, where they can share content with untrusted third parties
[00:01:34] [SPEAKER_00]: yet are expected to protect that data.
[00:01:37] [SPEAKER_00]: So it's time for something more modern.
[00:01:39] [SPEAKER_00]: A DRM solution that solves that dilemma without compromising security or productivity.
[00:01:44] [SPEAKER_00]: And you can do all that with a company called Kiteworks, which will enable you to say goodbye
[00:01:49] [SPEAKER_00]: to deployment headaches, file transfer worries, collaboration barriers and productivity constraints.
[00:01:55] [SPEAKER_00]: So you can experience a more modern way to collaborate on sensitive content without sacrificing control
[00:02:01] [SPEAKER_00]: or security.
[00:02:02] [SPEAKER_00]: Please visit kiteworks.com to get started today.
[00:02:05] [SPEAKER_00]: That's kiteworks.com to get started today.
[00:02:08] [SPEAKER_00]: Now is the moment you've really been waiting for.
[00:02:11] [SPEAKER_00]: It's time to get today's guest on.
[00:02:13] [SPEAKER_00]: So buckle up and hold on tight as I beam your ears all the way to York here in the UK
[00:02:19] [SPEAKER_00]: where Ash is waiting to join me.
[00:02:24] [SPEAKER_00]: So a massive warm welcome to the show, Ash.
[00:02:27] [SPEAKER_00]: Can you tell everyone listening a little bit about who you are and what you do?
[00:02:31] [SPEAKER_01]: Hi Neil.
[00:02:32] [SPEAKER_01]: Well thanks very much.
[00:02:32] [SPEAKER_01]: It's an absolute pleasure to be invited along.
[00:02:34] [SPEAKER_01]: I'm Ash.
[00:02:35] [SPEAKER_01]: I'm the Chief Academy Officer at Ten10.
[00:02:38] [SPEAKER_01]: Within Ten10, I work within our Ten10 Academy essentially, which is all around bringing in
[00:02:44] [SPEAKER_01]: technologists of the future: attracting them, selecting them, training them, supporting them,
[00:02:49] [SPEAKER_01]: steering them towards the roles which they're going to be best at.
[00:02:52] [SPEAKER_01]: But doing that without necessarily having the technical experience or what you might
[00:02:56] [SPEAKER_01]: consider to be a traditional technical background essentially.
[00:03:00] [SPEAKER_01]: And that's pretty much what I do most of the time today.
[00:03:03] [SPEAKER_01]: We've been running it now for 11 years.
[00:03:05] [SPEAKER_01]: So it's fair to say that over the years we've
[00:03:07] [SPEAKER_01]: changed it quite a lot from what it was when we started.
[00:03:11] [SPEAKER_00]: And that's one of the reasons I invited you on the podcast today because I think there's so much
[00:03:15] [SPEAKER_00]: hype around technology, around AI, cybersecurity and so many different fields.
[00:03:20] [SPEAKER_00]: And there's a real shortage of people in these industries and a lot of people from a variety
[00:03:25] [SPEAKER_00]: of different backgrounds almost rule themselves out saying, I'm not techie enough or I'm not
[00:03:30] [SPEAKER_00]: this, I don't fit that.
[00:03:31] [SPEAKER_00]: But the reality is, it's the human-centric skills, the human skills, that are really
[00:03:36] [SPEAKER_00]: needed in this industry, and also diverse thinking.
[00:03:39] [SPEAKER_00]: If we're going to solve complex problems, we need people thinking differently and from
[00:03:43] [SPEAKER_00]: diverse backgrounds to solve those problems.
[00:03:45] [SPEAKER_00]: So can you begin by setting the scene for our conversation today by just explaining
[00:03:50] [SPEAKER_00]: the concept of a more human centric innovation in the context of AI and how it can augment
[00:03:56] [SPEAKER_00]: rather than replace these human capabilities which are our biggest strengths, right?
[00:04:01] [SPEAKER_01]: Yeah, yeah, sure.
[00:04:01] [SPEAKER_01]: So I think human centric innovation kind of boils down to a few key points essentially.
[00:04:07] [SPEAKER_01]: Firstly, I think one of the most important things is around transparency and accountability.
[00:04:13] [SPEAKER_01]: So what that means to me is that when organizations create and use AIs, they need
[00:04:18] [SPEAKER_01]: to be very clear on when the AI is being used, how it works, who's responsible for it.
[00:04:24] [SPEAKER_01]: So ultimately comes down to trust.
[00:04:26] [SPEAKER_01]: I mean, there's a great example I can think of where legislation was passed in the state
[00:04:29] [SPEAKER_01]: of New York, where AI tools were being used a great deal by recruitment firms.
[00:04:35] [SPEAKER_01]: And so essentially they were using AI tools to screen people's CVs and resumes,
[00:04:41] [SPEAKER_01]: but also going away to look at their social media posts on the internet.
[00:04:45] [SPEAKER_01]: And off the back of that essentially determine whether individuals would move forward to the next
[00:04:49] [SPEAKER_01]: round. Now obviously that's quite disturbing because, one, they weren't clear that that's
[00:04:55] [SPEAKER_01]: what they were actually doing and what these tools were doing.
[00:04:57] [SPEAKER_01]: Secondly, the AI vendors themselves often aren't very transparent around exactly how that tool works,
[00:05:04] [SPEAKER_01]: what data it uses, how it comes up with those outcomes.
[00:05:08] [SPEAKER_01]: And so off the back of that, they ended up in a situation where they're actually legislating
[00:05:12] [SPEAKER_01]: that now, by law, if an individual is putting their resume forward for a role,
[00:05:19] [SPEAKER_01]: the recruiter has to say whether they're using an AI tool, give a description of it,
[00:05:23] [SPEAKER_01]: a description of how it works. And crucially, the individual has the right to say,
[00:05:30] [SPEAKER_01]: I don't want my CV to be screened using an AI tool, I would like it to be screened
[00:05:35] [SPEAKER_01]: by a human being, thank you very much. Because, as you mentioned there,
[00:05:39] [SPEAKER_01]: you've got a lot of historically marginalized groups, and other biases that can creep in,
[00:05:43] [SPEAKER_01]: and to have human beings removed from what are often very important decisions in people's lives
[00:05:50] [SPEAKER_01]: is quite troubling. I think the other point is around how data is
[00:05:57] [SPEAKER_01]: treated with regard to security and privacy. You need to understand how that data is being used and
[00:06:02] [SPEAKER_01]: for what purpose. And I think as well, from a security point of view,
[00:06:07] [SPEAKER_01]: if it's just limited to something that you create that's quite boxed, let's say
[00:06:12] [SPEAKER_01]: using ChatGPT to generate a paragraph of text, for example, as long as the text reads fine,
[00:06:19] [SPEAKER_01]: that's okay, or if it's generating an image, for example. But if you're using these tools to create
[00:06:24] [SPEAKER_01]: code, for example, there are concerns around whether that could contain any sort
[00:06:30] [SPEAKER_01]: of malicious code which could have some sort of exploit in there, from a vulnerability point of
[00:06:35] [SPEAKER_01]: view. Or even, potentially, if developers within organizations are uploading source code into
[00:06:41] [SPEAKER_01]: the cloud and getting a generative AI tool to rewrite it, for example, or refactor it,
[00:06:46] [SPEAKER_01]: your IP and source code then become the training data of tomorrow's model. And once you've put it
[00:06:52] [SPEAKER_01]: up there, you can't get it back. So there are IP considerations and security considerations.
[00:06:57] [SPEAKER_01]: I think maybe the last point, and I think this is by far the hardest one,
[00:07:01] [SPEAKER_01]: is sort of where it comes from and its purpose, really. And this is quite an aspirational
[00:07:06] [SPEAKER_01]: aim, I guess. But if it's done with people, for people, developing AI apps for the good and
[00:07:13] [SPEAKER_01]: benefit of a community, understood by, created by, deployed and managed by people,
[00:07:20] [SPEAKER_01]: then I think that's a very, very important point. Sadly, though, if we look at humanity as a
[00:07:28] [SPEAKER_01]: whole and look back on what we've done in history, I think it's fair to say
[00:07:32] [SPEAKER_01]: that we don't have a great track record of making human-centric choices for the right reasons,
[00:07:39] [SPEAKER_01]: essentially. I think there is a risk of it being used to maximize short-term profit, for example,
[00:07:47] [SPEAKER_01]: by replacing junior people within an organization who traditionally have joined that organization
[00:07:54] [SPEAKER_01]: and done a lot of the legwork, cut their teeth in that industry through doing
[00:07:58] [SPEAKER_01]: often drudge, admin-type basic legwork; to replace those individuals with AI and then
[00:08:05] [SPEAKER_01]: have a management tier, an experience tier that's able to leverage that AI to be able to achieve
[00:08:11] [SPEAKER_01]: a similar level of effectiveness to having a team of juniors beneath them. Yeah, I think at that
[00:08:16] [SPEAKER_01]: point we end up then sacrificing the future of industries for short-term profit essentially,
[00:08:22] [SPEAKER_01]: which is clearly the wrong thing to do. But whether that will actually come
[00:08:27] [SPEAKER_01]: to pass or not, I think largely depends on the individual organization, and also
[00:08:33] [SPEAKER_01]: on whether it can be legislated against, as in the case of New York.
[00:08:37] [SPEAKER_00]: It's such an important point you raised there. I've been fortunate to go to a lot of tech
[00:08:41] [SPEAKER_00]: conferences around the world, and predictably at every single one of them there's a vendor on stage
[00:08:45] [SPEAKER_00]: saying AI is a co-pilot, saying all the right things: it might replace roles, but it won't
[00:08:51] [SPEAKER_00]: replace people, and employees will be liberated, freed from doing those repetitive, mundane tasks
[00:08:56] [SPEAKER_00]: and do more meaningful work. But of course that only works if employers don't leave staff behind
[00:09:02] [SPEAKER_00]: and invest in re-skilling rather than going for that short-term profit as you just mentioned.
[00:09:07] [SPEAKER_00]: And the PwC Global CEO Survey this year reported that 69% of global
[00:09:15] [SPEAKER_00]: CEOs anticipate the need for their workforce to develop new skills due to AI. So how is
[00:09:21] [SPEAKER_00]: Ten10 addressing this need with your training programs, and what trends are you
[00:09:27] [SPEAKER_00]: seeing around this?
[00:09:27] [SPEAKER_01]: Yeah, yeah, sure. So when we first started putting this together and
[00:09:33] [SPEAKER_01]: delivering it, what we realized quite quickly is that people
[00:09:37] [SPEAKER_01]: talk about AI in very, very broad terms, which encompass a very broad
[00:09:43] [SPEAKER_01]: range of disciplines but also a very broad range of skill levels essentially. And after doing a
[00:09:49] [SPEAKER_01]: lot of head-scratching, what we came up with is essentially to group this
[00:09:54] [SPEAKER_01]: into three tiers, if you like, in terms of the skill sets, what they're used for, and how
[00:10:01] [SPEAKER_01]: much experience and expertise in different disciplines people need to do them. So at the
[00:10:06] [SPEAKER_01]: very bottom we have this kind of level one essentially, this sort of AI awareness, which
[00:10:10] [SPEAKER_01]: is really around what AI can do, the restrictions of it, the considerations around data and
[00:10:18] [SPEAKER_01]: what happens to that data, and really what it can actually do. Now for me this isn't so much,
[00:10:27] [SPEAKER_01]: we do teach this to everybody in our business actually but for me this kind of thing needs
[00:10:31] [SPEAKER_01]: to go much wider than that, Neil. It needs to be at some sort of government level essentially,
[00:10:36] [SPEAKER_01]: because the kind of things that we're talking about can have a huge impact
[00:10:42] [SPEAKER_01]: at a very high, sort of population, level. There are certain sacred cows that we've
[00:10:48] [SPEAKER_01]: taken for granted for decades that just are no longer true essentially. So for example,
[00:10:54] [SPEAKER_01]: if we take something like the judicial system: for a long time, if you had video evidence of
[00:10:59] [SPEAKER_01]: somebody actually committing a crime, where you'd caught them red-handed, or you had
[00:11:03] [SPEAKER_01]: an audio recording of somebody admitting to that crime, then that was admissible as evidence
[00:11:08] [SPEAKER_01]: and that was kind of as good as it got. The fact that these things can now be created
[00:11:14] [SPEAKER_01]: and can be spoofed could have a huge impact and everybody needs to be aware of that and what
[00:11:19] [SPEAKER_01]: it's capable of. Because otherwise, if people see things on social media, and that can be a huge
[00:11:24] [SPEAKER_01]: problem, if they see somebody saying something, they just assume that that person really said
[00:11:28] [SPEAKER_01]: that. But that's kind of level one, and that's sort of the grounding, if you like. We then
[00:11:32] [SPEAKER_01]: move to the middle tier. This is essentially about augmenting the existing workflows and the roles that
[00:11:39] [SPEAKER_01]: people do with AI tools but also crucially the ability to be able to evaluate and differentiate
[00:11:48] [SPEAKER_01]: between different tools because as you mentioned there, you go to a lot of vendor conferences
[00:11:54] [SPEAKER_01]: and every vendor is talking about their latest AI tool. I think sometimes you've got to take
[00:11:59] [SPEAKER_01]: that with a pinch of salt. I think there are a lot of tools that are badged as AI but actually
[00:12:02] [SPEAKER_01]: aren't under the bonnet; essentially they're more sort of traditional rules-based
[00:12:07] [SPEAKER_01]: software. People also need to be able to understand how to bring it into
[00:12:13] [SPEAKER_01]: their daily work and use it within the constraints of data privacy and security, to make
[00:12:19] [SPEAKER_01]: sure that they don't put themselves, or the organization that they're working for,
[00:12:24] [SPEAKER_01]: in danger essentially. So that's about how to do the different roles that they do
[00:12:30] [SPEAKER_01]: within the organization, and how to use existing and future AI tools to augment that.
[00:12:35] [SPEAKER_01]: We then kind of have a third tier if you like which is all about individuals who are able to
[00:12:41] [SPEAKER_01]: implement new AI tools and models. Now obviously not everybody does this; everybody in the
[00:12:48] [SPEAKER_01]: business does that sort of middle tier, but the top tier is for individuals who throughout the training
[00:12:53] [SPEAKER_01]: have shown an aptitude for those very strong technical aspects and have an interest in
[00:13:00] [SPEAKER_01]: this and want to take it to the next level essentially. So to some extent all three tiers of
[00:13:06] [SPEAKER_01]: those individuals are working in AI but there's a very, very different level of expertise required
[00:13:12] [SPEAKER_01]: and a very, very different use case in terms of what they're actually doing with it.
[00:13:16] [SPEAKER_00]: And if we have anybody listening in the corporate space, maybe with transferable
[00:13:21] [SPEAKER_00]: skills, wanting to reskill and enter a tech career, or someone just starting out at the beginning of their
[00:13:27] [SPEAKER_00]: career, thinking about entering a tech career: what foundational AI knowledge and
[00:13:31] [SPEAKER_00]: specialized pathways does Ten10's training program include? Because it feels like there's
[00:13:36] [SPEAKER_00]: a lot of opportunity here, so I'd love to hear more about your programs and how they cater
[00:13:41] [SPEAKER_00]: to employees at a variety of different skill levels, because we're not just talking about
[00:13:45] [SPEAKER_00]: techies here, are we? We're bringing more people into the industry.
[00:13:47] [SPEAKER_01]: Well, absolutely. There's a great phrase, I don't know who came up with this, but essentially:
[00:13:51] [SPEAKER_01]: not all roles in tech require tech. And in my mind there's a sliding scale between
[00:14:00] [SPEAKER_01]: the business-facing, softer skills, if you like, and
[00:14:05] [SPEAKER_01]: the harder technical skills, and every role exists on a
[00:14:10] [SPEAKER_01]: spectrum essentially, with a variety of different levels of those two things involved.
[00:14:15] [SPEAKER_01]: And you mentioned people cross-training and coming into the industry. I think lots of people,
[00:14:19] [SPEAKER_01]: when they want to get into tech, are very much aware of the things that they don't know, but
[00:14:23] [SPEAKER_01]: they often forget a lot of the important and good skills that they have expertise in, that
[00:14:28] [SPEAKER_01]: they bring with them, particularly on the soft skills side. But essentially our training
[00:14:32] [SPEAKER_01]: starts at the most basic level. We don't mandate any academic qualifications, from a
[00:14:39] [SPEAKER_01]: tech degree or any degree perspective. Individuals have to pass our own assessments,
[00:14:45] [SPEAKER_01]: our own technical assessments, which gauge technical aptitude but don't assume any prior
[00:14:50] [SPEAKER_01]: tech experience. That allows us to determine people's aptitude for it, and really it's
[00:14:55] [SPEAKER_01]: about them demonstrating to us their passion for tech and their desire to get into
[00:15:00] [SPEAKER_01]: the industry, and we also assess a lot of the soft skills, which are so, so important in
[00:15:06] [SPEAKER_01]: modern collaborative tech roles. In terms of the actual training, we don't assume any prior
[00:15:12] [SPEAKER_01]: technology experience; we start from the very, very basics, and that builds over a number of months
[00:15:19] [SPEAKER_01]: in terms of complexity and in terms of the scope of what we do. And I think part of this then is
[00:15:25] [SPEAKER_01]: really around helping people identify what they're good at or what they like, because there
[00:15:29] [SPEAKER_01]: are so many different roles in tech, and people find a home: they're much better at
[00:15:34] [SPEAKER_01]: some roles than others. Turning to some of that foundational AI knowledge, one question that we
[00:15:40] [SPEAKER_01]: kind of struggled with a little bit at the start was just, how do you approach that? How do you sort
[00:15:45] [SPEAKER_01]: of get into that AI training? Now traditionally the approach is that you take a bottom-up
[00:15:52] [SPEAKER_01]: approach: so you start off with individuals who are at school, who enjoy maths; they do a GCSE in
[00:16:00] [SPEAKER_01]: it, they're good at it, they do an A level in it, then maybe go and do a degree in either pure maths
[00:16:05] [SPEAKER_01]: or some sort of aligned subject like engineering or something similar, and they're taught all the
[00:16:11] [SPEAKER_01]: basics, if you like, the fundamental mathematical building blocks of AI and machine learning: probability,
[00:16:16] [SPEAKER_01]: statistics, linear algebra, calculus, all that good stuff. And then that's built on top of until
[00:16:22] [SPEAKER_01]: you get to a point where you're able to understand, at a deep and fundamental level,
[00:16:28] [SPEAKER_01]: how machine learning and AI work. That does work, but it takes a long, long time and you lose a lot
[00:16:34] [SPEAKER_01]: of people along the way and from my own experience I was one of those individuals where I did an engineering
[00:16:39] [SPEAKER_01]: degree, an electronic engineering degree. I always wanted to work in that field, but I hated pure
[00:16:44] [SPEAKER_01]: maths, and so I just had to sort of put the blinkers on essentially, say this is a
[00:16:48] [SPEAKER_01]: means to an end and stick it out. There are a lot of individuals who don't
[00:16:53] [SPEAKER_01]: have that endpoint in mind, that end game, so they just do maths at school, it bores them rigid,
[00:16:59] [SPEAKER_01]: to be honest, and then they're gone, and it's very hard to get them back in again. So we kind of took
[00:17:05] [SPEAKER_01]: an alternative approach, which is more sort of top-down essentially. So you show people what it can
[00:17:11] [SPEAKER_01]: do, you show people the end game, if you like, you show sort of the fun stuff, what it
[00:17:17] [SPEAKER_01]: can achieve, and then you kind of go down that tree to a certain level, and you go down as
[00:17:22] [SPEAKER_01]: far as people need to know. So maybe a good example here is that with just a few
[00:17:28] [SPEAKER_01]: lines of Python code you can build a neural network, a sort of form of generative AI, a
[00:17:35] [SPEAKER_01]: generative adversarial network, that's able to do quite cool things. One example: it's
[00:17:42] [SPEAKER_01]: able to generate hand-drawn numerals between zero and nine. So you start off and you're viewing
[00:17:48] [SPEAKER_01]: the output of it as this thing's being trained, and at the start it just looks like an old,
[00:17:52] [SPEAKER_01]: detuned analog TV, if you know what I mean: it's static essentially, black and white static,
[00:17:57] [SPEAKER_01]: and then as it's trained, over the course of minutes or hours, you start to see
[00:18:05] [SPEAKER_01]: patterns within the output, and then after a while you get to a point where it's actually
[00:18:09] [SPEAKER_01]: producing recognizable digits between zero and nine every time, for example. So that's cool,
[00:18:16] [SPEAKER_01]: that's impressive, it's got a low overhead, people are seeing that running on their laptop,
[00:18:23] [SPEAKER_01]: so it kind of makes them feel like, I did that, that's something that I've done. They
[00:18:27] [SPEAKER_01]: don't understand the intricacies of it, but they're aware of the fact that they were able to do
[00:18:31] [SPEAKER_01]: something that resulted in that, which is much more accessible than having
[00:18:36] [SPEAKER_01]: somebody spend five, six, seven years studying maths to get to a point where they could start.
[00:18:42] [SPEAKER_01]: So I think that grounding of having somebody building something and seeing something is really,
[00:18:48] [SPEAKER_01]: really fundamentally important to building confidence. In terms of, ultimately, what roles
[00:18:54] [SPEAKER_01]: people end up doing: as I mentioned, they'll end up doing the role that they do,
[00:19:00] [SPEAKER_01]: but it is augmented by AI, a little bit now but more and more in the future.
[00:19:05] [SPEAKER_01]: Or we have people who are involved specifically in roles around AI and ML, so whether that's
[00:19:11] [SPEAKER_01]: choosing models, training data, deploying those models, managing them, or working on the data side,
[00:19:17] [SPEAKER_01]: trying to understand the scope of data, how long it can be used for. I mean, when you kind of
[00:19:23] [SPEAKER_01]: look at it, one analogy that springs to mind: you remember there were
[00:19:27] [SPEAKER_01]: adverts a few years ago where they had a picture of a fighter jet with the RAF pilot next
[00:19:32] [SPEAKER_01]: to it, and then the next scene cut back to a team of 50 around the aircraft,
[00:19:37] [SPEAKER_01]: the supporting crew, if you like. That pilot couldn't fly without those
[00:19:42] [SPEAKER_01]: guys, and I think it's very, very much the same in a lot of these industries. You think of one
[00:19:47] [SPEAKER_01]: particular role, but in actual fact it's supported by a lot of people doing a lot of different things.
[00:19:52] [SPEAKER_00]: I remember that advert, it's such a powerful message. And one of the things I love about what
[00:19:57] [SPEAKER_00]: you're doing here is you're helping people, yes, gain those technical skills to get that
[00:20:01] [SPEAKER_00]: advantage, but also the soft skills to succeed. So can you tell me a bit more about that, and
[00:20:07] [SPEAKER_00]: how Ten10 equips employees with both those technical and soft skills so they can succeed in this AI-
[00:20:14] [SPEAKER_00]: driven world? Because it's not just tech, it is a combination of both.
[00:20:20] [SPEAKER_01]: Yeah, so I think what we've found in our experience is that it comes down to practice.
[00:20:26] [SPEAKER_01]: It's not just about knowledge. You can tell somebody something, you can teach them it,
[00:20:31] [SPEAKER_01]: but the practice is actually about them applying it in a situation that's
[00:20:38] [SPEAKER_01]: as realistic as you can possibly make it, allowing people the freedom to make mistakes
[00:20:43] [SPEAKER_01]: in that sort of environment, when it doesn't really count, and in fact
[00:20:48] [SPEAKER_01]: proactively encouraging them to make mistakes and see how quickly they can recover, because that's
[00:20:52] [SPEAKER_01]: one of the best ways to learn essentially. I mean, one thing in the industry
[00:20:58] [SPEAKER_01]: is that there are a lot of organizations that offer
[00:21:02] [SPEAKER_01]: certifications and certificates at particular levels, in particular tools and
[00:21:07] [SPEAKER_01]: technologies, and I think a lot of people think to themselves, well, if I
[00:21:13] [SPEAKER_01]: study for that exam and get that certification, then that will enable me to do the job.
[00:21:18] [SPEAKER_01]: I think it kind of works the other way around if you have the skills and experience
[00:21:23] [SPEAKER_01]: then you could probably sit the exam tomorrow and pass it in many cases
[00:21:27] [SPEAKER_01]: but it doesn't necessarily work the other way around there's a world of difference between
[00:21:31] [SPEAKER_01]: studying for an exam and actually being able to do it in practice so so much of what we try
[00:21:38] [SPEAKER_01]: and do in training is to make it practical and, frankly, to make it not work.
[00:21:42] [SPEAKER_01]: You come across these environments where things are so contrived that they all work
[00:21:48] [SPEAKER_01]: nicely, and you just follow one step after the other and it all works perfectly,
[00:21:52] [SPEAKER_01]: and then people come across a scenario in the real world where they get to step one and can't
[00:21:56] [SPEAKER_01]: go to step two because it fails for some reason. Those problem-solving techniques,
[00:22:01] [SPEAKER_01]: having given people a broad grounding across the different tech disciplines in terms of how
[00:22:06] [SPEAKER_01]: they solve it, are fundamentally important in terms of what they're doing, and it also allows them
[00:22:11] [SPEAKER_01]: to build confidence as well and a lot of this is also around getting them to be able to communicate,
[00:22:18] [SPEAKER_01]: to be able to collaborate. Lots of people don't feel very confident in standing up
[00:22:22] [SPEAKER_01]: in front of a group and presenting their ideas, but when you do that in a group, and you do it in a
[00:22:27] [SPEAKER_01]: group that they're familiar with, a cohort that they get to know, you're able to help
[00:22:32] [SPEAKER_01]: people with that, to bring them out of their shell and to allow them to do that in a
[00:22:36] [SPEAKER_01]: safe environment and build that confidence. Maybe one good example to mention:
[00:22:43] [SPEAKER_01]: so much of this industry as well is around being able to navigate your way through
[00:22:47] [SPEAKER_01]: different teams and different organizations, and they all have
[00:22:50] [SPEAKER_01]: differing biases, different views, different things that frankly keep them awake at
[00:22:56] [SPEAKER_01]: night, and so if you're able to get an insight into that at some level, then that will definitely
[00:23:02] [SPEAKER_01]: help you when you land. So as part of that, we have areas of the training where we role-
[00:23:07] [SPEAKER_01]: play the different roles that people are likely to find when they land on a project
[00:23:12] [SPEAKER_01]: at their tech company: what makes those people tick, their roles and responsibilities, what their
[00:23:18] [SPEAKER_01]: fears are, what their concerns are, and we pretend to be those different individuals.
[00:23:22] [SPEAKER_01]: Yeah, another example: when we're doing the part around the business analysis side and
[00:23:27] [SPEAKER_01]: requirements elicitation, working out what it is that a system needs to do,
[00:23:31] [SPEAKER_01]: we'll have one stakeholder come in and they'll say, well, this system needs to do X, Y, Z, A,
[00:23:37] [SPEAKER_01]: B, C, and, you know, dutifully everything's captured and that's all good. And then
[00:23:41] [SPEAKER_01]: in the afternoon we'll have somebody else drop in, another stakeholder, who'll say,
[00:23:45] [SPEAKER_01]: don't listen to that person from this morning, they're an absolute idiot. What the system
[00:23:49] [SPEAKER_01]: actually needs to do is 1, 2, 3, 4, 5, 6. And so that's practical, that's real world:
[00:23:55] [SPEAKER_01]: how do you kind of understand that and move forward with it? And that's
[00:23:59] [SPEAKER_01]: so much a part of it, really. When we built this, all of us had experience ourselves
[00:24:05] [SPEAKER_01]: in delivering projects in tech, and so we really said to ourselves: what can this
[00:24:12] [SPEAKER_01]: training look like that doesn't teach you the stuff that you learn at university
[00:24:16] [SPEAKER_01]: in the way that you learn it at university? How can we sort of distill that time to its essence
[00:24:20] [SPEAKER_00]: to give people the stuff that they really need? Absolutely love that, and just
[00:24:26] [SPEAKER_00]: to expand on that, I'd love to find out more about how your approach to hands-on training
[00:24:32] [SPEAKER_00]: helps employees evaluate, pilot and test AI tools safely, ensuring that they can implement
[00:24:39] [SPEAKER_00]: them without compromising things like privacy and security, because responsibility
[00:24:45] [SPEAKER_00]: around this tech is something that is incredibly important, and something that's
[00:24:50] [SPEAKER_01]: important to you when teaching your students too. Yeah, absolutely. I mean, I guess it's really
[00:24:55] [SPEAKER_01]: around the methodology and the approach, that's really important here, and part of that is around
[00:25:03] [SPEAKER_01]: understanding how you implement any new technology across a variety of different
[00:25:09] [SPEAKER_01]: variables: across the tool itself, across people, across processes. It's really teaching that
[00:25:15] [SPEAKER_01]: approach of: you start small with something, you build confidence with it, you demonstrate the value
[00:25:22] [SPEAKER_01]: in a low-risk environment, you don't bet the farm on this thing, you demonstrate it early,
[00:25:29] [SPEAKER_01]: you demonstrate it often, you win hearts and minds, and then once you've done that you've
[00:25:34] [SPEAKER_01]: kind of won the right to take it forward, to spin up another
[00:25:39] [SPEAKER_01]: pilot or to make it larger than that. And I think one other
[00:25:45] [SPEAKER_01]: very important point is not to just focus on, and as technologists this is often the case,
[00:25:50] [SPEAKER_01]: what it will do, the functional aspects of it: what value
[00:25:56] [SPEAKER_01]: will it bring, what will it look like, what will it create. Just to the point earlier around the
[00:26:00] [SPEAKER_01]: fighter jet, you're having to implement this within an environment where there
[00:26:06] [SPEAKER_01]: will be teams of people involved whose job it is to look after privacy and security,
[00:26:13] [SPEAKER_01]: and if you rock up there having explained to a number of senior stakeholders
[00:26:18] [SPEAKER_01]: that this is amazing and it's going to be brilliant, and say, right, we just need to put this
[00:26:22] [SPEAKER_01]: live, and we haven't thought about any of the considerations around privacy or data security,
[00:26:27] [SPEAKER_01]: that obviously isn't going to go well. So it's also about getting teams and individuals on board early
[00:26:34] [SPEAKER_01]: on in that process who are responsible for those things, outside of what you might consider
[00:26:38] [SPEAKER_01]: to be ultimately what the tool is doing, and if you do all those things, then essentially you've
[00:26:45] [SPEAKER_01]: got a much better chance of buy-in and of it succeeding, and you severely limit the damage you can do.
[00:26:52] [SPEAKER_00]: 100% with you on that, and something that I know you're also passionate about is emphasizing
[00:26:57] [SPEAKER_00]: the importance of demystifying AI by understanding its basis in mathematical algorithms. So can you
[00:27:04] [SPEAKER_00]: elaborate on that approach, how you're helping employees but also stakeholders
[00:27:08] [SPEAKER_00]: grasp AI's capabilities and limitations? Because it's no longer just the tech department or the
[00:27:14] [SPEAKER_00]: IT department that's dealing with it, it's stakeholders right across the business, isn't it?
[00:27:18] [SPEAKER_01]: Yeah absolutely I think that demystification of it is just really really important and fundamental
[00:27:24] [SPEAKER_01]: because if you don't do that then people just see it as this sort of black box that's
[00:27:30] [SPEAKER_01]: this magic essentially that's able to perform these amazing feats and if you do that then you often
[00:27:37] [SPEAKER_01]: miss its constraints and its limitations. So one example of that, which I remember from
[00:27:46] [SPEAKER_01]: university, and it's a classic example, was a military application:
[00:27:53] [SPEAKER_01]: what they wanted to be able to do was take satellite images of a forest
[00:27:58] [SPEAKER_01]: and determine whether there was a tank in it or whether there wasn't,
[00:28:02] [SPEAKER_01]: and so the question they asked was, can a machine do this? Can we actually have a machine that is
[00:28:08] [SPEAKER_01]: able to look at thousands and thousands of satellite images and, with a strong likelihood, say whether
[00:28:13] [SPEAKER_01]: there's a tank there or not? So they put a team on it, and they gave them images,
[00:28:17] [SPEAKER_01]: and these images were either pictures of a tank in a forest, tagged as a tank,
[00:28:22] [SPEAKER_01]: or pictures of a forest with no tank, tagged as no tank. So they built
[00:28:28] [SPEAKER_01]: the model, they trained it using that data set, and it worked really well. You know, it was able to
[00:28:34] [SPEAKER_01]: say with a very high degree of accuracy, on the examples that they had, whether there was a tank
[00:28:39] [SPEAKER_01]: in the forest or not. They then took it out into the field and they said, okay, right, let's see how
[00:28:44] [SPEAKER_01]: it works, and it was awful, utterly awful. You'd have been just as well off tossing a coin
[00:28:49] [SPEAKER_01]: to say whether there was a tank or not, and obviously that caused
[00:28:53] [SPEAKER_01]: some embarrassment. When they dug into it, what they realized is that in the data set they'd used,
[00:28:58] [SPEAKER_01]: all the images that had the tank in the forest were taken on a sunny day
[00:29:02] [SPEAKER_01]: and all the images that didn't have the tank in the forest were taken on a cloudy day,
[00:29:07] [SPEAKER_01]: so what they'd actually built was an amazing machine which could detect
[00:29:10] [SPEAKER_01]: whether the sun was shining or whether it was cloudy.
[00:29:14] [SPEAKER_01]: So what's really going on here is that the great thing about that type of AI
[00:29:20] [SPEAKER_01]: is that it's able to find its own classification criteria from you just tagging those
[00:29:26] [SPEAKER_01]: images; it then ultimately works out whether it thinks there is something in that image
[00:29:31] [SPEAKER_01]: according to the tag or not. And whilst it finds its own classification criteria, the lesson,
[00:29:36] [SPEAKER_01]: the hard lesson, is that it might not be the one that you want, if it's got no context of
[00:29:40] [SPEAKER_01]: what a tank is or what a tree is or anything like that. And if you assume it's magic and you don't
[00:29:45] [SPEAKER_01]: know how it works, then you're far more likely to just imagine that it's able to work in the way
[00:29:51] [SPEAKER_01]: that our brains do, with context and everything else around that, and it's not.
[00:29:57] [SPEAKER_01]: Then, back to the point earlier, you're grounding this in something, not
[00:30:01] [SPEAKER_01]: necessarily from the ground up in terms of maths, but in an approach that people understand,
[00:30:06] [SPEAKER_01]: because you can demonstrate the concepts to people and have them understand the concepts
[00:30:13] [SPEAKER_01]: intuitively, in language that I don't think requires the level of maths that traditionally
[00:30:17] [SPEAKER_01]: it's taught in. You can show somebody a set of data points and
[00:30:23] [SPEAKER_01]: say, okay, can you see how those are clustered, can you see that grouping there? Yes, okay, I can.
[00:30:28] [SPEAKER_01]: That's very different to having to do the maths from first principles and prove how that works.
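[Editor's note: the tank story above is a textbook spurious-correlation failure, and it can be sketched in a few lines of code. Everything below is a hypothetical illustration invented for this sketch; the two "image features", the numbers, and the crude one-feature learner are assumptions, not the actual military system. Because lighting and labels are perfectly confounded in training, nothing tells the learner that brightness isn't "tank-ness": it scores perfectly on the training set, then collapses to coin-flip accuracy the moment lighting varies independently.]

```python
import random

random.seed(0)

def make_image(tank, sunny):
    # Hypothetical two-feature "image": [average brightness, tank-shaped edge score].
    brightness = 0.8 if sunny else 0.2
    edge_score = 0.9 if tank else 0.1
    jitter = lambda: random.uniform(-0.05, 0.05)
    return [brightness + jitter(), edge_score + jitter()]

# Training set mirrors the story: every tank photo is sunny, every no-tank photo cloudy.
train = [(make_image(tank=True, sunny=True), 1) for _ in range(100)] + \
        [(make_image(tank=False, sunny=False), 0) for _ in range(100)]

def fit_one_feature(data):
    # Deliberately crude learner: pick the single feature whose midpoint threshold
    # best separates the labels. Both features separate the confounded training
    # data perfectly, so the learner cannot tell brightness apart from tank-ness.
    best = None
    for f in range(2):
        pos = [x[f] for x, y in data if y == 1]
        neg = [x[f] for x, y in data if y == 0]
        threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        acc = sum((x[f] > threshold) == (y == 1) for x, y in data) / len(data)
        if best is None or acc > best[2]:
            best = (f, threshold, acc)
    return best

feature, threshold, train_acc = fit_one_feature(train)  # settles on brightness (feature 0)

# "Field" test: lighting is now independent of whether a tank is present.
test = [(make_image(tank=t, sunny=s), int(t))
        for t in (True, False) for s in (True, False) for _ in range(50)]
test_acc = sum((x[feature] > threshold) == (y == 1) for x, y in test) / len(test)
# train_acc is perfect, but test_acc is chance level: the model learned the weather.
```

Running this gives perfect training accuracy and chance-level field accuracy, which is the whole point of the story: the model found a classification criterion, just not the one anyone wanted.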
[00:30:34] [SPEAKER_00]: Those two things are very different. Something else I was reading about before you came on
[00:30:39] [SPEAKER_00]: the podcast, when I was doing a little research on 1010, is how you prioritize things like
[00:30:44] [SPEAKER_00]: holistic development while also emphasizing technical proficiency alongside communication,
[00:30:51] [SPEAKER_00]: collaboration and adaptability skills. It seems an incredibly well-rounded approach that you've
[00:30:56] [SPEAKER_00]: got here, absolutely love it. Can you tell me a bit more about how you do that at 1010?
[00:31:00] [SPEAKER_01]: Yeah, it's a combination of things really. It's training initially to determine where people's
[00:31:07] [SPEAKER_01]: aptitudes lie and working out how we can identify any areas that we need to strengthen or
[00:31:12] [SPEAKER_01]: provide more training in, and then it's about really working with the individual, ultimately
[00:31:17] [SPEAKER_01]: aligned to the role that they end up working in, around what sort of training is required,
[00:31:23] [SPEAKER_01]: and that's across the board, so that's hard skills, that's soft skills. Essentially
[00:31:27] [SPEAKER_01]: there are a number of mechanisms to do that, so we have teams within 1010 who are there to
[00:31:32] [SPEAKER_01]: train individuals and to assist behind the scenes. We've also got a more senior consulting
[00:31:39] [SPEAKER_01]: team of several hundred individuals who are able, if people have questions or
[00:31:44] [SPEAKER_01]: queries, to help out and answer those. And then I think it's also a case of
[00:31:50] [SPEAKER_01]: holding people, not accountable necessarily, but what I've found in my experience is that when
[00:31:57] [SPEAKER_01]: you say to people, as we start out on this program, tell us where you think you want to get to, we'll
[00:32:02] [SPEAKER_01]: help steer you towards that goal and give you guidance along the way, but you need to do this
[00:32:08] [SPEAKER_01]: work, you need to do these things. Everybody always starts off with the best of intentions,
[00:32:13] [SPEAKER_01]: and sometimes it's just human nature that those things fall by the wayside. People are busy,
[00:32:17] [SPEAKER_01]: they've got busy lives, they're doing a full-time job as well, and when you ask them to do something
[00:32:21] [SPEAKER_01]: outside of that, it's often the first thing that slips, because frankly nobody's jumping up
[00:32:26] [SPEAKER_01]: and down at them saying you must do this, in the same way as they are saying you need to come to
[00:32:31] [SPEAKER_01]: work, you need to do this for work, you need to do the washing, do the laundry, mow the lawn,
[00:32:35] [SPEAKER_01]: whatever else that looks like. And so it's really about helping them put a framework in place
[00:32:39] [SPEAKER_01]: to make sure that they are doing it, and that's done through review, but regular review.
[00:32:45] [SPEAKER_01]: I've worked with organizations before where you have the dreaded annual performance review,
[00:32:50] [SPEAKER_01]: where in practice often not much happens for 10 months and then suddenly there's this mad
[00:32:55] [SPEAKER_01]: scramble in the last two months to do it. That doesn't help if it comes to 10 months in
[00:32:59] [SPEAKER_01]: and you haven't done the things that you hoped to do; it's much better to check in with people
[00:33:03] [SPEAKER_01]: regularly and often, in an environment which doesn't feel like that sort of adversarial,
[00:33:09] [SPEAKER_01]: well, have you done this, haven't you done that, environment, and to get people to talk you through
[00:33:13] [SPEAKER_01]: it. I think the other thing is that you can set a development plan in terms of what
[00:33:18] [SPEAKER_01]: people want to do, but so often it changes; things come along out of left field and
[00:33:23] [SPEAKER_01]: other things need to change, and I think that's fine as well. It's perfectly okay to recognize that
[00:33:29] [SPEAKER_01]: you thought you were going to be doing this and now you're not; that's just the
[00:33:33] [SPEAKER_01]: nature of it and the way that things change. So that's fine, plans adapt, and as long as you
[00:33:37] [SPEAKER_01]: continue to learn, and you are learning, I think that's the most important thing,
[00:33:43] [SPEAKER_00]: in incremental steps. 100% with you there, and everything is going to continuously change,
[00:33:49] [SPEAKER_00]: we are going to have to continuously adapt. So, looking ahead, how do you see the role of continuous
[00:33:54] [SPEAKER_00]: learning in fostering a culture that drives meaningful innovation within organizations,
[00:34:00] [SPEAKER_00]: particularly in this rapidly evolving AI landscape that's only going to continue this way?
[00:34:04] [SPEAKER_00]: Anything you can share on how you see this evolving in the future?
[00:34:08] [SPEAKER_01]: Yeah, I think, like learning anything, there are a few key themes here. So
[00:34:14] [SPEAKER_01]: firstly, if you want to learn, you do have to create a habit of it. I think that's kind
[00:34:23] [SPEAKER_01]: of fundamentally important, really, that you need to set aside time to do it so it almost just
[00:34:30] [SPEAKER_01]: becomes automatic, and there are ways and means to do this. There are some forms of
[00:34:36] [SPEAKER_01]: learning which you can do whilst you are doing something else, and some forms of learning
[00:34:40] [SPEAKER_01]: that demand your absolute attention and focus, and that's the hardest stuff to fit in, for
[00:34:46] [SPEAKER_01]: sure. But I think as well it's hard to do alone. If you get together with a group of people
[00:34:53] [SPEAKER_01]: where you're able to share your thoughts, share your discoveries and share your failures, frankly,
[00:34:56] [SPEAKER_01]: and make it fun, I think that can help massively with motivation. I think if it were
[00:35:01] [SPEAKER_01]: just you doing it yourself, and you're not really being held to account, nobody's really
[00:35:10] [SPEAKER_01]: ensuring that you do it, that can be a problem, not always, but it can be. And I think
[00:35:16] [SPEAKER_01]: if that happens, then really it's a question of setting a target at a sort of weekly or monthly
[00:35:21] [SPEAKER_01]: level to say, I'm going to do this, but then equally not beating yourself up about it if you find that
[00:35:25] [SPEAKER_01]: one day you don't manage to do it, as long as you pick it back up. The
[00:35:30] [SPEAKER_01]: incremental gains over time are really important. I would also say that there is
[00:35:35] [SPEAKER_01]: a huge amount of content out there available on the web; the challenge is navigating your way
[00:35:41] [SPEAKER_01]: through it and actually working out what's good and what's fit for purpose. I think
[00:35:46] [SPEAKER_01]: that's a real skill. In terms of approaching it, my view has always been that
[00:35:52] [SPEAKER_01]: people can pretty much do anything if they set their mind to it, a growth mindset of
[00:35:56] [SPEAKER_01]: "I can't do it yet", but at the same time you need to be realistic. You need to recognize
[00:36:01] [SPEAKER_01]: that it will take time and you need to invest time in it; you can't expect things to be
[00:36:07] [SPEAKER_01]: instantly amazing very quickly. There's an element of mastery behind anything that you want to do,
[00:36:13] [SPEAKER_01]: and that frankly does involve you having to do the hard yards, the grind, before you can get
[00:36:17] [SPEAKER_01]: to that point of mastery. So an element of tenacity, of actually sticking at it, I think
[00:36:23] [SPEAKER_00]: is fundamentally important as well. 100% with you, and of course what we're talking about here
[00:36:29] [SPEAKER_00]: is this real pressure on every one of us to be in a state of continuous learning; we all feel it, no
[00:36:35] [SPEAKER_00]: matter what background we have. And I'd love to turn the tables here and ask you: where or how do
[00:36:40] [SPEAKER_01]: you self-educate? Yeah, sure. So I think for me there are two sides to this really, there are two types
[00:36:47] [SPEAKER_01]: of learning that I tend to do. There's a part which doesn't really require my full attention,
[00:36:53] [SPEAKER_01]: so that's things such as listening to podcasts, listening to audiobooks, for example. Now this can
[00:37:01] [SPEAKER_01]: be done doubling up on the back of something else, and James Clear, in his book Atomic Habits,
[00:37:07] [SPEAKER_01]: makes a really good point around this: the fact that habits, and learning should be a habit, are
[00:37:12] [SPEAKER_01]: really hard to form and it's really easy for the wheels to come off the wagon and to stop doing
[00:37:18] [SPEAKER_01]: it, so if you can piggyback off the back of an existing habit, that makes life so much easier.
[00:37:22] [SPEAKER_01]: So for example, that type-one learning, not requiring my full attention, I'll do whilst
[00:37:27] [SPEAKER_01]: I'm commuting: if I'm in the car I can listen to audiobooks, if I'm on the train I can actually
[00:37:32] [SPEAKER_01]: view things as well, and obviously if you're walking you can listen to stuff. I guess if you
[00:37:37] [SPEAKER_01]: cycle into work, don't do this, that probably wouldn't be good, but then all the other
[00:37:41] [SPEAKER_01]: activities are there, like mowing the lawn, and it's an opportunity
[00:37:46] [SPEAKER_01]: to be able to do this in the gym as well. And then because you're doing those things already,
[00:37:52] [SPEAKER_01]: you've pre-established that habit and you're able to leverage it to do this.
[00:37:57] [SPEAKER_01]: And the second part, which frankly is much harder, and I'm not perfect at this by any stretch of the
[00:38:03] [SPEAKER_01]: imagination, is the part that requires absolute attention. It requires you in
[00:38:07] [SPEAKER_01]: front of a PC or a machine with complete focus, studying something, making notes, really
[00:38:14] [SPEAKER_01]: thinking hard about it, practicing something. It's harder to fit in, but the only way to do it is
[00:38:20] [SPEAKER_01]: you've just got to make it non-negotiable. You've got to schedule it in and have it as a thing as
[00:38:24] [SPEAKER_01]: important as anything else, because sadly, if you don't do that, it will fall by the wayside, because
[00:38:29] [SPEAKER_01]: there are always the urgent but less important things that will demand your attention,
[00:38:34] [SPEAKER_01]: and this stuff is really important and can make a huge difference in terms of people's
[00:38:38] [SPEAKER_01]: outcomes and where they go in life and what they achieve, but it requires that self-discipline,
[00:38:44] [SPEAKER_01]: because frankly no one's going to chase you down on it. What a beautiful moment to end on. Thank you
[00:38:49] [SPEAKER_00]: so much for sharing your insights today. I'm sure we've left a lot of people listening with a lot
[00:38:54] [SPEAKER_00]: more questions; maybe they want to continue this conversation. So where is the best place for those
[00:38:59] [SPEAKER_00]: people listening to find you or your team online and, you know, find out more about anything
[00:39:04] [SPEAKER_01]: we talked about today? So our website is ten10.com, so that's t-e-n-1-0.com, and relating to some of
[00:39:13] [SPEAKER_01]: the stuff we talked about today, we specifically have an insights section within that. If anybody
[00:39:17] [SPEAKER_01]: wants to contact me directly, I'd absolutely welcome that. I'm there on LinkedIn, down as
[00:39:21] [SPEAKER_01]: Ash, A-S-H, Gawthorp, G-A-W-T-H-O-R-P, with no E on the end. It should be quite easy to find me
[00:39:28] [SPEAKER_01]: from that; there aren't many people with that name, so you're not going to find 50 people with it.
[00:39:32] [SPEAKER_00]: Well, I will add links to everything you mentioned there, just to make it easy for people to find
[00:39:38] [SPEAKER_00]: everything. I loved discussing the importance of developing a human-centric perspective on AI
[00:39:44] [SPEAKER_00]: empowerment today, demystifying AI instead of viewing it as magic, and I just love what
[00:39:50] [SPEAKER_00]: you're doing here in driving this human-centric innovation and how the true power of AI lies
[00:39:56] [SPEAKER_00]: in its ability to augment human capabilities, not replace them. It's such a powerful
[00:40:02] [SPEAKER_00]: message. I think the key takeaway is that by harnessing AI as a tool for empowerment, organizations can
[00:40:09] [SPEAKER_00]: enable their employees to unlock their full potential and drive innovation within their domains,
[00:40:14] [SPEAKER_00]: and again, this is something we don't talk about enough, so thanks so much for shining a
[00:40:18] [SPEAKER_01]: light on this today, Ash. Thanks for having me, it's been an absolute pleasure, thank you. So a big
[00:40:24] [SPEAKER_00]: thank you to Ash for talking about the transformative potential of AI in the workplace
[00:40:30] [SPEAKER_00]: and driving human-centric innovation and also talking about the importance of holistic development
[00:40:36] [SPEAKER_00]: and continuous learning for me I think Ash provided a roadmap to leveraging AI to empower
[00:40:42] [SPEAKER_00]: employees and ultimately foster a culture of innovation but what are your thoughts on
[00:40:48] [SPEAKER_00]: integrating AI into your professional environment how do you see it shaping the future of work
[00:40:54] [SPEAKER_00]: I'd love to hear your perspectives and invite you to join the conversation and share your
[00:40:59] [SPEAKER_00]: insights with me, and you can do that by simply emailing me at techblogwriter@outlook.com,
[00:41:05] [SPEAKER_00]: or on Twitter, LinkedIn and Instagram, just @NeilCHughes. So thanks for tuning in, and until
[00:41:12] [SPEAKER_00]: next time, let's keep exploring the possibilities of technology. Speak with you all in the morning,
[00:41:17] [SPEAKER_00]: bye for now.

