3001: Faculty - Why Every Business Must Become AI-Driven
Tech Talks Daily | August 22, 2024
3001
36:10 | 21.45 MB


In this episode of Tech Talks Daily, I'm joined by Dr. Marc Warner, the visionary founder of Faculty, a company dedicated to deploying safe AI systems that merge human expertise with artificial intelligence to deliver exceptional performance. With a rich background that spans over a decade of working with government agencies and leading brands, Marc is at the forefront of helping organizations harness the power of AI to make better decisions. Before founding Faculty, Marc was a Marie Curie Research Fellow in Physics at Harvard University, and his academic work has been featured in prestigious journals like Nature. Recently, he was one of the few London business leaders selected to attend the AI Safety Summit at Bletchley Park.

In our conversation, Marc makes a grounded and pragmatic case for the regulation of AI, emphasizing the importance of what he terms "mundane" or sensible AI regulation. He argues that while AI is often overhyped in the short term, it represents the most significant technological transformation of our time. Over the next decade, Marc believes that every business will need to evolve into a tech-driven AI business to survive and thrive. Those who lead in AI safety, he suggests, will not only protect their organizations but also set the standard for the industry, while those who remain on the sidelines risk falling behind.

Marc also shares insights into Faculty's innovative AI solutions, which have had a profound impact on various sectors. From enabling large-scale terrorist content moderation, demanded by the UK Prime Minister, to powering NHS pandemic forecasting and optimizing millions of call center interactions, Faculty's AI applications demonstrate the tangible benefits of integrating AI into business strategies. Marc stresses that AI should not be siloed but instead woven into the fabric of an organization, enabling better human decisions and driving measurable business outcomes.

Throughout our discussion, Marc underscores the need for humility in engaging AI experts and the boldness required to overcome organizational barriers. He advocates for aligning AI initiatives with core business strategies rather than pursuing disconnected AI strategies, which often lead to wasted resources and missed opportunities.

Join us as we explore the future of AI with Dr. Marc Warner and discuss how businesses can effectively integrate AI to not only stay competitive but also lead in this new era of technological advancement. How will your business adapt to the AI-driven future, and what steps can you take today to ensure you're on the right path? Tune in to discover Marc's expert perspective on navigating these challenges.

[00:00:03] [SPEAKER_00]: Is your business truly ready for the AI-driven future?

[00:00:09] [SPEAKER_00]: Well today I'm going to be joined by Dr Marc Warner and he is the CEO and co-founder of

[00:00:15] [SPEAKER_00]: a company called Faculty.

[00:00:17] [SPEAKER_00]: And this company is dedicated to deploying safe AI systems that are able to combine human

[00:00:24] [SPEAKER_00]: expertise with the latest cutting edge technology.

[00:00:27] [SPEAKER_00]: And Marc's journey from a physicist specializing in quantum sensing at Harvard to leading one

[00:00:34] [SPEAKER_00]: of the UK's most innovative AI firms has equipped him with unparalleled insights into the transformative

[00:00:42] [SPEAKER_00]: power of AI.

[00:00:44] [SPEAKER_00]: So in our conversation today we're going to be talking about why Marc believes over

[00:00:48] [SPEAKER_00]: the next decade every business will need to embrace AI or risk extinction.

[00:00:54] [SPEAKER_00]: Big words.

[00:00:55] [SPEAKER_00]: And we'll also discuss his pragmatic views on AI regulation arguing for a more balanced

[00:01:02] [SPEAKER_00]: approach that fosters innovation while ensuring safety.

[00:01:05] [SPEAKER_00]: And Marc will also share how leaders who proactively engage with AI safety will not only safeguard

[00:01:11] [SPEAKER_00]: their organization but also position themselves at the forefront of this technological revolution.

[00:01:19] [SPEAKER_00]: So what does it take to integrate AI responsibly and effectively and how can you or your

[00:01:24] [SPEAKER_00]: business align your AI strategy with your core objectives to truly thrive in this emerging arena?

[00:01:33] [SPEAKER_00]: I just want to take a time out to express my gratitude to everyone who supports our mission

[00:01:38] [SPEAKER_00]: of delivering content every day to 140,000 listeners across 165 countries.

[00:01:44] [SPEAKER_00]: I'm grateful for the support that allows me to maintain this Daily Tech podcast.

[00:01:48] [SPEAKER_00]: And it's also an opportunity to talk about the fact that legacy managed file transfer tools are

[00:01:53] [SPEAKER_00]: outdated and lack the security that today's remote workforce often demands.

[00:01:58] [SPEAKER_00]: And companies that continue relying on that outdated technology, they can put their sensitive

[00:02:02] [SPEAKER_00]: data at risk. So if security breaches are keeping you up at night, why not sleep

[00:02:07] [SPEAKER_00]: soundly with the Kiteworks MFT suite, because their software security hardening includes

[00:02:12] [SPEAKER_00]: an ongoing bug bounty program and regular penetration testing.

[00:02:16] [SPEAKER_00]: And with one click appliance updates, staying secure has actually never been easier.

[00:02:21] [SPEAKER_00]: It doesn't have to be complicated.

[00:02:23] [SPEAKER_00]: So while other solutions might leave you vulnerable, Kiteworks offers military grade

[00:02:27] [SPEAKER_00]: protection, so there's no need to compromise on security.

[00:02:31] [SPEAKER_00]: So step into the future of secure managed file transfer with Kiteworks by visiting

[00:02:35] [SPEAKER_00]: Kiteworks.com to get started.

[00:02:38] [SPEAKER_00]: That's right, Kiteworks.com to get started today.

[00:02:41] [SPEAKER_00]: It's time to shift gears somewhat and introduce

[00:02:43] [SPEAKER_00]: the person that you've all been waiting for today.

[00:02:47] [SPEAKER_00]: Well, buckle up and hold on tight as I beam your ears all the way to the UK where

[00:02:52] [SPEAKER_00]: you can join myself and Dr Marc Warner, CEO and co-founder of Faculty,

[00:02:57] [SPEAKER_00]: where we'll talk about all this and much more.

[00:03:01] [SPEAKER_00]: So a massive warm welcome to the show, Marc.

[00:03:04] [SPEAKER_00]: Can you tell everyone listening a little bit about who you are and what you do?

[00:03:08] [SPEAKER_01]: Well, thank you very much for inviting me on.

[00:03:10] [SPEAKER_01]: So I'm Marc Warner.

[00:03:12] [SPEAKER_01]: I'm the CEO of a company called Faculty.

[00:03:16] [SPEAKER_01]: We're an applied AI company, which means we

[00:03:19] [SPEAKER_01]: take this very powerful technology that is currently being developed and put it

[00:03:25] [SPEAKER_01]: to use in real world situations.

[00:03:28] [SPEAKER_01]: So trying to make hospitals function better, or companies' products cheaper,

[00:03:34] [SPEAKER_01]: or government services more efficient for citizens, those kinds of things.

[00:03:40] [SPEAKER_00]: Well, it's a pleasure to have you join me on the podcast today.

[00:03:43] [SPEAKER_00]: You're right in the heart of this space and there's so much hype around AI at the moment.

[00:03:48] [SPEAKER_00]: I was reading a great Bill Gates quote a few days ago and he said,

[00:03:51] [SPEAKER_00]: we overestimate what we can achieve in one year, but underestimate what we can

[00:03:55] [SPEAKER_00]: achieve in 10. So I'm curious from everything that you're seeing,

[00:03:58] [SPEAKER_00]: how do you envision the role of AI evolving throughout businesses over

[00:04:02] [SPEAKER_00]: the next decade? And are there any particular steps that organisations

[00:04:06] [SPEAKER_00]: should actually be preparing for and taking right now to prepare for this transformation?

[00:04:11] [SPEAKER_00]: Because there's so much more to come than just gen AI, isn't there?

[00:04:15] [SPEAKER_01]: Heartily agreed.

[00:04:16] [SPEAKER_01]: So I do think that in some ways AI is over hyped in the short run.

[00:04:23] [SPEAKER_00]: Yeah.

[00:04:23] [SPEAKER_01]: And we are going to see a bunch of companies putting money into AI projects,

[00:04:31] [SPEAKER_01]: relying on people who were experts in crypto a year ago and experts in VR two years

[00:04:37] [SPEAKER_01]: before that, and those aren't going to go that well.

[00:04:41] [SPEAKER_01]: And so I do think there'll be a bit of a backlash against that.

[00:04:45] [SPEAKER_01]: However, all of this is on the sort of foundations of the most important

[00:04:52] [SPEAKER_01]: technological transformation of our era.

[00:04:57] [SPEAKER_01]: It'll be like the Internet.

[00:04:58] [SPEAKER_01]: There'll be, you know, there was a dot-com bubble, but 10, 20 years later,

[00:05:03] [SPEAKER_01]: the world's most valuable companies are all sort of Internet or Internet adjacent

[00:05:10] [SPEAKER_01]: companies. So what's actually the deep reality that underlies all of this?

[00:05:19] [SPEAKER_01]: Well, for 20 years, enterprises focused on digital transformation.

[00:05:24] [SPEAKER_01]: Some did this very successfully.

[00:05:27] [SPEAKER_01]: You know, Netflix went from post-DVDs to being a world leading streamer,

[00:05:31] [SPEAKER_01]: but lots failed.

[00:05:33] [SPEAKER_01]: Only about 10, 12 percent of companies sustained their digital transformation

[00:05:38] [SPEAKER_01]: goals for three plus years.

[00:05:42] [SPEAKER_01]: That happened for a lot of reasons.

[00:05:46] [SPEAKER_01]: Primarily that

[00:05:50] [SPEAKER_01]: transformation is really a sort of, what I call, a socio-technological problem.

[00:05:55] [SPEAKER_01]: It's not just a technology problem.

[00:05:56] [SPEAKER_01]: There's a real human aspect to it as well.

[00:06:01] [SPEAKER_01]: And now the AI era has begun. Over the next decade, every business must become

[00:06:06] [SPEAKER_01]: an AI business or go bust.

[00:06:09] [SPEAKER_01]: And so organizations are going to be faced with this sort of

[00:06:12] [SPEAKER_01]: socio technological problem of how do we build the new technology into our

[00:06:21] [SPEAKER_01]: core business while managing its risks and how do we create the capability

[00:06:26] [SPEAKER_01]: inside our organization that enables us to actually put it to use.

[00:06:33] [SPEAKER_00]: So many great points there, and even just looking at the Fortune 500,

[00:06:38] [SPEAKER_00]: I think there's something like 52 percent of the Fortune 500 have gone

[00:06:42] [SPEAKER_00]: since the year 2000, which is quite a sobering thought as well.

[00:06:46] [SPEAKER_00]: And one of the things I love doing on this podcast is trying to demystify

[00:06:51] [SPEAKER_00]: some of the phrases that we might hear now and again.

[00:06:54] [SPEAKER_00]: And one that I keep hearing about is the concept of mundane AI regulation.

[00:06:59] [SPEAKER_00]: Can you tell me a bit more about that and why you think it's crucial for

[00:07:02] [SPEAKER_00]: ensuring the safe deployment of AI systems because again, huge talking point right now.

[00:07:09] [SPEAKER_01]: Absolutely.

[00:07:10] [SPEAKER_01]: So well, firstly, AI is a very broad term.

[00:07:15] [SPEAKER_01]: Nobody says biology is unequivocally good or bad.

[00:07:21] [SPEAKER_01]: And you don't talk about regulating biology.

[00:07:25] [SPEAKER_01]: You talk about regulating drugs or

[00:07:28] [SPEAKER_01]: crops or genetic engineering or IVF.

[00:07:33] [SPEAKER_01]: All of those sit under this broad heading of biology, but

[00:07:37] [SPEAKER_01]: to try and talk about them all at once is incredibly complicated.

[00:07:43] [SPEAKER_01]: I think it's useful in the domain of AI to kind of create this slightly

[00:07:48] [SPEAKER_01]: imaginary spectrum from narrow AI to general AI.

[00:07:54] [SPEAKER_01]: And then on the narrow end, you've got AI models that have been used in production

[00:08:00] [SPEAKER_01]: for decades, things like handwriting recognition

[00:08:04] [SPEAKER_01]: that are specifically geared to do a specific problem and execute on that very effectively.

[00:08:13] [SPEAKER_01]: And then on the other end, the general end,

[00:08:16] [SPEAKER_01]: that's where the AI is supposed to be a sort of universal problem solver like humans.

[00:08:23] [SPEAKER_01]: Now, we don't currently understand how to build this and we don't currently

[00:08:26] [SPEAKER_01]: understand how to control it.

[00:08:28] [SPEAKER_01]: So at that end of the spectrum, I think we should be very cautious.

[00:08:33] [SPEAKER_01]: But at the mundane end of the spectrum, at the narrow end,

[00:08:38] [SPEAKER_01]: we actually should make sure we regulate effectively, but we should do it on the

[00:08:43] [SPEAKER_01]: basis of the domain that it's being used in.

[00:08:48] [SPEAKER_01]: You know, I think the risks of whether some small

[00:08:54] [SPEAKER_01]: fashion company puts a red t-shirt or a blue t-shirt at the top of their website

[00:08:59] [SPEAKER_01]: are relatively low, and the

[00:09:04] [SPEAKER_01]: kind of regulatory burden that they should feel should be very small.

[00:09:08] [SPEAKER_01]: On the other hand, if an AI model is making sort of cancer diagnoses

[00:09:13] [SPEAKER_01]: or something that really has life or death impact, then

[00:09:16] [SPEAKER_01]: you know, there should be serious thought that goes into how to make that,

[00:09:22] [SPEAKER_01]: how to make that safe.

[00:09:25] [SPEAKER_01]: The other angle on all of this is that for much of Western Europe,

[00:09:30] [SPEAKER_01]: our productivity has been flatlining for about 15 years, roughly since the financial crash.

[00:09:36] [SPEAKER_01]: And so we do need to figure out ways to grow our economies again.

[00:09:41] [SPEAKER_01]: And I think narrow AI offers a really important possibility for doing that.

[00:09:48] [SPEAKER_01]: And so I'd like to see us go fast on narrow AI with sensible domain specific

[00:09:54] [SPEAKER_01]: regulation and be cautious on general AI with more thought going in.

[00:10:04] [SPEAKER_00]: And I'm curious, I've got to ask you this question because you flirted with AGI

[00:10:10] [SPEAKER_00]: a little there. Google's DeepMind co-founder, I think it was Shane Legg,

[00:10:14] [SPEAKER_00]: said in an interview that there's a 50% chance AGI can be achieved by 2028.

[00:10:19] [SPEAKER_00]: And then I think Elon Musk predicted 2029.

[00:10:22] [SPEAKER_00]: And all kinds of startup founders are having bets for or against Musk and his predictions.

[00:10:28] [SPEAKER_00]: And it's even been brought forward in some circles.

[00:10:31] [SPEAKER_00]: Realistically, how far away are we from this, do you think?

[00:10:33] [SPEAKER_01]: The truth is nobody knows.

[00:10:36] [SPEAKER_01]: Yes.

[00:10:38] [SPEAKER_01]: So we still, you know, what's fueled this recent excitement is the incredible

[00:10:46] [SPEAKER_01]: progress that these large language models and large multimodal models have made.

[00:10:51] [SPEAKER_01]: They've turned out to be more powerful than any of us really anticipated.

[00:10:57] [SPEAKER_01]: And so it's only natural to ask how far will they go?

[00:11:02] [SPEAKER_01]: And.

[00:11:03] [SPEAKER_01]: You know, science and the growth of knowledge is one of those really genuinely

[00:11:10] [SPEAKER_01]: unpredictable things.

[00:11:12] [SPEAKER_01]: And so.

[00:11:15] [SPEAKER_01]: Like there is a great deal of debate in the field.

[00:11:19] [SPEAKER_01]: I guess

[00:11:23] [SPEAKER_01]: Shane and people like him who point to the sort of 2028, 2029.

[00:11:28] [SPEAKER_01]: They're basically trying to judge off the back of Moore's law.

[00:11:34] [SPEAKER_01]: And like when we start to be able to build computers that kind of feel like they

[00:11:40] [SPEAKER_01]: have approximately similar computational power to a human brain.

[00:11:44] [SPEAKER_01]: And so that's that's the basis for those kinds of numbers,

[00:11:48] [SPEAKER_01]: which truthfully is as good a basis as any that we have.

[00:11:52] [SPEAKER_01]: Having said that, it can often take us a bit longer than we

[00:11:58] [SPEAKER_01]: anticipate to actually get things to work.

[00:12:01] [SPEAKER_01]: So I think it's probably going to be slightly slower than 2028, 2029,

[00:12:06] [SPEAKER_01]: but probably not 50 years away, maybe like five or 10 years after that.

[00:12:13] [SPEAKER_00]: And AI is such a vast topic.

[00:12:15] [SPEAKER_00]: AGI is a topic on its own for an entire episode of a podcast.

[00:12:19] [SPEAKER_00]: I'm going to further complicate things here, given your background in

[00:12:24] [SPEAKER_00]: quantum sensing and academic research.

[00:12:27] [SPEAKER_00]: How do you see these fields intersecting with AI to drive innovation in the near

[00:12:33] [SPEAKER_00]: future, because we're starting to hear more and more about quantum alongside AI?

[00:12:37] [SPEAKER_00]: I'm just curious how you see all this panning out.

[00:12:40] [SPEAKER_01]: Well, I sort of voted with my feet on this.

[00:12:43] [SPEAKER_01]: So I was a quantum physicist, a research fellow at Harvard most recently.

[00:12:52] [SPEAKER_01]: And looking at the two fields,

[00:12:55] [SPEAKER_01]: I decided that I thought AI was going to be fundamentally more impactful.

[00:13:01] [SPEAKER_01]: And that impact was going to come faster than quantum.

[00:13:06] [SPEAKER_01]: So I actually moved fields about 10 years ago to work in AI.

[00:13:15] [SPEAKER_01]: At the moment, we don't have like great evidence, at least as far as I've

[00:13:24] [SPEAKER_01]: followed the field that we actually are going to get important quantum speedups

[00:13:29] [SPEAKER_01]: for anything AI related, although that remains a possibility.

[00:13:33] [SPEAKER_01]: We definitely can't say that we won't.

[00:13:35] [SPEAKER_01]: But I don't think there's a killer application for quantum computers in AI yet.

[00:13:39] [SPEAKER_00]: And what would you say are some of the most impactful AI solutions faculty has

[00:13:44] [SPEAKER_00]: developed? Because I know you work with so many vast areas from government

[00:13:48] [SPEAKER_00]: agencies to leading brands.

[00:13:51] [SPEAKER_00]: So what have you seen here and what kind of outcomes have they achieved

[00:13:54] [SPEAKER_00]: with working with you?

[00:13:57] [SPEAKER_00]: Because this is where the magic happens, isn't it?

[00:13:58] [SPEAKER_00]: We're solving real problems and making measurable impact.

[00:14:01] [SPEAKER_01]: So I think maybe let's do a couple on the public sector and a couple on the private sector.

[00:14:07] [SPEAKER_01]: One of our early most famous results was that we

[00:14:11] [SPEAKER_01]: built the technology that proved AI terrorist content

[00:14:17] [SPEAKER_01]: moderation could happen at web scale.

[00:14:21] [SPEAKER_01]: So this was under Prime Minister Theresa May

[00:14:23] [SPEAKER_01]: and we demonstrated that AI algorithms could detect

[00:14:29] [SPEAKER_01]: terrorist propaganda content with enough accuracy that you could start to

[00:14:34] [SPEAKER_01]: effectively, automatically moderate

[00:14:38] [SPEAKER_01]: internet video platforms, things like YouTube or whatever.

[00:14:42] [SPEAKER_01]: And off the back of that, the Prime Minister went to the UN and actually

[00:14:45] [SPEAKER_01]: demanded that the tech companies take down terrorist propaganda within two hours.

[00:14:50] [SPEAKER_01]: So that was a huge, huge positive for the world, I think,

[00:14:55] [SPEAKER_01]: and actually moved the needle in helping fight terrorism.

[00:15:01] [SPEAKER_01]: Then perhaps more recently, we built the technology that the NHS

[00:15:07] [SPEAKER_01]: used to understand the national pandemic response in the UK,

[00:15:13] [SPEAKER_01]: which has been credited with saving thousands of lives.

[00:15:17] [SPEAKER_01]: And that was predictions for every hospital across the entire country

[00:15:23] [SPEAKER_01]: for how many COVID cases they were likely to face in the next three weeks so that

[00:15:30] [SPEAKER_01]: they could figure out where to move patients or oxygen or ventilators or all

[00:15:35] [SPEAKER_01]: those kinds of things.

[00:15:37] [SPEAKER_01]: So those two really nice, important outcomes on the public sector.

[00:15:42] [SPEAKER_01]: In the private sector,

[00:15:47] [SPEAKER_01]: we've had equivalently important

[00:15:52] [SPEAKER_01]: commercial outcomes for organisations.

[00:15:54] [SPEAKER_01]: So we built technology that routes

[00:15:58] [SPEAKER_01]: millions, tens of millions of customer calls a year to the optimal call centre

[00:16:04] [SPEAKER_01]: agents so that people get to speak with the call centre agent

[00:16:09] [SPEAKER_01]: that is best for answering their problems.

[00:16:13] [SPEAKER_01]: Because I'm sure you, like me, have had a fair number of experiences of calling

[00:16:18] [SPEAKER_01]: these hotlines, and the ability of an agent to answer

[00:16:25] [SPEAKER_01]: a particular question can be a little bit lacking. But if you can find

[00:16:29] [SPEAKER_01]: exactly the right agent to help you with exactly the right problem,

[00:16:32] [SPEAKER_01]: it flies through. It makes the call shorter, which makes it cheaper for the company.

[00:16:38] [SPEAKER_01]: But most importantly, it makes it

[00:16:40] [SPEAKER_01]: better for the customer.

[00:16:43] [SPEAKER_01]: And another nice example.

[00:16:45] [SPEAKER_01]: So for one of the world's leading scientific publishers, we helped them

[00:16:51] [SPEAKER_01]: help their users to navigate through their scientific content.

[00:16:56] [SPEAKER_01]: So here's the problem I faced when I was a researcher: there are so

[00:17:02] [SPEAKER_01]: many papers out there.

[00:17:05] [SPEAKER_01]: It's hard to know what you should read and it's hard to know what you should

[00:17:10] [SPEAKER_01]: read next if you've just finished reading a paper.

[00:17:12] [SPEAKER_01]: And so we built them some technology

[00:17:17] [SPEAKER_01]: to build these personalized journeys like

[00:17:20] [SPEAKER_01]: personalized learning journeys through their scientific literature.

[00:17:24] [SPEAKER_01]: And that significantly increases the click through rate.

[00:17:27] [SPEAKER_01]: So really shows that users are engaging with the content more

[00:17:30] [SPEAKER_01]: effectively than they were before.

[00:17:33] [SPEAKER_00]: What I love about our conversation today is these are real world examples

[00:17:37] [SPEAKER_00]: that are moving the needle right now, not the future, but today.

[00:17:42] [SPEAKER_00]: Some food for thought for business leaders and for IT leaders.

[00:17:46] [SPEAKER_00]: Their biggest concerns around the pace of this technology are going to be

[00:17:50] [SPEAKER_00]: safety and responsibility and using AI ethically, etc.

[00:17:54] [SPEAKER_00]: So during the recent, I think it was the AI

[00:17:56] [SPEAKER_00]: safety summit at Bletchley Park.

[00:17:59] [SPEAKER_00]: What were the key takeaways and how do you think they will influence things

[00:18:03] [SPEAKER_00]: like the future of policy and regulation moving forward?

[00:18:07] [SPEAKER_00]: Because so important, it's easy to get distracted by the nice new shiny tech.

[00:18:11] [SPEAKER_00]: And we've seen what happens in the past with when Silicon Valley moves fast

[00:18:15] [SPEAKER_00]: and breaks things. So how do you see this going differently this time?

[00:18:20] [SPEAKER_01]: Well, I think probably the most important element was that just

[00:18:26] [SPEAKER_01]: holding the summit is progress.

[00:18:29] [SPEAKER_01]: So imagine if the same energy had been applied to global warming before it became

[00:18:34] [SPEAKER_01]: a serious problem and countries started working to generate

[00:18:39] [SPEAKER_01]: international agreement on how we'd solve it.

[00:18:42] [SPEAKER_01]: We'd be in a much better position today.

[00:18:45] [SPEAKER_01]: So I think that in itself is wonderful.

[00:18:50] [SPEAKER_01]: The sort of key takeaway is there was the Bletchley Declaration,

[00:18:54] [SPEAKER_01]: where countries came together. Actually, companies and countries

[00:18:58] [SPEAKER_01]: were both part of it.

[00:19:01] [SPEAKER_01]: But it was the countries that signed the declaration

[00:19:03] [SPEAKER_01]: to say that they recognized that AI had a lot to offer,

[00:19:09] [SPEAKER_01]: but it was important to do it safely, which is important.

[00:19:12] [SPEAKER_01]: Both China and America signing that is a big deal.

[00:19:16] [SPEAKER_01]: And then there was agreement for some of the Frontier Labs or the Frontier Labs

[00:19:21] [SPEAKER_01]: that were there to hand over the details of their models for inspection

[00:19:26] [SPEAKER_01]: pre-release by the government, in particular by the UK's AI Safety Institute,

[00:19:33] [SPEAKER_01]: which is a wonderful thing and really great progress.

[00:19:38] [SPEAKER_01]: Now, I think that it's important over time that the government starts to set

[00:19:44] [SPEAKER_01]: standards that can be more widely

[00:19:51] [SPEAKER_01]: used to test against, so that everybody knows where they stand rather than just

[00:19:55] [SPEAKER_01]: a small number of Frontier Labs.

[00:19:58] [SPEAKER_01]: But I also think that it's absolutely sensible to start with the people at

[00:20:05] [SPEAKER_01]: the cutting edge of the field and then over time and with more experience,

[00:20:09] [SPEAKER_01]: roll those standards out more broadly.

[00:20:14] [SPEAKER_00]: And you mentioned climate change a few moments ago there.

[00:20:17] [SPEAKER_00]: And I think more recently, AI and sustainability are often mentioned in the same

[00:20:22] [SPEAKER_00]: sentence, but AI also has this well documented energy problem with the amount

[00:20:26] [SPEAKER_00]: it consumes, whether it be water for cooling or just the energy to keep

[00:20:30] [SPEAKER_00]: those data centers going.

[00:20:32] [SPEAKER_00]: How do you see that conflict resolving itself?

[00:20:39] [SPEAKER_01]: I mean, I think it is true that

[00:20:45] [SPEAKER_01]: these large models do require large amounts of electricity.

[00:20:48] [SPEAKER_01]: Interesting.

[00:20:48] [SPEAKER_00]: Yeah.

[00:20:50] [SPEAKER_01]: But we do know how to generate that in a renewable way.

[00:20:55] [SPEAKER_01]: And so

[00:20:57] [SPEAKER_01]: for large organizations, particularly, I think it's important that they take

[00:21:03] [SPEAKER_01]: the environmental consideration seriously.

[00:21:06] [SPEAKER_01]: But given the amount of money that they're spending on

[00:21:12] [SPEAKER_01]: the chips for these systems, which is in the range of 50 to 100 billion

[00:21:18] [SPEAKER_01]: a year for the tech giants,

[00:21:22] [SPEAKER_01]: actually just making sure they run off renewable electricity should be

[00:21:27] [SPEAKER_01]: kind of easy and something they should just do.

[00:21:31] [SPEAKER_01]: Yeah.

[00:21:33] [SPEAKER_00]: Great point.

[00:21:34] [SPEAKER_00]: And as AI does become more integrated into business operations,

[00:21:38] [SPEAKER_00]: are there any other ethical considerations that leaders should be

[00:21:41] [SPEAKER_00]: prioritizing to ensure responsible and fair use of AI technologies?

[00:21:48] [SPEAKER_01]: Yeah.

[00:21:49] [SPEAKER_01]: Well, this is one of those nice cases when

[00:21:55] [SPEAKER_01]: the effectiveness of the solution and the ethics and safety actually go quite

[00:22:02] [SPEAKER_01]: hand in hand.

[00:22:04] [SPEAKER_01]: So,

[00:22:05] [SPEAKER_01]: you know, at the moment,

[00:22:08] [SPEAKER_01]: people talk about

[00:22:12] [SPEAKER_01]: AI replacing jobs.

[00:22:14] [SPEAKER_01]: Now, we've never seen that happen.

[00:22:16] [SPEAKER_01]: AI is not human level.

[00:22:19] [SPEAKER_01]: And so it's not capable of just wholly replacing every aspect of

[00:22:27] [SPEAKER_01]: a job that a person does.

[00:22:31] [SPEAKER_01]: What it can do is take individual tasks

[00:22:35] [SPEAKER_01]: and it can automate that sort of task.

[00:22:40] [SPEAKER_01]: And that basically lets the human divert their attention away from that generally

[00:22:46] [SPEAKER_01]: fairly mundane task to the more interesting things.

[00:22:50] [SPEAKER_01]: Now, the slight problem at the moment

[00:22:55] [SPEAKER_01]: is the way we're building these AI algorithms is just sort of

[00:23:01] [SPEAKER_01]: or the way many people are building these AI algorithms is

[00:23:05] [SPEAKER_01]: actually kind of broken.

[00:23:07] [SPEAKER_01]: They, well, let me give you an analogy.

[00:23:09] [SPEAKER_01]: Imagine you were hiring some people for your team, Neil.

[00:23:15] [SPEAKER_01]: And I said, well, you can hire,

[00:23:18] [SPEAKER_01]: we're going to hire you some people,

[00:23:22] [SPEAKER_01]: but they can't talk to their colleagues.

[00:23:24] [SPEAKER_01]: They can't talk to their boss and you can't tell them about your company

[00:23:29] [SPEAKER_01]: policies.

[00:23:31] [SPEAKER_01]: How well do you think that's going to work out for you?

[00:23:34] [SPEAKER_00]: I think I'll come crashing down very quickly.

[00:23:36] [SPEAKER_01]: Yeah.

[00:23:37] [SPEAKER_01]: And yet at the moment we're building AI models in a way

[00:23:43] [SPEAKER_01]: that is completely disconnected from other AI models.

[00:23:46] [SPEAKER_01]: So each one is kind of locally optimizing in a silo, the equivalent

[00:23:51] [SPEAKER_01]: of not talking to their colleagues.

[00:23:53] [SPEAKER_01]: We're making them so that they're

[00:23:55] [SPEAKER_01]: completely disconnected from business users.

[00:23:58] [SPEAKER_01]: You know, only the sort of ivory tower data scientists can really

[00:24:03] [SPEAKER_01]: interact with these AI models.

[00:24:05] [SPEAKER_01]: The actual business users can't.

[00:24:07] [SPEAKER_01]: So it's effectively not being able to talk to their boss again.

[00:24:10] [SPEAKER_01]: Another terrible thing.

[00:24:13] [SPEAKER_01]: And then we don't have very good governance around the AI model.

[00:24:18] [SPEAKER_01]: So you sort of train it and then you deploy it.

[00:24:21] [SPEAKER_01]: And in most circumstances, people then hope that it works.

[00:24:26] [SPEAKER_01]: Again, like, you know, hopefully if you hired an employee,

[00:24:30] [SPEAKER_01]: you'd tell them what was and wasn't acceptable according to your company policies.

[00:24:34] [SPEAKER_01]: And you'd make sure those were reasonably strictly enforced.

[00:24:37] [SPEAKER_01]: We just don't do that at the moment.

[00:24:41] [SPEAKER_01]: So if business leaders want to prioritize responsible and fair use of AI

[00:24:47] [SPEAKER_01]: technologies, there's a relatively simple answer.

[00:24:51] [SPEAKER_01]: Which is.

[00:24:53] [SPEAKER_01]: Build it into your organizations to empower people to make better decisions.

[00:25:00] [SPEAKER_01]: If you do that and you ensure that, one,

[00:25:05] [SPEAKER_01]: it's connected up to the other parts of the organization so it's not siloed.

[00:25:09] [SPEAKER_01]: Two, it's interactive and business users can actually play with it,

[00:25:14] [SPEAKER_01]: understand it, use it to make decisions.

[00:25:16] [SPEAKER_01]: And three, it's properly governed.

[00:25:20] [SPEAKER_01]: Basically, everything works out fine.

[00:25:23] [SPEAKER_01]: If you don't do those things, it's exactly like hiring those people into your

[00:25:28] [SPEAKER_01]: organization; it will be a fairly catastrophic failure fairly soon.

[00:25:31] [SPEAKER_00]: And I think that's such an important message to get out there because I do see

[00:25:36] [SPEAKER_00]: so many businesses rushing forward, desperate to be a part of that narrative.

[00:25:40] [SPEAKER_00]: Being told to just get it over the line rather than risk falling behind.

[00:25:44] [SPEAKER_00]: And with all that in mind and everything that's going on right now,

[00:25:46] [SPEAKER_00]: how can businesses balance that drive for AI innovation with equally the need to

[00:25:52] [SPEAKER_00]: address potential risks and unintended consequences of AI deployment?

[00:25:57] [SPEAKER_00]: I suspect you've seen a few mistakes made in your time from clients.

[00:26:01] [SPEAKER_00]: I don't expect you to name any names, but what are they doing wrong?

[00:26:05] [SPEAKER_00]: And how can they stop doing it wrong?

[00:26:09] [SPEAKER_01]: Well, perhaps counterintuitively,

[00:26:12] [SPEAKER_01]: for most businesses, our advice is: don't have an AI strategy.

[00:26:19] [SPEAKER_01]: Now, there are some businesses whose absolute core

[00:26:25] [SPEAKER_01]: competitive advantage is being fundamentally changed by AI.

[00:26:31] [SPEAKER_01]: So take something like outsourced customer service

[00:26:36] [SPEAKER_01]: call center management.

[00:26:39] [SPEAKER_01]: There your AI strategy and your business strategy probably do need to be effectively

[00:26:46] [SPEAKER_01]: identical, because you're tackling something so close to the core of what you're doing.

[00:26:51] [SPEAKER_01]: Outside of things like that, let's say you're a pharma company

[00:26:57] [SPEAKER_01]: that wants to use AI to solve some of your critical problems, maybe something

[00:27:02] [SPEAKER_01]: like improving the efficiency of your clinical trials.

[00:27:07] [SPEAKER_01]: In cases like that, you should think of it as:

[00:27:11] [SPEAKER_01]: you have a business strategy, and your AI strategy should serve that.

[00:27:20] [SPEAKER_01]: If you spend too long going around collecting 200 possible use cases

[00:27:28] [SPEAKER_01]: of AI in your organization and then prioritizing a laundry list of AI

[00:27:36] [SPEAKER_01]: applications independent of your business strategy, what you'll find is

[00:27:41] [SPEAKER_01]: you'll lose energy, your business will lose focus.

[00:27:46] [SPEAKER_01]: And what you eventually build will not affect the core KPIs that you actually care about.

[00:27:53] [SPEAKER_01]: A much better way of tackling that problem is to think hard about your business

[00:27:59] [SPEAKER_01]: strategy and then work with an AI expert, whether internal or external,

[00:28:07] [SPEAKER_01]: and figure out how AI can play a part in that strategy.

[00:28:12] [SPEAKER_01]: Because you actually understand what your business needs to do better than anyone.

[00:28:17] [SPEAKER_01]: If you can find the right expert that can sort of help you

[00:28:21] [SPEAKER_01]: understand how AI can feed into that, well, then you'll be working on the things

[00:28:26] [SPEAKER_01]: that will actually matter to your business.

[00:28:28] [SPEAKER_01]: And so you will have the political energy, the available capital,

[00:28:33] [SPEAKER_01]: the internal resources to actually execute on it.

[00:28:38] [SPEAKER_01]: If you don't do that, you just tend to end up with these POCs lying on the shelf.

[00:28:42] [SPEAKER_00]: And I'm curious, in your experience, from everything that you've seen now,

[00:28:46] [SPEAKER_00]: are there any particular qualities that distinguish leaders who successfully navigate

[00:28:50] [SPEAKER_00]: the integration of AI in companies from those who struggle with the transition?

[00:28:55] [SPEAKER_00]: Have you observed anything here?

[00:28:58] [SPEAKER_01]: Yeah, it's a very weird mix of sort of arrogance and humility

[00:29:07] [SPEAKER_01]: that seems to me to create the most successful transformations.

[00:29:12] [SPEAKER_01]: And so it's the humility to really deeply engage with experts that know things that you don't.

[00:29:27] [SPEAKER_01]: Both the experts on the AI side and the people on the front lines of the business,

[00:29:33] [SPEAKER_01]: whoever they may be; they will know a great deal about the day-to-day reality of your business.

[00:29:39] [SPEAKER_01]: And if you can truly engage with those two sets of people while holding the business

[00:29:45] [SPEAKER_01]: strategy clearly in your head, you can develop some really powerful and innovative

[00:29:51] [SPEAKER_01]: solutions that solve the real problems that your frontline workers care about.

[00:29:56] [SPEAKER_01]: But then you need to kind of switch into this slightly more arrogant,

[00:30:02] [SPEAKER_01]: focused mode where there will always be problems.

[00:30:09] [SPEAKER_01]: There will always be blockers inside a real organization.

[00:30:13] [SPEAKER_01]: There will always be vetoes inside a real organization.

[00:30:17] [SPEAKER_01]: And you have to be able to push through those and make sure that the requisite

[00:30:24] [SPEAKER_01]: actions are taken even in slightly difficult circumstances.

[00:30:31] [SPEAKER_01]: And having the judgment to know when to be in one mode and when to be in the other

[00:30:38] [SPEAKER_01]: is also really hard.

[00:30:41] [SPEAKER_00]: Well, it's been such a fantastic conversation with you today.

[00:30:44] [SPEAKER_00]: I've learned so much.

[00:30:46] [SPEAKER_00]: I hope everyone listening will learn so much.

[00:30:48] [SPEAKER_00]: But I'm very conscious we've been looking forward: talking about technology,

[00:30:51] [SPEAKER_00]: how it will impact businesses, the problems we're solving.

[00:30:54] [SPEAKER_00]: But before I let you go, I'd love to find out a little more about you because

[00:30:58] [SPEAKER_00]: I think none of us are able to achieve any degree of success without a little help

[00:31:02] [SPEAKER_00]: along the way, and very often it's someone that we're grateful towards.

[00:31:06] [SPEAKER_00]: Maybe they saw something in us, or maybe they just inspired us and became our hero.

[00:31:11] [SPEAKER_00]: But is there anybody that you would like to give a shout out today that's had

[00:31:16] [SPEAKER_00]: a significant impact on your career?

[00:31:18] [SPEAKER_00]: Because as I said, none of us get to where we're meant to be without a

[00:31:22] [SPEAKER_00]: little help along the way.

[00:31:23] [SPEAKER_00]: Is there anybody that you'd like to shout out?

[00:31:26] [SPEAKER_01]: Oh, I mean, there's been so many people who've been so

[00:31:31] [SPEAKER_01]: transformationally helpful to me.

[00:31:34] [SPEAKER_01]: I think probably, you know,

[00:31:37] [SPEAKER_01]: there's a bunch of people that I'm personally very grateful for: my PhD supervisor,

[00:31:41] [SPEAKER_01]: Gabriel Aeppli, and my postdoc supervisor, Amir Yacoby.

[00:31:47] [SPEAKER_01]: And then some of our investors, Saul Klein and Mark Beath,

[00:31:51] [SPEAKER_01]: but perhaps the one person that was a really significant influence on me

[00:31:57] [SPEAKER_01]: was Charlie Munger, Warren Buffett's business partner.

[00:32:02] [SPEAKER_01]: Like when I was quite a lot younger,

[00:32:05] [SPEAKER_01]: I read Poor Charlie's Almanack and every one of his speeches that I could get

[00:32:11] [SPEAKER_01]: my hands on, and throughout the course of Faculty, steering toward what we

[00:32:20] [SPEAKER_01]: thought Charlie Munger would think has always been a really helpful guide for us.

[00:32:27] [SPEAKER_00]: Oh, well, I'll give a shout out to everybody there.

[00:32:31] [SPEAKER_00]: It's so important to get these stories out there.

[00:32:34] [SPEAKER_00]: And for anyone listening just want to find out more information about yourself,

[00:32:38] [SPEAKER_00]: your work, faculty, anything in between.

[00:32:41] [SPEAKER_00]: Where do you like to point everyone listening?

[00:32:44] [SPEAKER_01]: So if it's about faculty,

[00:32:47] [SPEAKER_01]: there's our website, faculty.ai.

[00:32:52] [SPEAKER_01]: I'm on Twitter at Mark 1 of 10,

[00:32:55] [SPEAKER_01]: although to be honest, I probably use it less than I perhaps should.

[00:33:00] [SPEAKER_01]: I tell you what, I'm starting to think about writing up

[00:33:04] [SPEAKER_01]: a bunch of the lessons that we've learned.

[00:33:07] [SPEAKER_01]: I've now been doing this AI work for 10 years.

[00:33:11] [SPEAKER_01]: And so if people would like to

[00:33:15] [SPEAKER_01]: get a hold of that, in whatever form we decide to write it up,

[00:33:21] [SPEAKER_01]: they can drop us an email at info@faculty.ai

[00:33:25] [SPEAKER_01]: and we'll make sure they're on the list whenever we eventually

[00:33:30] [SPEAKER_01]: bring that out later in the year.

[00:33:32] [SPEAKER_00]: That's a fantastic idea.

[00:33:34] [SPEAKER_00]: We will be your accountability partners.

[00:33:36] [SPEAKER_00]: We'll make sure you get that done.

[00:33:39] [SPEAKER_00]: But as I said, love chatting with you today.

[00:33:42] [SPEAKER_00]: You founded Faculty to help organizations make better decisions using human-led AI.

[00:33:48] [SPEAKER_00]: That's one of the things I love about you.

[00:33:49] [SPEAKER_00]: And you were one of the very few London business leaders

[00:33:52] [SPEAKER_00]: chosen to attend that AI Safety Summit at Bletchley Park.

[00:33:56] [SPEAKER_00]: And in today's interview, I just loved hearing more about the grounded

[00:34:00] [SPEAKER_00]: and pragmatic case that you're making for sensible, so-called

[00:34:04] [SPEAKER_00]: mundane AI regulation, and your belief that this era of AI has begun.

[00:34:10] [SPEAKER_00]: And over the next decade, every business will need to become not just a tech

[00:34:13] [SPEAKER_00]: business, but an AI business.

[00:34:16] [SPEAKER_00]: So much food for thought.

[00:34:17] [SPEAKER_00]: I encourage everyone listening to send that email and stay up to speed with your

[00:34:21] [SPEAKER_00]: work. But thank you so much.

[00:34:23] [SPEAKER_00]: Thank you very much.

[00:34:24] [SPEAKER_00]: So as we stand on the cusp of this new technological era,

[00:34:29] [SPEAKER_00]: the insights shared by Dr.

[00:34:30] [SPEAKER_00]: Marc Warner today offer a powerful roadmap for businesses

[00:34:34] [SPEAKER_00]: navigating the complexities of AI.

[00:34:37] [SPEAKER_00]: And from his belief that every company

[00:34:39] [SPEAKER_00]: must evolve into an AI business to his call for sensible,

[00:34:44] [SPEAKER_00]: domain-specific AI regulation, I think Marc has perfectly outlined a future

[00:34:49] [SPEAKER_00]: where those that embrace AI safety and innovation, they're the ones that are

[00:34:54] [SPEAKER_00]: going to lead the way.

[00:34:55] [SPEAKER_00]: And it was also great to hear real-world stories of Faculty's work

[00:34:59] [SPEAKER_00]: from life-saving pandemic forecasting with the NHS to enabling ethical

[00:35:04] [SPEAKER_00]: AI deployment. And for me, all of this together demonstrates

[00:35:08] [SPEAKER_00]: the profound impact that well integrated AI can have.

[00:35:13] [SPEAKER_00]: And to you listening, wherever you are in the world,

[00:35:15] [SPEAKER_00]: as you reflect on today's discussion, I want you to consider

[00:35:19] [SPEAKER_00]: how your business can not only adopt AI, but do so in a way that aligns

[00:35:23] [SPEAKER_00]: with your core strategies and values.

[00:35:25] [SPEAKER_00]: Because the future of business is intertwined with the future of AI.

[00:35:31] [SPEAKER_00]: And the big question is, will your company rise to that challenge or be left behind?

[00:35:36] [SPEAKER_00]: As always, let me know your thoughts.

[00:35:38] [SPEAKER_00]: Email techblogwriter@outlook.com, or connect with me on LinkedIn at Neil C Hughes.

[00:35:43] [SPEAKER_00]: But until next time, stay curious,

[00:35:47] [SPEAKER_00]: stay informed, keep exploring the ever evolving landscape of technology.

[00:35:51] [SPEAKER_00]: We'll do it all again tomorrow.

[00:35:53] [SPEAKER_00]: But thank you for listening as always.

[00:35:55] [SPEAKER_00]: And don't forget, I'll be back same time, same place tomorrow morning

[00:35:58] [SPEAKER_00]: with another guest. Speak to you all then.