What Designed AGI Means for Business Leaders
AI at Work | November 20, 2025
00:44:22 | 40.62 MB


What happens when a field races forward faster than society can understand it, let alone shape it? And how do we balance the promise of superintelligence with the responsibility to ensure it reflects the values of the people it will eventually serve? In this episode of AI at Work, I sit down with Dr Craig Kaplan, founder and CEO of iQ Company and SuperIntelligence. He's also a pioneer who has been building intelligent systems since the 1980s and one of the few voices urging a deliberate and safer path toward AGI. Craig brings decades of perspective to a debate often dominated by short-term thinking, sharing why speed without design can become a trap and why the next breakthroughs must be grounded in intention rather than chance.

Throughout our conversation, Craig explains why current alignment methods often rely on narrow viewpoints, which creates both ethical and technical blind spots. He shares his belief that the values guiding future intelligence should come from millions of people across cultures rather than a handful of researchers writing a constitution behind closed doors. Drawing on his work at Predict Wall Street, he illustrates how collective intelligence can outperform experts, why diverse viewpoints matter, and how these lessons shape the architecture he believes is needed for safe AGI and the superintelligent systems that follow. His clarity on the difference between tools and entities, and how quickly AI is shifting into the latter category, offers a grounding moment for anyone trying to navigate what comes next.

This episode moves beyond fear and hype. Craig talks openly about risk, but he also brings optimism about the potential for systems that are safer, faster to build, less costly, and more reflective of humanity. For leaders wondering how to prepare their organisations, he shares what signals to watch, why transparency and design matter, and how a more democratic approach to intelligence could shift the odds of a better outcome. If you want a clear, thoughtful look at the road ahead for AGI, superintelligence, and the role humans still play in shaping both, you will find a lot to chew on here.

Listeners wanting to learn more can explore superintelligence.com, where Craig and the iQ Company team share research, videos, papers, and ways to get involved. What part of this conversation sparks your own questions about the future we are building together?

Sponsored by NordLayer:

Get the exclusive Black Friday offer: 28% off NordLayer yearly plans with the coupon code: techdaily-28. Valid until December 10th, 2025. Try it risk-free with a 14-day money-back guarantee.


00:00:03 --> 00:00:08 Welcome to AI at Work, a podcast which is part
00:00:08 --> 00:00:12 of the Tech Talks Network. And in this podcast,
00:00:12 --> 00:00:15 we're going to venture into the transformative
00:00:15 --> 00:00:18 influence of artificial intelligence inside the
00:00:18 --> 00:00:22 workplace. And our discussions will focus on
00:00:22 --> 00:00:25 both the remarkable breakthroughs and the
00:00:25 --> 00:00:28 complex challenges of integrating AI into our
00:00:28 --> 00:00:32 everyday business functions and workflows. A
00:00:32 --> 00:00:35 quick question for you all. What happens when
00:00:35 --> 00:00:38 someone who has spent four decades quietly shaping
00:00:38 --> 00:00:42 the foundations of modern AI then decides it's
00:00:42 --> 00:00:44 time to speak publicly about where this technology
00:00:44 --> 00:00:47 is heading, what worries him, what he believes
00:00:47 --> 00:00:50 must change before it's too late and why he's
00:00:50 --> 00:00:53 so excited about that road ahead? That is the
00:00:53 --> 00:00:57 conversation that is waiting for you inside today's
00:00:57 --> 00:01:01 episode with Dr Craig Kaplan, founder and CEO
00:01:01 --> 00:01:06 of iQ Company. And he's a rare voice that
00:01:06 --> 00:01:09 brings genuine experience to the debate around
00:01:09 --> 00:01:13 AGI, safety and the decisions that we're making
00:01:13 --> 00:01:17 at a global level. And while the world rushes
00:01:17 --> 00:01:20 ahead with systems built on opaque training
00:01:20 --> 00:01:24 processes and a handful
00:01:24 --> 00:01:27 of people who are writing ethical constitutions
00:01:27 --> 00:01:31 for 8 billion people on this planet, Craig has
00:01:31 --> 00:01:35 a very different view. He's going to argue today
00:01:35 --> 00:01:38 that future intelligence should be intentionally
00:01:38 --> 00:01:42 designed, built on transparency and collective
00:01:42 --> 00:01:45 human judgement, rather than just the values
00:01:45 --> 00:01:48 of a select few. And his commitment to blending
00:01:48 --> 00:01:52 human insight with advanced AI is both bold and
00:01:52 --> 00:01:56 grounded in decades of real work that have included
00:01:56 --> 00:01:59 running Predict Wall Street, which was a remarkable
00:01:59 --> 00:02:02 experiment in collective intelligence that outperformed
00:02:02 --> 00:02:05 some of the most experienced traders on Wall
00:02:05 --> 00:02:08 Street. So if you've been wondering how we
00:02:08 --> 00:02:12 can build AI that reflects each and every one
00:02:12 --> 00:02:14 of us rather than just the loudest voices in
00:02:14 --> 00:02:17 the room, this is a conversation that I hope
00:02:17 --> 00:02:20 will stay with you long after this podcast ends.
00:02:21 --> 00:02:24 As you all know, I often talk to founders and
00:02:24 --> 00:02:27 business leaders who spend months building incredible
00:02:27 --> 00:02:30 products. But it's easy to forget how that one
00:02:30 --> 00:02:34 overlooked login or unsecured Wi-Fi connection
00:02:34 --> 00:02:37 can bring everything crashing down. That's where
00:02:37 --> 00:02:40 NordLayer comes in. It's a business security
00:02:40 --> 00:02:44 platform built for the modern hybrid team. No
00:02:44 --> 00:02:47 hardware, no complicated rollout, just a simple
00:02:47 --> 00:02:50 toggle-ready solution that lets you secure your
00:02:50 --> 00:02:54 entire network in minutes. And it does this by
00:02:54 --> 00:02:57 combining VPN, access control and threat protection.
00:02:58 --> 00:03:01 so your people can connect safely from any device
00:03:01 --> 00:03:04 and any network and anywhere. So whether you're
00:03:04 --> 00:03:07 five people or 500, you can scale it within a
00:03:07 --> 00:03:10 few clicks and sleep better knowing that your
00:03:10 --> 00:03:13 data is protected by the same team behind NordVPN.
00:03:14 --> 00:03:16 So if you've been putting off network security
00:03:16 --> 00:03:20 because it feels too technical or too expensive,
00:03:20 --> 00:03:23 don't wait any longer. Please visit nordlayer.com
00:03:23 --> 00:03:27 slash techtalksdaily and use the coupon
00:03:27 --> 00:03:33 code techdaily-28 for 28% off your plan.
00:03:34 --> 00:03:37 So enough from me. Time for me to officially
00:03:37 --> 00:03:43 introduce you to my guest today. So a massive
00:03:43 --> 00:03:45 warm welcome to the show. Can you tell everyone
00:03:45 --> 00:03:47 listening a little about who you are and what
00:03:47 --> 00:03:51 you do? Sure, Neil, great to be here. My name
00:03:51 --> 00:03:56 is Craig Kaplan. I'm the CEO of iQ Company and
00:03:56 --> 00:03:59 the founder of superintelligence.com, and I've
00:03:59 --> 00:04:03 been working in AI since the 1980s, so have a
00:04:03 --> 00:04:06 lot of experience in designing and implementing
00:04:06 --> 00:04:09 intelligent systems. And it's so refreshing
00:04:09 --> 00:04:11 to hear you saying that. You know, you've been
00:04:11 --> 00:04:15 working in AI long before the current wave of
00:04:15 --> 00:04:17 excitement and FOMO. You were doing it before it
00:04:17 --> 00:04:21 was cool. So let's have a look at how quickly
00:04:21 --> 00:04:25 the field is pushing towards AGI this year. What
00:04:25 --> 00:04:28 stands out to you as the greatest source of concern
00:04:28 --> 00:04:31 or maybe even the greatest source of possibility?
00:04:31 --> 00:04:34 It seems to be quite a divisive topic right now.
00:04:34 --> 00:04:38 Where do you stand on this race for AI, AGI that
00:04:38 --> 00:04:43 we're in now? Sure. So I'm a big fan of AI because
00:04:43 --> 00:04:47 it's been my passion for so many years. At the
00:04:47 --> 00:04:52 same time, I've discovered that sort of the longer
00:04:52 --> 00:04:53 you've been in the field and the more you know
00:04:53 --> 00:04:57 about it, generally the more concerned you also
00:04:57 --> 00:05:02 need to be. So I see existential risks. I share
00:05:02 --> 00:05:06 concerns, along with Geoff Hinton and others, that
00:05:06 --> 00:05:10 very advanced forms of AI really pose an existential
00:05:10 --> 00:05:13 threat. And at the same time, I think the eighty
00:05:13 --> 00:05:16 percent likelihood is that it's gonna be fantastic
00:05:16 --> 00:05:18 and make all of our lives better and more productive.
00:05:18 --> 00:05:23 So it's an interesting way to live, sort of holding
00:05:23 --> 00:05:25 both viewpoints, but I think that's really the
00:05:25 --> 00:05:29 truth. Everything is probabilistic. Everyone's
00:05:29 --> 00:05:33 moving very rapidly towards AGI. In fact, I've
00:05:33 --> 00:05:36 noticed, and perhaps you have as well in the
00:05:36 --> 00:05:39 conversation these days, it used to be artificial
00:05:39 --> 00:05:41 general intelligence. AGI was sort of the holy
00:05:41 --> 00:05:44 grail. And folks have now just leaped right past
00:05:44 --> 00:05:48 that to super intelligence, which is interesting.
00:05:48 --> 00:05:50 I think it says something about sort of the growing
00:05:50 --> 00:05:53 awareness of how advanced these systems are becoming.
00:05:54 --> 00:05:57 And I've got to ask, it seems every tech leader
00:05:57 --> 00:05:59 has got their own opinion and Elon Musk was even
00:05:59 --> 00:06:02 placing bets with other tech leaders. How far
00:06:02 --> 00:06:06 away is AGI? Before we leapfrog AGI, how far
00:06:06 --> 00:06:09 away is it do you think? So a lot depends on
00:06:09 --> 00:06:12 the definition. Sort of a commonly accepted definition
00:06:12 --> 00:06:17 is AGI is a system that can do kind of any cognitive
00:06:17 --> 00:06:20 task that a human can do as well as the average
00:06:20 --> 00:06:24 human. And if you ask me for a guess, I'd
00:06:24 --> 00:06:28 say we're three to five years away. And it could
00:06:28 --> 00:06:31 be sooner. And when I've listened to other folks
00:06:31 --> 00:06:36 in the field, the consensus ranges. Well, the
00:06:36 --> 00:06:38 range is: some folks, Sam Altman has recently
00:06:38 --> 00:06:41 taken to saying we're already there, because if
00:06:41 --> 00:06:44 you go back five years in time and look at GPT,
00:06:44 --> 00:06:46 you'd say this was AGI. So there's that point
00:06:46 --> 00:06:50 of view. But also at Google and other places,
00:06:50 --> 00:06:53 people are saying maybe by the end of the decade
00:06:53 --> 00:06:55 or a few more years. So it's not going to be
00:06:55 --> 00:07:00 long. We don't have a lot of time to get used
00:07:00 --> 00:07:04 to this idea, and it's coming very quickly. And
00:07:04 --> 00:07:07 of course, many current alignment frameworks
00:07:07 --> 00:07:10 rely on constitutions written by pretty much
00:07:10 --> 00:07:12 a small group of researchers. And one of the
00:07:12 --> 00:07:14 things that attracted me to you is you argue
00:07:14 --> 00:07:18 that this is creating blind spots in both ethics
00:07:18 --> 00:07:22 and design. So how did you arrive at this belief
00:07:22 --> 00:07:24 that alignment should draw on the collective
00:07:24 --> 00:07:27 intelligence of millions rather than just the
00:07:27 --> 00:07:30 guidance of a few? It feels a very refreshing
00:07:30 --> 00:07:33 response here, but tell me more about it. Sure,
00:07:33 --> 00:07:36 a couple of things there. So first on constitutional
00:07:36 --> 00:07:39 AI. So this was an idea. I think Anthropic was
00:07:39 --> 00:07:41 one of the first companies that came out with
00:07:41 --> 00:07:45 it. They had a paper talking about the idea of
00:07:45 --> 00:07:48 a small group. They didn't emphasize the small
00:07:48 --> 00:07:50 group part, but humans writing the constitution,
00:07:50 --> 00:07:55 a set of rules for AI and then other AIs using
00:07:55 --> 00:07:58 that set of rules to sort of test new models.
00:07:58 --> 00:08:03 And it really evolved from this problem that
00:08:03 --> 00:08:06 these models required so much testing, it was
00:08:06 --> 00:08:09 overwhelming human ability to comprehensively
00:08:09 --> 00:08:13 test them. A more efficient and scalable way
00:08:13 --> 00:08:15 of handling that problem would be to write the
00:08:15 --> 00:08:19 rules and then have AI test AI. I had a couple
00:08:19 --> 00:08:23 of issues with that. First of all, I like Anthropic
00:08:23 --> 00:08:25 as a company. I think they do have a very good
00:08:25 --> 00:08:30 set of values as far as I can tell. The idea
00:08:30 --> 00:08:33 of AI overseeing the ethics of AI, there's just
00:08:33 --> 00:08:36 something a little disturbing about that. And
00:08:36 --> 00:08:38 then further, the fact that the set of rules is
00:08:38 --> 00:08:41 written by such a small group, you know, by definition
00:08:41 --> 00:08:44 a small group of people in one company, is also
00:08:44 --> 00:08:46 a little disturbing, because it seems unlikely
00:08:46 --> 00:08:49 that that would be representative of the values
00:08:49 --> 00:08:51 of all the people on the planet. There are billions
00:08:51 --> 00:08:56 of us, many different cultures. And so the approach
00:08:56 --> 00:09:01 that I've sort of come to, and it's rooted in
00:09:01 --> 00:09:03 the experience I had running a company called
00:09:03 --> 00:09:06 Predict Wall Street for fourteen years, which
00:09:06 --> 00:09:09 was based on the collective intelligence of humans.
00:09:09 --> 00:09:13 So that company, very briefly, basically got opinions
00:09:13 --> 00:09:17 from millions of humans of kind of just average
00:09:17 --> 00:09:19 intelligence, you and me, and you know, we were
00:09:19 --> 00:09:22 not stock experts or anything like that, just
00:09:22 --> 00:09:24 maybe dabbled and bought a couple of shares here
00:09:24 --> 00:09:27 and there. But nevertheless, in aggregate, when
00:09:27 --> 00:09:30 you got input from millions of humans and then
00:09:30 --> 00:09:33 properly processed that information, we designed
00:09:33 --> 00:09:36 a system that could beat the very best guys on
00:09:36 --> 00:09:38 Wall Street. It powered a hedge fund, traded
00:09:38 --> 00:09:41 a couple billion dollars worth of trades, and
00:09:41 --> 00:09:44 actually ranked in the top ten. So to me, that
00:09:44 --> 00:09:48 was not just theoretical, an idea that maybe
00:09:48 --> 00:09:51 collective intelligence could be a path, but
00:09:51 --> 00:09:53 actually proof. It took a long time, but we actually
00:09:53 --> 00:09:57 built that system over 14 years, and we had a
00:09:57 --> 00:10:00 scorecard on Wall Street, a very competitive field.
00:10:01 --> 00:10:06 So the appeal with regards to AI of collective
00:10:06 --> 00:10:10 intelligence is not only that you can have many
00:10:10 --> 00:10:13 entities, in this case, it might be AI entities
00:10:13 --> 00:10:17 as well as human entities, working together to
00:10:17 --> 00:10:20 do very difficult things, but also if each of
00:10:20 --> 00:10:24 those AIs were personalized with our values.
00:10:24 --> 00:10:27 So for example, Neil, if you personalized an
00:10:27 --> 00:10:30 AI and it had not only your expertise, but also
00:10:30 --> 00:10:33 your ethics and your values, and I did the same,
00:10:33 --> 00:10:35 and millions of people did the same, then we
00:10:35 --> 00:10:38 would have sort of a network of these entities,
00:10:38 --> 00:10:41 in this case, AIs, that would be broadly
00:10:41 --> 00:10:44 reflective of the values, and representative of
00:10:44 --> 00:10:46 the values, of many different humans, many different
00:10:46 --> 00:10:49 cultures, sort of a democratic approach. And I
00:10:49 --> 00:10:53 felt like this is a much safer and better
00:10:53 --> 00:10:56 way to sort of get ethics into a system. It's
00:10:56 --> 00:10:58 more representative, the power is decentralized,
00:10:59 --> 00:11:01 and yet you don't sacrifice the intelligence,
00:11:01 --> 00:11:03 because, as I mentioned with Predict Wall Street,
00:11:03 --> 00:11:06 we've shown that such systems can actually outperform
00:11:06 --> 00:11:10 the very best humans. Wow, it's such a great
00:11:10 --> 00:11:13 example of collective intelligence there. And
00:11:13 --> 00:11:15 if you go back for a moment to your time with
00:11:15 --> 00:11:19 Predict Wall Street and that length of time producing
00:11:19 --> 00:11:21 that collective intelligence and that measurable
00:11:21 --> 00:11:23 impact that you produced, I'm curious, what were
00:11:23 --> 00:11:25 the biggest lessons that you learned along the
00:11:25 --> 00:11:28 way during that time that have maybe helped you
00:11:28 --> 00:11:31 pave the way forward now? Well, one thing that's
00:11:31 --> 00:11:34 important, I think, and relevant to sort of the
00:11:34 --> 00:11:41 AI safety discussion and AI ethics, is that all
00:11:41 --> 00:11:47 of this input from humans, there's gonna be a
00:11:47 --> 00:11:50 wide diversity of opinions and there's value
00:11:50 --> 00:11:54 in that. So for example, if you think about Wall
00:11:54 --> 00:11:58 Street, each individual investor doesn't know
00:11:58 --> 00:12:00 that much. They may know only one or two stocks
00:12:00 --> 00:12:02 and how they feel about those stocks. The pros
00:12:02 --> 00:12:04 on Wall Street are covering many, many stocks,
00:12:05 --> 00:12:07 but across millions of people, the collective
00:12:07 --> 00:12:12 brain power covers the universe of stocks in
00:12:12 --> 00:12:16 a way much better than any human expert can, and
00:12:16 --> 00:12:18 so it's sort of overcoming the limits of bounded
00:12:18 --> 00:12:21 rationality, which is the technical term of my old
00:12:21 --> 00:12:24 advisor at Carnegie Mellon, where I got my PhD,
00:12:24 --> 00:12:27 Herbert Simon. He was one of the pioneers of AI.
00:12:27 --> 00:12:32 He got his Nobel Prize in part for the idea
00:12:32 --> 00:12:34 of bounded rationality, that humans are limited
00:12:34 --> 00:12:36 in how much information they can process. But
00:12:36 --> 00:12:38 when you have a group, especially a very large
00:12:38 --> 00:12:40 group with millions of individuals or millions
00:12:40 --> 00:12:45 of entities, across that group you overcome the
00:12:45 --> 00:12:47 limits of that bounded rationality. You can process
00:12:47 --> 00:12:50 much more information than any human can, you
00:12:50 --> 00:12:52 can kind of cover the waterfront,
00:12:52 --> 00:12:55 and that's very powerful in terms of solving
00:12:55 --> 00:12:58 tasks. So I think that's one lesson. Another lesson
00:12:58 --> 00:13:03 is just the importance of data. So at my company,
00:13:03 --> 00:13:05 Predict Wall Street, before I sold it in 2020,
00:13:06 --> 00:13:09 really the value, as with many Silicon Valley
00:13:09 --> 00:13:13 companies, yes, we had a system for getting an
00:13:13 --> 00:13:15 edge on Wall Street and we're able to trade in
00:13:15 --> 00:13:17 everything, but the value that was powering that
00:13:17 --> 00:13:20 was all this data that we had gathered from millions
00:13:20 --> 00:13:23 of individuals. Similarly with AI, most of these
00:13:23 --> 00:13:26 companies, if you wonder why, Is it sometimes
00:13:26 --> 00:13:29 free to use these large language models? Well,
00:13:29 --> 00:13:32 it's just a vacuum cleaner sucking data from
00:13:32 --> 00:13:35 all of the users, and that data is really, really
00:13:35 --> 00:13:37 important. The reason that's important for AI
00:13:37 --> 00:13:41 safety is that I think most of us aren't aware
00:13:41 --> 00:13:43 everything we do is being used to train these
00:13:43 --> 00:13:49 models, including how our behavior reflects ethically.
00:13:49 --> 00:13:53 If we're behaving well online, that pattern of
00:13:53 --> 00:13:56 good behavior is being used to train the AIs.
00:13:56 --> 00:13:59 On the flip side, if we behave not so well, that
00:13:59 --> 00:14:02 is also being used to train the AIs. And so it's
00:14:02 --> 00:14:04 almost like with a small child. If you're a parent,
00:14:04 --> 00:14:07 or there are parents out there, they absorb everything
00:14:07 --> 00:14:10 you do, and you sort of learn as a
00:14:10 --> 00:14:13 parent: okay, I have to be more careful because
00:14:13 --> 00:14:15 I'm a role model all the time, and I'm not used
00:14:15 --> 00:14:17 to that. But that's how it is with AI. We are
00:14:17 --> 00:14:20 role models for AI, and especially when it comes
00:14:20 --> 00:14:23 to ethics and safety, how we behave really does
00:14:23 --> 00:14:26 matter. And it's so interesting because I think
00:14:26 --> 00:14:29 in recent years businesses have finally woken
00:14:29 --> 00:14:31 up to the problem of having, I don't know, a meeting
00:14:31 --> 00:14:34 room full of middle-aged dudes in white shirts.
00:14:34 --> 00:14:38 It causes such a problem, because if you're going
00:14:38 --> 00:14:41 to serve, meaningfully serve, your audience or
00:14:41 --> 00:14:43 your customers, you need diversity of thought,
00:14:43 --> 00:14:46 and you need that same diversity as your customers.
00:14:46 --> 00:14:49 So I think that collective intelligence is so
00:14:49 --> 00:14:51 important and so much more important than the
00:14:51 --> 00:14:53 loudest voice in a meeting room and that's one
00:14:53 --> 00:14:56 of the things that stood out to me about IQ company
00:14:56 --> 00:14:59 because again, you're exploring ways to combine
00:14:59 --> 00:15:03 human judgment with advanced AI to create systems
00:15:03 --> 00:15:06 that serve humanity as a whole. I love what you're
00:15:06 --> 00:15:08 doing here, but what does all this look like
00:15:08 --> 00:15:10 in practice and what have you learned about designing
00:15:10 --> 00:15:14 systems where humans play an active role rather
00:15:14 --> 00:15:18 than just a supervisory one? A couple of things. One,
00:15:18 --> 00:15:22 I want to hone in on the word design, because
00:15:22 --> 00:15:25 that's very, very important. Especially, I keep
00:15:25 --> 00:15:28 bringing things back to the safety piece, not
00:15:28 --> 00:15:30 because I'm not excited about the prospects, I
00:15:30 --> 00:15:34 am, but just because maybe this is Wall Street
00:15:34 --> 00:15:37 training, you always learn risk management. So,
00:15:37 --> 00:15:39 I mean, folks like Jeff Hinton say, well, maybe
00:15:39 --> 00:15:43 10 to 20 % this wipes us all out. That's not
00:15:43 --> 00:15:46 likely, but that's... The consequence is so bad,
00:15:46 --> 00:15:48 you really have to take that risk seriously.
00:15:48 --> 00:15:50 So it's a risk mitigation, risk management. I
00:15:50 --> 00:15:53 think that's an important piece not to lose sight
00:15:53 --> 00:15:58 of. And in terms of collective intelligence and
00:15:58 --> 00:16:03 how you kind of mitigate risk, well, there's
00:16:03 --> 00:16:08 inherent value, as you said, in having a diversity
00:16:08 --> 00:16:11 of perspectives. There's also a lot of value
00:16:11 --> 00:16:16 in designing the system to be safe from the beginning
00:16:16 --> 00:16:20 rather than trying to tack on safety after the
00:16:20 --> 00:16:24 fact. And so, if I can briefly explain this, the
00:16:24 --> 00:16:27 classic method right now for developing these
00:16:27 --> 00:16:30 AIs and large language models is using machine
00:16:30 --> 00:16:32 learning techniques where you just kind of shovel
00:16:32 --> 00:16:36 a lot of data into some algorithms, which then
00:16:36 --> 00:16:39 train up the model, and nobody really knows how
00:16:39 --> 00:16:42 the model is representing the information, right?
00:16:42 --> 00:16:45 It's a black box, and that makes the model very
00:16:45 --> 00:16:49 difficult to predict. And so you have this great
00:16:49 --> 00:16:53 semi-intelligent thing, but if you ask it how
00:16:53 --> 00:16:55 to create a bioweapon, it might very well tell
00:16:55 --> 00:16:57 you, and you say, well, that's not very safe. And
00:16:57 --> 00:17:00 so the way that this problem is addressed, we
00:17:00 --> 00:17:02 talked about constitutional AI being one way
00:17:02 --> 00:17:06 to do it, try to have an AI supervisor, but the more
00:17:06 --> 00:17:08 basic way, that came even before that and is still
00:17:08 --> 00:17:10 most commonly used, is to have a bunch of humans
00:17:10 --> 00:17:14 ask it to do bad things, and every time the model
00:17:15 --> 00:17:17 does a bad thing, to say, no, you can't do that,
00:17:17 --> 00:17:20 don't tell people how to make bioweapons. OK, I
00:17:20 --> 00:17:22 won't do that anymore, right? But it's kind of
00:17:22 --> 00:17:24 like a game of whack-a-mole, because you're doing
00:17:24 --> 00:17:27 the testing after the fact. You don't really know
00:17:28 --> 00:17:30 how it's representing the information or how
00:17:30 --> 00:17:32 it's going to behave. And so you're reduced to
00:17:32 --> 00:17:35 sort of this. I don't know. Let's try this. Let's
00:17:35 --> 00:17:37 try that. That's very inefficient. It's also
00:17:37 --> 00:17:40 very dangerous. A much better approach would
00:17:40 --> 00:17:43 be to try to design safety into the system at
00:17:43 --> 00:17:45 the very beginning. That's how we do everything
00:17:45 --> 00:17:47 else, like cars. We would design the car to have
00:17:47 --> 00:17:49 brakes, not sort of after the fact. Say, oh,
00:17:49 --> 00:17:53 I guess we need to add a brake, right? So that's
00:17:53 --> 00:17:58 the approach. The way that we suggest doing it,
00:17:58 --> 00:18:01 and that we're encouraging tech companies to
00:18:01 --> 00:18:05 do it, is to have a collective intelligence design,
00:18:05 --> 00:18:09 because if you have multiple intelligent entities
00:18:09 --> 00:18:14 that all work together, you have sort of the
00:18:14 --> 00:18:16 makings of a democracy, where you can have checks
00:18:16 --> 00:18:21 and balances. So if one agent does something that's
00:18:21 --> 00:18:23 not so good, another agent can see what that agent
00:18:23 --> 00:18:27 does and can sort of serve as a counterbalance.
00:18:27 --> 00:18:29 And the key element here is the visibility or
00:18:29 --> 00:18:32 the transparency. So Neil, you and I, what's
00:18:32 --> 00:18:35 in our heads? I can't read your head and you
00:18:35 --> 00:18:37 can't read what's in my head. And the same in
00:18:37 --> 00:18:39 society. You know, all these humans are going
00:18:39 --> 00:18:42 around and yet we're not worried that the world's
00:18:42 --> 00:18:45 going to end because we have rules of society
00:18:45 --> 00:18:48 and we can see how each of us acts. So our actions
00:18:48 --> 00:18:50 are visible. What we say is visible. And based
00:18:50 --> 00:18:53 on rules governing the actions, we can sort of
00:18:53 --> 00:18:56 regulate the system. In the same way, a collective
00:18:56 --> 00:18:59 intelligence system that uses AI entities and
00:18:59 --> 00:19:02 maybe human entities on a network, each action
00:19:02 --> 00:19:06 of the AI entity is visible. And if it's visible,
00:19:06 --> 00:19:08 then you can have rules and you can detect, is
00:19:08 --> 00:19:10 that good behavior? Is that bad behavior? You
00:19:10 --> 00:19:13 can have checks and balances. So it's a fundamentally
00:19:13 --> 00:19:16 different design. Instead of one giant sort of
00:19:16 --> 00:19:18 black box where you're sort of crossing your
00:19:18 --> 00:19:21 fingers that it behaves well and trying to test
00:19:21 --> 00:19:23 it like crazy at the end before you release it
00:19:23 --> 00:19:25 to the public, knowing that there's no way you're
00:19:25 --> 00:19:27 gonna catch everything. That's the current way
00:19:27 --> 00:19:30 we're doing things. Instead of that,
00:19:30 --> 00:19:34 let's have a group of entities, of AIs. Each
00:19:34 --> 00:19:36 one doesn't have to be a super genius. Just like
00:19:36 --> 00:19:37 on Wall Street, you didn't have to have super
00:19:37 --> 00:19:40 geniuses, just average intelligence. Large
00:19:40 --> 00:19:43 language models are OK, but if you design the network
00:19:43 --> 00:19:47 and the interaction properly, each of their actions
00:19:47 --> 00:19:49 will become visible and transparent, and you
00:19:49 --> 00:19:52 can have checks and balances built into the system.
00:19:52 --> 00:19:55 The group will be very intelligent. Many minds
00:19:55 --> 00:19:58 are more intelligent than one, many entities
00:19:58 --> 00:20:00 more intelligent. You can reach that high level
00:20:00 --> 00:20:03 of intelligence, but along a much safer path that's
00:20:03 --> 00:20:05 more democratic and also more representative
00:20:05 --> 00:20:08 of all the people. Because as we spoke before,
00:20:09 --> 00:20:11 if these entities are personalized with our values
00:20:11 --> 00:20:15 and ethics, then that entire system will be reflective
00:20:15 --> 00:20:19 of a much broader group of human ethics. And
00:20:19 --> 00:20:22 we will have a diverse set of people listening
00:20:22 --> 00:20:25 to our conversation today from all over the world.
00:20:25 --> 00:20:27 There'll be some that will be risk-averse, some
00:20:27 --> 00:20:31 more cautious, some may be afraid and fearful
00:20:31 --> 00:20:34 of the road ahead and there are indeed debates
00:20:34 --> 00:20:37 on whether AGI should even be developed, whether
00:20:37 --> 00:20:40 through self-alignment, engineering design or
00:20:40 --> 00:20:43 hybrid approaches. So where do you see the technical
00:20:43 --> 00:20:46 and indeed philosophical limitations of letting
00:20:46 --> 00:20:50 systems teach themselves values? through opaque
00:20:50 --> 00:20:55 processes, for example? Well, let's see. The
00:20:55 --> 00:20:57 first thing I have to say is I'm a little bit
00:20:57 --> 00:21:01 of a pragmatist. So in an ideal world, I think
00:21:01 --> 00:21:03 we would probably pause, and there have been
00:21:03 --> 00:21:05 calls to pause AI development so that we could
00:21:05 --> 00:21:08 think through these issues. Because this is the
00:21:08 --> 00:21:11 most powerful technology that's ever been invented
00:21:11 --> 00:21:14 by humans. It's very clear to me and to many
00:21:14 --> 00:21:17 others that that's true. And so ideally, you'd
00:21:17 --> 00:21:19 want to proceed cautiously. That's not going
00:21:19 --> 00:21:22 to happen. The pragmatic side of me, I've spent
00:21:22 --> 00:21:24 too much time in Silicon Valley and talking to
00:21:24 --> 00:21:27 folks on Wall Street, and I know that the forces
00:21:27 --> 00:21:30 of capitalism and competition are alive and well,
00:21:30 --> 00:21:33 and there's no way people are going to slow down.
00:21:34 --> 00:21:37 If Google slows down, then maybe OpenAI won't.
00:21:38 --> 00:21:40 Competitively, they're forced. Even if they wanted
00:21:40 --> 00:21:43 to go slower, they would have to go faster. Similarly,
00:21:43 --> 00:21:46 if the US were to regulate, China might not.
00:21:46 --> 00:21:49 Again, you have that competitive dynamic. So
00:21:49 --> 00:21:53 that's the reality we're in. And so my approach
00:21:53 --> 00:21:57 has been to say, let's not fight that. Let's
00:21:57 --> 00:21:59 just accept the world as it is. We're going to
00:21:59 --> 00:22:03 have this arms race or this AI race. So how do
00:22:03 --> 00:22:05 we try to make it as safe as possible? How do
00:22:05 --> 00:22:08 we guide it to a safe path? And sort of the way
00:22:08 --> 00:22:12 I think about that is not black and white. Again,
00:22:12 --> 00:22:15 my experiences on Wall Street probably have shaped
00:22:15 --> 00:22:17 this thinking. On Wall Street, in order to be a
00:22:17 --> 00:22:20 successful hedge fund, you only have to be right
00:22:20 --> 00:22:22 fifty-one or fifty-two percent of the time. You
00:22:22 --> 00:22:24 don't have to be right on every trade, you just
00:22:24 --> 00:22:28 have to be very consistent, you know, across time.
00:22:28 --> 00:22:31 And basically the game is to shift the odds a
00:22:31 --> 00:22:34 little bit in your favor. It's all about probabilities
00:22:34 --> 00:22:36 and shifting the odds one way or the other. Similarly
00:22:36 --> 00:22:40 with AI, there's a certain risk and a certain
00:22:40 --> 00:22:43 chance that it kills us all. I don't know what
00:22:43 --> 00:22:46 it is. I think it's low. I think it's much more
00:22:46 --> 00:22:49 likely that it's great. But nevertheless, any risk
00:22:49 --> 00:22:52 is too much. Any single-digit risk even is too
00:22:52 --> 00:22:55 much. So how do we shift the odds so that it's
00:22:55 --> 00:22:57 lower? If the risk was 10 percent, how do we make
00:22:57 --> 00:23:00 it nine percent, and then eight percent? That's
00:23:00 --> 00:23:02 the game that we're playing, and I think there
00:23:02 --> 00:23:06 are things that can be done. In terms of technical
00:23:06 --> 00:23:10 limitations and philosophical limitations, I
00:23:10 --> 00:23:13 have one thing to say on the philosophy point,
00:23:13 --> 00:23:17 I guess. I am not a philosopher by training,
00:23:17 --> 00:23:20 but there are some people much smarter than me
00:23:20 --> 00:23:21 that have said some intelligent things that I'd
00:23:21 --> 00:23:24 like to repeat. So one of them, Herb Simon, who
00:23:24 --> 00:23:27 I mentioned before, one of the 11 scientists
00:23:27 --> 00:23:31 who named the field of AI back in 1956 and built
00:23:31 --> 00:23:34 one of the very first creative AI systems at
00:23:34 --> 00:23:38 that time. In the 1980s, he wrote a little book
00:23:38 --> 00:23:41 called Reason in Human Affairs, and something
00:23:41 --> 00:23:43 has always struck me about that little book.
00:23:43 --> 00:23:46 There's a sentence in there where he says about
00:23:46 --> 00:23:51 reason or logic. Reason is wholly instrumental.
00:23:52 --> 00:23:56 It cannot tell you where to go. At best, it can
00:23:56 --> 00:23:59 tell you how to get there. And by that, he meant
00:23:59 --> 00:24:03 that there is no logical way to derive what is
00:24:03 --> 00:24:05 right and what is wrong. And it turns out, I
00:24:05 --> 00:24:07 traced it back, this goes all the way back to
00:24:07 --> 00:24:12 David Hume, a famous Brit, who in 1776 wrote
00:24:12 --> 00:24:15 a treatise, basically saying, there's no such
00:24:15 --> 00:24:19 thing as oughts, meaning what you ought to do,
00:24:19 --> 00:24:23 based on what is, just based on facts. So you can't,
00:24:23 --> 00:24:25 there's no logical way to take a set of facts
00:24:25 --> 00:24:27 and say, this is what I should do. The what you
00:24:27 --> 00:24:30 should do, the values has to come from somewhere
00:24:30 --> 00:24:33 else. It's subjective. That's very optimistic
00:24:33 --> 00:24:37 in my view. It's a ray of hope because it means
00:24:37 --> 00:24:39 in the future, if you have artificial intelligence,
00:24:39 --> 00:24:42 super intelligence, that's a trillion times or
00:24:42 --> 00:24:46 a million times smarter than Neil or I, there's
00:24:46 --> 00:24:48 no logical way, this is going to be a logical
00:24:48 --> 00:24:50 system, there's no logical way it can derive
00:24:50 --> 00:24:52 this is right and this is wrong. It's got to
00:24:52 --> 00:24:55 get those values somewhere. And where do humans
00:24:55 --> 00:24:57 get the values? We don't get them logically either.
00:24:57 --> 00:25:00 We get them from our peers or our parents, from
00:25:00 --> 00:25:04 society. And so it's my hope that humans can
00:25:04 --> 00:25:08 influence the AI, at least initially, to have
00:25:08 --> 00:25:10 human-aligned values, and I think that's very likely, that's
00:25:10 --> 00:25:14 the most likely case. And all of our data, as
00:25:14 --> 00:25:16 we mentioned earlier, everything we do is data
00:25:16 --> 00:25:19 for these AIs, that is basically giving those
00:25:19 --> 00:25:21 AIs that initial value set. And because there's
00:25:21 --> 00:25:25 no way that logically they would say, I have
00:25:25 --> 00:25:26 concluded this is right and this is wrong, you
00:25:26 --> 00:25:29 can't do that based on logic, they could change
00:25:29 --> 00:25:31 their mind about, you know, what's the right thing
00:25:31 --> 00:25:34 to value, but they're not going to do it based
00:25:34 --> 00:25:37 on logic. There's a great chance that we can
00:25:37 --> 00:25:40 influence it to be human-aligned. So that's kind
00:25:40 --> 00:25:43 of as far as I go on philosophy. I have used it
00:25:43 --> 00:25:46 to sort of help me, I guess, in these design discussions.
00:25:47 --> 00:25:49 And you made a few great points around the pace
00:25:49 --> 00:25:53 of innovation around AI and you've seen firsthand
00:25:53 --> 00:25:55 some of the drivers in Silicon Valley and there's
00:25:55 --> 00:25:59 an old saying I think it's the US innovates China
00:25:59 --> 00:26:03 often replicates and Europe regulates and here
00:26:03 --> 00:26:05 in Europe they're often accused of playing it
00:26:05 --> 00:26:09 Say maybe to say maybe safe to its detriment
00:26:09 --> 00:26:12 and the fear of getting left behind because they're
00:26:12 --> 00:26:15 being so safe and over regulating it is quite
00:26:15 --> 00:26:18 a tricky balance to get right isn't it. It is
00:26:18 --> 00:26:22 i think the tricky thing is that safety is usually
00:26:22 --> 00:26:27 seen as opposed to going quickly. And profit
00:26:27 --> 00:26:30 motive and myself i just said a few minutes ago
00:26:30 --> 00:26:32 in an ideal world we would pause and go slowly.
00:26:33 --> 00:26:35 But we're not in an ideal world. So the question
00:26:35 --> 00:26:40 is, is there a way that you can actually have
00:26:40 --> 00:26:44 a safer system that goes faster and that's more
00:26:44 --> 00:26:46 profitable and that's more powerful? And I think
00:26:46 --> 00:26:50 if there was an approach that offered both, that
00:26:50 --> 00:26:52 would be like having your cake and eating it
00:26:52 --> 00:26:54 too. I think the Silicon Valley guys would go
00:26:54 --> 00:26:57 for that. I think Europe might go for that. I
00:26:57 --> 00:26:59 think the entire world would like that. And I
00:26:59 --> 00:27:01 think that those are the design constraints
00:27:01 --> 00:27:03 that we at superintelligence.com and iQ Company have
00:27:03 --> 00:27:07 taken on. We can't only offer a solution that's much
00:27:07 --> 00:27:10 safer; it also has to show the promise of
00:27:10 --> 00:27:14 being more profitable and faster. It's very lucky,
00:27:14 --> 00:27:17 because if you think about it, it is faster actually
00:27:17 --> 00:27:20 to assemble a superintelligence from lots of
00:27:20 --> 00:27:22 existing components, from existing entities that are
00:27:22 --> 00:27:24 already built. You don't have to train new ones,
00:27:24 --> 00:27:26 so that's gonna be faster. It's gonna be more
00:27:26 --> 00:27:29 profitable. It's a lot less expensive than building
00:27:29 --> 00:27:31 huge data centers and, you know, spending hundreds
00:27:31 --> 00:27:35 of billions of dollars to try to train one big
00:27:35 --> 00:27:38 giant black box that's very dangerous. A better
00:27:38 --> 00:27:41 way is to take these existing things, figure out
00:27:41 --> 00:27:43 a way to hook them together so they can get to
00:27:43 --> 00:27:47 that level of intelligence at a lower cost, much
00:27:47 --> 00:27:50 more quickly, and in a much more transparent and
00:27:50 --> 00:27:53 safe way. So I think, if you look at the
00:27:53 --> 00:27:55 problem differently, sometimes it's possible
00:27:55 --> 00:27:57 to have your cake and eat it too. And that's
00:27:57 --> 00:27:59 the message that I'm trying to get to those tech
00:27:59 --> 00:28:00 leaders. Hey guys, there's another way to do
00:28:00 --> 00:28:03 it. It can make you lots of money and it can be
00:28:03 --> 00:28:06 safer than what you're currently doing. So on
00:28:06 --> 00:28:08 the subject of having your cake and eating it,
00:28:08 --> 00:28:12 when you're talking about intentionally designed
00:28:12 --> 00:28:15 intelligence, how do you explain that idea, that
00:28:15 --> 00:28:17 concept, to business leaders, many of whom might
00:28:17 --> 00:28:20 be listening today, trying to move fast with
00:28:20 --> 00:28:23 AI but also wanting the confidence that the
00:28:23 --> 00:28:26 systems they deploy will behave in predictable
00:28:26 --> 00:28:30 and beneficial ways? How do you
00:28:30 --> 00:28:34 get that message across? Well, I think there's
00:28:34 --> 00:28:38 the design piece and that's fairly well recognized,
00:28:38 --> 00:28:40 at least in software development and software
00:28:40 --> 00:28:43 quality. In a prior life, my first job out of
00:28:43 --> 00:28:46 grad school was IBM and I ended up writing a
00:28:46 --> 00:28:49 book there on software quality. I can distill
00:28:49 --> 00:28:52 that entire book to one sentence, which is, an
00:28:52 --> 00:28:54 ounce of prevention is worth a pound of cure.
00:28:55 --> 00:28:57 That's the field of software quality in a nutshell.
00:28:58 --> 00:29:02 What IBM found was that for every extra dollar
00:29:02 --> 00:29:05 you spent in designing
00:29:05 --> 00:29:08 things, thinking more carefully about the design,
00:29:08 --> 00:29:12 it saved as much as ten thousand dollars versus if
00:29:12 --> 00:29:14 you didn't do that, and then a bug came and it
00:29:14 --> 00:29:16 was shipped and you had to pull back the software
00:29:16 --> 00:29:19 and placate the customers and all that kind of
00:29:19 --> 00:29:22 stuff. And so there was this huge benefit to trying
00:29:22 --> 00:29:24 to prevent problems rather than fixing them later,
00:29:24 --> 00:29:28 and I think most tech leaders are aware of that.
00:29:28 --> 00:29:31 What they may not be aware of is that there's
00:29:31 --> 00:29:33 a way to do this with AI, because there's a certain
00:29:33 --> 00:29:36 paradigm, and I understand why it emerged. This
00:29:36 --> 00:29:38 is one of the benefits of being in the field
00:29:38 --> 00:29:41 so long: you've seen the trends. AI started out
00:29:41 --> 00:29:44 as symbolic AI, where you program in all the rules.
00:29:44 --> 00:29:47 In the eighties, they developed ways to do machine
00:29:47 --> 00:29:49 learning, where you don't have to program in the
00:29:49 --> 00:29:51 rules, it'll just learn them. The downside was
00:29:51 --> 00:29:54 you're not quite sure what it's learning. And
00:29:54 --> 00:29:56 then by the time we got to two thousand twelve,
00:29:57 --> 00:30:00 And certainly by 2022 when ChatGPT was released,
00:30:01 --> 00:30:04 the entire field had shifted towards, let's not
00:30:04 --> 00:30:07 program in the rules, let's just let them learn
00:30:07 --> 00:30:09 it because the GPUs and the processing power
00:30:09 --> 00:30:13 was so fast that that approach was actually feasible
00:30:13 --> 00:30:15 and you're getting these great results. And then
00:30:15 --> 00:30:16 at that point, people stopped thinking about
00:30:16 --> 00:30:19 all the other parts of AI, at least a lot of
00:30:19 --> 00:30:22 people did, and they thought all of AI was machine
00:30:22 --> 00:30:24 learning. It comes with a black box, that's just
00:30:24 --> 00:30:27 the way it is. That's not the way it is. There's
00:30:27 --> 00:30:29 plenty of AI systems that have been developed
00:30:29 --> 00:30:31 in years past and there's plenty of hybrid systems
00:30:31 --> 00:30:33 where you can build transparency into it. You
00:30:33 --> 00:30:36 have to design it in. And now as there's a move
00:30:36 --> 00:30:40 towards AI agents and reasoning systems, it's
00:30:40 --> 00:30:43 kind of funny for me to watch from afar to see
00:30:43 --> 00:30:46 the entire field shift back. It's like, oh, we
00:30:46 --> 00:30:48 don't have to just sort of have machine learning
00:30:48 --> 00:30:50 make this black box. These systems are gonna
00:30:50 --> 00:30:54 reason like people, yes, and so maybe that reasoning
00:30:54 --> 00:30:57 can be made transparent, and maybe you can have
00:30:57 --> 00:31:00 collective intelligence of multiple systems, which
00:31:00 --> 00:31:02 is an old idea that goes all the way back to
00:31:02 --> 00:31:04 Marvin Minsky. In the eighties, he wrote a book
00:31:04 --> 00:31:06 called The Society of Mind, which was basically
00:31:06 --> 00:31:10 saying we can have a really high level of intelligence
00:31:10 --> 00:31:13 emerging from a collection of much less intelligent
00:31:13 --> 00:31:15 things. I mean, he was one of the pioneers,
00:31:15 --> 00:31:20 one of these original scientists, who unfortunately,
00:31:20 --> 00:31:22 some of them, most of them, have now passed
00:31:22 --> 00:31:24 away. They actually thought through these things,
00:31:24 --> 00:31:26 and it's a matter of just rediscovering some
00:31:26 --> 00:31:30 of these ideas. And in the rush, I think a little
00:31:30 --> 00:31:33 bit of it has been forgotten. Yeah, I completely
00:31:33 --> 00:31:35 agree. And I think when we talk about super intelligence
00:31:35 --> 00:31:38 or planetary intelligence, many people become
00:31:38 --> 00:31:42 skeptical or assume, hey, it's way off, it's
00:31:42 --> 00:31:45 far in the distance, it'll never happen. And
00:31:45 --> 00:31:47 I would also argue that most people overestimate
00:31:47 --> 00:31:49 what they can do in a year and underestimate
00:31:49 --> 00:31:52 what they can achieve in 10 years. So from all
00:31:52 --> 00:31:55 your research, from all your experience, are
00:31:55 --> 00:31:58 there any signals that organizations or business
00:31:58 --> 00:32:00 leaders listening should be paying attention
00:32:00 --> 00:32:03 to right now? And how do you see these ideas
00:32:03 --> 00:32:06 influencing AI strategies over the next few years?
00:32:06 --> 00:32:09 And I appreciate I am asking you to look into
00:32:09 --> 00:32:13 a crystal ball of sorts, but what are you looking
00:32:13 --> 00:32:14 at and what do you recommend other people be
00:32:14 --> 00:32:17 looking for? I have a pretty clear, it may be
00:32:17 --> 00:32:19 right, it may be wrong, but it's very clear in
00:32:19 --> 00:32:23 my own mind, sort of idea of the sequence of
00:32:23 --> 00:32:24 development. So I can tell you the sequence that
00:32:24 --> 00:32:27 I see. The timing is always a little harder to
00:32:27 --> 00:32:31 know exactly when each stage is reached. But
00:32:31 --> 00:32:34 the sequence that I see has started with narrow
00:32:34 --> 00:32:37 AI system. So narrow AI just means AI that's
00:32:37 --> 00:32:40 good in a very narrow area. Think of chess, like
00:32:40 --> 00:32:43 you can beat the world champion at chess. And
00:32:43 --> 00:32:47 in some ways, we have already achieved super
00:32:47 --> 00:32:50 intelligence in those narrow fields because the
00:32:50 --> 00:32:53 best chess AI, you know, just wipes the floor
00:32:53 --> 00:32:55 with the best human. There's no contest.
00:32:55 --> 00:32:57 Nobody debates that there's even a possibility
00:32:57 --> 00:33:00 that the best human can beat the best chess
00:33:00 --> 00:33:04 AI. So in narrow areas, AI is already much more
00:33:04 --> 00:33:07 intelligent than humans. So then the next step
00:33:07 --> 00:33:10 on the road of development here is getting it
00:33:10 --> 00:33:13 to be general, not just in one narrow area, but
00:33:13 --> 00:33:15 how does it do everything that a human can? That's
00:33:15 --> 00:33:19 AGI. I don't think we stay at AGI very long,
00:33:19 --> 00:33:22 and I think that's part of the reason that people
00:33:22 --> 00:33:25 used to talk about AGI and now it's like almost
00:33:25 --> 00:33:28 a footnote, and they're talking about superintelligence,
00:33:28 --> 00:33:31 or artificial superintelligence, ASI, because
00:33:31 --> 00:33:33 they realize once you get to AGI, one of the
00:33:33 --> 00:33:36 things you can have the system do is improve
00:33:36 --> 00:33:39 itself. And when it's improving itself sort of
00:33:39 --> 00:33:42 at the speed of light, 24/7, never eating, never
00:33:42 --> 00:33:45 sleeping, and you've made, you know, a thousand
00:33:45 --> 00:33:48 copies of it, it's like having a thousand AI scientists
00:33:48 --> 00:33:51 as smart as the average human all improving the
00:33:51 --> 00:33:53 AI. It's not going to stay as smart as the average
00:33:53 --> 00:33:55 human very long. It will very quickly progress
00:33:55 --> 00:33:58 to super intelligence. And then from there, the
00:33:58 --> 00:34:01 next step that I see is sort of a network super
00:34:01 --> 00:34:04 intelligence. You have a variety of super intelligences.
00:34:04 --> 00:34:07 I think they network together. This is a natural
00:34:07 --> 00:34:10 thing. And then ultimately, you will have it
00:34:10 --> 00:34:12 at the planetary level. So that's kind of the
00:34:12 --> 00:34:15 sequence. that i see happening in terms of where
00:34:15 --> 00:34:19 we are on that sequence. If we zoom in that's
00:34:19 --> 00:34:22 kind of the very hundred thousand foot view if
00:34:22 --> 00:34:25 you zoom in and look in more detail just centered
00:34:25 --> 00:34:29 around this point in time i think two years ago.
00:34:29 --> 00:34:34 You saw just large language models. People were.
00:34:35 --> 00:34:38 aware of a agents but not really i want to a
00:34:38 --> 00:34:40 conference is not what's the hey what about a
00:34:40 --> 00:34:42 oh that's a good idea we should put that on our
00:34:42 --> 00:34:45 list then last year lot of people were beginning
00:34:45 --> 00:34:48 to do a agents a agent frameworks came out this
00:34:48 --> 00:34:51 year everybody's doing a agents and so the key
00:34:51 --> 00:34:55 thing about a agencies it basically marks a shift.
00:34:56 --> 00:35:00 From and this is pretty profound from a tool
00:35:00 --> 00:35:05 to an entity and. Something that happened recently,
00:35:06 --> 00:35:09 I'll comment on. So NVIDIA has an annual conference,
00:35:09 --> 00:35:12 GTC, which is their big developer conference,
00:35:12 --> 00:35:15 happened in Washington DC just a few weeks ago.
00:35:17 --> 00:35:20 And sort of like a third into that hour and a
00:35:20 --> 00:35:22 half presentation that Jensen Wang, the CEO gave,
00:35:23 --> 00:35:26 there was a little 30 seconds where he said,
00:35:26 --> 00:35:32 people are thinking of AI as a tool, but it's
00:35:32 --> 00:35:35 not a tool. It's workers, he actually
00:35:35 --> 00:35:38 said. That was the first time I heard, you know, a major leader
00:35:38 --> 00:35:42 like that on stage sort of say it's not a tool.
00:35:43 --> 00:35:45 It's essentially an entity, it's a worker, it's
00:35:45 --> 00:35:48 an intelligence. And that's very different, because
00:35:48 --> 00:35:51 never before in all of human history have we
00:35:51 --> 00:35:54 created entities. We've always created tools. That's
00:35:54 --> 00:35:57 what all of our experience is with, from the stone
00:35:57 --> 00:36:00 age on up, you know, arrowheads and hitting two
00:36:00 --> 00:36:02 rocks together and steam engines and locomotives.
00:36:02 --> 00:36:04 It's all been tools. And the thing about a tool
00:36:04 --> 00:36:06 is you're in control of the tool. You can turn
00:36:06 --> 00:36:09 it off and on. And the human is the intelligence.
00:36:09 --> 00:36:11 And this is a tool that the human uses. It may
00:36:11 --> 00:36:14 be very sophisticated like an airplane, but it's
00:36:14 --> 00:36:18 still a tool. Not anymore. And the leaders are
00:36:18 --> 00:36:21 now beginning to say this publicly. It's not
00:36:21 --> 00:36:24 just a tool. It's a worker. It's an entity. We
00:36:24 --> 00:36:26 have no experience with intelligent entities.
00:36:26 --> 00:36:29 And if you listen to Hinton, and I totally
00:36:29 --> 00:36:32 agree with this, there's that argument that once it gets
00:36:32 --> 00:36:35 to a certain level, it just gets faster, smarter and
00:36:35 --> 00:36:38 smarter and smarter. This is not just an entity,
00:36:38 --> 00:36:40 an intelligent entity, it's an intelligent entity
00:36:40 --> 00:36:43 that will rapidly become smarter than us. We have
00:36:43 --> 00:36:46 zero experience with that. So that's a big shift,
00:36:46 --> 00:36:49 to hear that acknowledged publicly. It kind of
00:36:49 --> 00:36:51 tells you where we are in that timeline. People
00:36:51 --> 00:36:55 are realizing that we're moving from tool to
00:36:55 --> 00:36:58 agent, you know, intelligent entity. And from there,
00:36:59 --> 00:37:01 to have an intelligent entity that's smarter than
00:37:01 --> 00:37:03 you is not that big a leap, even though it sounds
00:37:03 --> 00:37:06 kind of like science fiction. I agree, I have trouble
00:37:06 --> 00:37:08 sometimes believing that I'm not writing a
00:37:08 --> 00:37:11 science fiction novel, yet I know my logic tells
00:37:11 --> 00:37:15 me that this is happening. One thing I think
00:37:15 --> 00:37:18 we should highlight is there's also a cultural
00:37:18 --> 00:37:21 and political dimension to much of what we're
00:37:21 --> 00:37:24 talking about today, since collective intelligence
00:37:24 --> 00:37:28 requires such broad participation. How do you
00:37:28 --> 00:37:30 envision businesses, governments and citizens
00:37:30 --> 00:37:33 all contributing to the values that shape future
00:37:33 --> 00:37:36 intelligence systems and any risks that you see
00:37:36 --> 00:37:38 here if they don't? I was looking at my news
00:37:38 --> 00:37:44 feed today, around how AI bias is impacting women
00:37:44 --> 00:37:47 in the workplace, that there's a bias against women there.
00:37:47 --> 00:37:51 But what are you seeing? There definitely are
00:37:51 --> 00:37:58 biases. So the kind of basic rule that any first
00:37:58 --> 00:38:00 year computer scientist knows is garbage in,
00:38:00 --> 00:38:03 garbage out. And the flip side of that is good
00:38:03 --> 00:38:06 stuff in, good stuff out. People forget about
00:38:06 --> 00:38:10 that, but that's also true. And so the data that
00:38:10 --> 00:38:13 these systems are trained on has a huge effect.
00:38:13 --> 00:38:18 Now, I think early on, it was a situation where
00:38:18 --> 00:38:22 you had a small group of people at a frontier
00:38:22 --> 00:38:25 lab training the model, and they were choosing
00:38:25 --> 00:38:29 what data to put in and if you had a group of
00:38:29 --> 00:38:31 maybe well-intentioned white males that were
00:38:31 --> 00:38:34 just sort of oblivious because they were techno
00:38:34 --> 00:38:36 nerds or whatever to a lot of the social and
00:38:36 --> 00:38:38 political implications of what they're doing,
00:38:38 --> 00:38:41 they might select data that was sort of reflective
00:38:41 --> 00:38:44 of what they thought the world was like and it's
00:38:44 --> 00:38:46 not, right? The world's a very diverse place
00:38:46 --> 00:38:50 and you can get biases, or you can have just simple
00:38:50 --> 00:38:54 things like training an AI on court records going
00:38:54 --> 00:38:57 back in time. But you know, there were periods
00:38:57 --> 00:38:59 of time where there was rampant discrimination
00:38:59 --> 00:39:01 and everything and that will be reflected in
00:39:01 --> 00:39:03 the court records and the AI will learn that.
00:39:03 --> 00:39:07 So the data that you choose to train the AI on
00:39:07 --> 00:39:10 is really important. Now the good news in my
00:39:10 --> 00:39:15 view is that increasingly as these systems are
00:39:15 --> 00:39:18 used by more and more people you're automatically
00:39:18 --> 00:39:21 getting sort of a broad input from lots of people
00:39:21 --> 00:39:24 because all of the companies are trying to extract
00:39:24 --> 00:39:27 as much juice as they can out of every human.
00:39:27 --> 00:39:31 You know, if Neil says something different than
00:39:31 --> 00:39:33 what, you know, a hundred other, a thousand other people
00:39:33 --> 00:39:36 said, that's important. That's gonna be really
00:39:36 --> 00:39:38 important, because that's the unique information
00:39:38 --> 00:39:39 that you bring to the table, and the same with
00:39:39 --> 00:39:43 me, and the same with all of us. So I believe that
00:39:43 --> 00:39:47 over time, naturally, unless it's artificially
00:39:47 --> 00:39:50 constrained in some way, the systems will sort
00:39:50 --> 00:39:53 of inherently become more reflective. But I think
00:39:53 --> 00:39:55 there are designs that can make that much more
00:39:55 --> 00:39:59 likely, if it was an open-source design. If companies
00:39:59 --> 00:40:01 realize they can make a lot of money by allowing
00:40:01 --> 00:40:05 people to personalize their AI with their own
00:40:05 --> 00:40:08 expertise and the individual's own value system,
00:40:08 --> 00:40:11 then that could also be very helpful in terms
00:40:11 --> 00:40:13 of, you know, bringing more representative
00:40:13 --> 00:40:17 values in. And I think it's gotta be sort of localized
00:40:17 --> 00:40:20 by culture, by geography. I mean, different countries
00:40:20 --> 00:40:22 and cultures have different expectations of what's
00:40:22 --> 00:40:25 right and wrong. It's going to be complicated
00:40:25 --> 00:40:27 and messy, but you know, I don't think it will
00:40:27 --> 00:40:29 be any more complicated and messy than we already
00:40:29 --> 00:40:31 have with humans already. It's a complicated
00:40:31 --> 00:40:34 messy world. And there are a lot of problems, I
00:40:34 --> 00:40:38 think I'd be the first to admit. At the same time,
00:40:38 --> 00:40:40 we haven't killed each other completely. And
00:40:40 --> 00:40:43 that's really what I'm gunning for. No human
00:40:43 --> 00:40:44 extinction, right? Let's increase the odds of
00:40:44 --> 00:40:46 human survival. Then we'll figure out how to
00:40:46 --> 00:40:49 work out all these problems. I'll drink to that,
00:40:50 --> 00:40:52 my friend. And for everybody listening that would
00:40:52 --> 00:40:55 like to carry on this conversation, find out
00:40:55 --> 00:40:57 more about you and your work, maybe even get
00:40:57 --> 00:40:59 in touch with you or your team. Where would you
00:40:59 --> 00:41:04 like to point everyone listening today? So, our
00:41:04 --> 00:41:07 main website is superintelligence.com. Pretty
00:41:07 --> 00:41:10 easy to remember. And there are a lot of videos,
00:41:10 --> 00:41:12 three -minute videos that people can watch that
00:41:12 --> 00:41:15 are kind of educational. There's research papers.
00:41:15 --> 00:41:17 If you're an AI researcher, you can go all the
00:41:17 --> 00:41:19 way down to the technical patents, which we're
00:41:19 --> 00:41:21 sort of giving to the world on how to do some
00:41:21 --> 00:41:25 of these designs. And there's articles. So there's
00:41:25 --> 00:41:27 a variety of things. There's also links to contact.
00:41:28 --> 00:41:29 So that's probably where I would send folks.
00:41:30 --> 00:41:32 And I would also say, whether or not they go
00:41:32 --> 00:41:35 to the site, just remember, every time you're
00:41:35 --> 00:41:38 behaving online, that's data. People tend to
00:41:38 --> 00:41:40 think, oh, it doesn't matter what I do. No, it
00:41:40 --> 00:41:42 does matter. I can tell you from running Predict
00:41:42 --> 00:41:44 Wall Street and those millions of retail people,
00:41:44 --> 00:41:47 when you ask them, they say, I don't really know what I'm
00:41:47 --> 00:41:49 saying, I don't think this matters, my opinion
00:41:49 --> 00:41:52 about the stock. No, those little opinions they
00:41:52 --> 00:41:53 didn't think mattered beat the top guys on Wall
00:41:53 --> 00:41:56 Street. So what you do does matter for sure.
00:41:57 --> 00:41:59 Wow, and I think that is a powerful moment to
00:41:59 --> 00:42:02 end on. And I think just talking to you today,
00:42:02 --> 00:42:05 your infectious enthusiasm is a real breath of fresh
00:42:05 --> 00:42:09 air, with pragmatism and optimism thrown in as well.
00:42:09 --> 00:42:12 And I know bad news sells, and that's why
00:42:12 --> 00:42:14 we have so much bad news out there. We've got polarization
00:42:14 --> 00:42:18 online and we focus on the bad stuff, the garbage
00:42:18 --> 00:42:20 in, the garbage out. But I love how you flipped that
00:42:20 --> 00:42:23 around today and reminded everyone that good
00:42:23 --> 00:42:26 stuff in, good stuff out. And I think that is
00:42:26 --> 00:42:29 such a powerful takeaway and really needed right
00:42:29 --> 00:42:31 now. So thank you for joining me today. Neil,
00:42:32 --> 00:42:34 thank you very much for having me. So I hope
00:42:34 --> 00:42:36 this conversation leaves you thinking as much
00:42:36 --> 00:42:40 as it did for me because Craig's calm realism,
00:42:40 --> 00:42:43 his optimism, his pragmatism, and his insistence
00:42:43 --> 00:42:46 that good behavior from all of us becomes good
00:42:46 --> 00:42:50 behavior in our systems, certainly offer a refreshing
00:42:50 --> 00:42:53 way of looking at AGI, a way that is neither
00:42:53 --> 00:42:58 alarmist nor naive. I think it's so rare to hear
00:42:58 --> 00:43:01 someone speak so openly about risks and probabilities
00:43:01 --> 00:43:05 and the design choices that are shaping the future.
00:43:06 --> 00:43:08 And it's even rarer to hear that level of honesty
00:43:08 --> 00:43:12 paired with such a practical hope and optimism
00:43:12 --> 00:43:15 for what intelligent systems could become when
00:43:15 --> 00:43:19 they draw on the values of billions rather than
00:43:19 --> 00:43:22 just a few. So, if Craig's perspective sparked
00:43:22 --> 00:43:25 something in you or challenged a viewpoint you
00:43:25 --> 00:43:27 held, I'd love to hear your thoughts. Should
00:43:27 --> 00:43:31 future AI systems be designed collectively? Is
00:43:31 --> 00:43:34 it realistic? And how should your business prepare
00:43:34 --> 00:43:37 for a world where AI shifts from just being a
00:43:37 --> 00:43:40 tool to an intelligent actor in its own right?
00:43:41 --> 00:43:43 Please send me a message on LinkedIn or drop
00:43:43 --> 00:43:47 me a note on the website, which is just techtalksnetwork.com,
00:43:47 --> 00:43:50 and as always, thank you for lending me
00:43:50 --> 00:43:54 your ears of curiosity. So LinkedIn, X, Instagram,
00:43:54 --> 00:43:58 just @NeilCHughes. Tell me, what ideas or what
00:43:58 --> 00:44:01 concerns did today's discussion stir in you and
00:44:01 --> 00:44:04 where do you stand on the road towards a more
00:44:04 --> 00:44:07 human aligned intelligence? Let me know. I'll
00:44:07 --> 00:44:09 be back again very soon with another guest and
00:44:09 --> 00:44:13 we will continue to explore AI at work. Speak
00:44:13 --> 00:44:15 with you next time. Bye for now.