Throughout our conversation, Craig explains why current alignment methods often rely on narrow viewpoints, creating both ethical and technical blind spots. He shares his belief that the values guiding future intelligence should come from millions of people across cultures, rather than from a handful of researchers writing a constitution behind closed doors. Drawing on his work at Predict Wall Street, he illustrates how collective intelligence can outperform experts, why diverse viewpoints matter, and how these lessons shape the architecture he believes is needed for safe AGI and the superintelligent systems that follow. His clarity on the difference between tools and entities, and on how quickly AI is shifting into the latter category, offers a grounding moment for anyone trying to navigate what comes next.
This episode moves beyond fear and hype. Craig talks openly about risk, but he also brings optimism about the potential for systems that are safer, faster to build, less costly, and more reflective of humanity. For leaders wondering how to prepare their organisations, he shares which signals to watch, why transparency and design matter, and how a more democratic approach to intelligence could shift the odds toward a better outcome. If you want a clear, thoughtful look at the road ahead for AGI, superintelligence, and the role humans still play in shaping both, you will find a lot to chew on here.
Listeners wanting to learn more can explore superintelligence.com, where Craig and the iQ Company team share research, videos, papers, and ways to get involved. What part of this conversation sparks your own questions about the future we are building together?
Sponsored by NordLayer:
Get the exclusive Black Friday offer: 28% off NordLayer yearly plans with the coupon code: techdaily-28. Valid until December 10th, 2025. Try it risk-free with a 14-day money-back guarantee.