Mike brings a deep, practical perspective on the evolution of AI inside complex organizations. He unpacks how AI agents are moving well beyond basic chatbots and integrating into real business workflows, acting as teammates that can reason, adapt, and even collaborate with other agents. We dig into examples like Klarna's workforce transformation and examine how this shift could play out across customer service, internal operations, and software development.
We also look at what’s fueling the boom in open source AI and how companies are navigating the balance between transparency, IP protection, and regulatory readiness. Mike shares why some financial services firms are turning to in-house fine-tuned models for greater control, and how open-weight and fully open-source models are starting to gain real ground.
Another key theme is the momentum behind small language models. Mike explains why bigger isn’t always better—especially when it comes to data privacy, edge deployment, and resource efficiency. He outlines where SLMs can outperform their larger counterparts and what that means for companies optimizing for security and speed rather than brute force compute.
We also discuss Thoughtworks' forthcoming global survey, which reveals a growing divide in generative AI adoption. While mature players are building in bias detection and robust compliance frameworks, newer entrants are leaning toward fast operational gains and interpretability. This gap is shaping how GenAI projects are prioritized across industries and geographies, and Mike offers his take on how leaders can navigate both speed and safety.
So, what role will explainability, regulation, and open ecosystems play in shaping the AI tools of tomorrow, and what should business and tech leaders be planning for now? Let's find out in this wide-ranging conversation with Mike from Thoughtworks.

