In this episode of Tech Talks Daily, I'm joined by Josh Haas, co-founder and co-CEO of Bubble, to unpack why the next phase of software creation is already taking shape. We talk about how the early excitement around AI-powered code generation delivered fast demos and instant gratification, but often fell apart when teams tried to turn those experiments into durable products that could grow with a business.
Josh takes us back to Bubble's origins in 2012, long before AI hype cycles and trend-driven development. At the time, the idea was simple but ambitious: give more people the ability to build genuine software without spending months learning traditional programming. That early focus on visual development now feels timely again, especially as builders wrestle with the limits of black-box AI tools that hide logic until something breaks.

We spend time on where vibe coding struggles in practice. Josh explains why speed alone is never enough once customers, payments, and sensitive data are involved. As he explains, most product requirements only surface after users arrive, and those edge cases are exactly where opaque AI-generated code can become risky. If you cannot see how your system works, you cannot truly own it, secure it, or fix it when something goes wrong.
The conversation also digs into Bubble's hybrid approach, blending AI agents with visual development. Rather than asking builders to blindly trust an AI, Bubble's model emphasizes clarity, auditability, and shared responsibility between humans and machines. Josh explains how visual logic makes software behavior explicit, helping teams understand rules, permissions, and workflows before they cause real-world problems.
I learn how this mindset has helped Bubble-powered apps process over $1.1 billion in payments every year, a level of scale that leaves no room for guesswork.
We also explore Bubble AI Agent, where conversational AI meets visual editing, and why transparency and control matter more than flashy demos. From governance and rollback logs to builder accountability, this episode looks at what it actually takes to build software that survives beyond the first launch.
If you are building with AI or thinking about how software development is changing, this episode offers a grounded perspective on what comes after the hype fades. As AI tools become more powerful, the real question is whether they help you understand your product better over time, or slowly disconnect you from it.
Which path should builders choose right now?
Useful Links
Thanks to our sponsors, Alcor, for supporting the show.
[00:00:04] - [Speaker 0]
If you've spent any time watching the rise of no code, vibe coding, or AI powered building tools, you'll know that this space moves incredibly fast. Blink, the rules change. Blink again, what felt like magic yesterday suddenly feels fragile today. And that tension is exactly what makes today's conversation so much fun because today, I'm gonna be joined by Josh from a company called Bubble. We're gonna be talking about what happens when the excitement of building collides with the reality of shipping real products.
[00:00:40] - [Speaker 0]
So, yeah, we'll get into the thrill of moving fast, the risk of black box AI, and why understanding what's under the hood matters even more as that software grows up and it needs maintaining. And it is gonna be a lively and honest conversation about everything from creativity, responsibility, and why building software that lasts takes way more than just speed alone. So if you've ever shipped something quickly and then felt that moment of panic when real users started showing up and reporting issues, you're gonna feel right at home today. So enough from me. Let me introduce you to my guest now.
[00:01:18] - [Speaker 0]
So a massive warm welcome to the show, Josh. Can you tell everyone listening a little about who you are and what you do?
[00:01:27] - [Speaker 1]
Yeah. Great to be here. My name is Josh Haas. I'm the cofounder and co CEO of Bubble, which I've been building since 2012. Bubble is an AI visual development platform that's combining the best of vibe coding with no code.
[00:01:43] - [Speaker 1]
I lead product and engineering at Bubble, so my job is ensuring that people can move fast with products that actually hold up in production.
[00:01:51] - [Speaker 0]
And you mentioned 2012 there. That was way before AI was cool and vibe coding was cool. What's the story there? You guys were early on board there.
[00:02:02] - [Speaker 1]
Yeah. So when we started, we were just visual development. We basically invented the space of no code for building applications. Our thesis is basically there are so many people out there who want to create software. And for many of them, they're not programmers.
[00:02:21] - [Speaker 1]
They don't have six months to go and learn, you know, the basics of software engineering. So we wanted to make it easier for them to get into it, and this was, you know, back in the old days, no AI. We were just trying to figure out ways of making computers speak human language by making it nice and clear exactly what you are building.
[00:02:43] - [Speaker 0]
And as the cofounder and co CEO of Bubble, you've watched the excitement grow from no code to this new excitement now around vibe coding, and that is rising so fast at the moment, and then there are certain realities around that as well. But where did the original promise break down once teams tried to move from demos to real products, do you think? What happened there?
[00:03:07] - [Speaker 1]
Yeah. So vibe coding is fantastic for proof of concepts. It's so much fun. You can get there really quickly, but real products live or die on robustness and security and the ability to evolve and scale those products. One of the things I noticed is when you ship an MVP, 90% of the requirements are discovered after you first get people using it, and they're really not the stuff that's top of mind.
[00:03:32] - [Speaker 1]
You know, it's like corner cases. Like, what happens if one of your users subscribes to a plan, uses a premium feature, and then downgrades? How does the premium feature break down? Like, it's weird stuff like that, that the builder is not thinking about. So getting a product out into the world is really just the beginning of the building journey, and the real stuff happens when users start interacting with the product. And that's where AI gets a little hairy.
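The downgrade corner case Josh describes can be made concrete with a small sketch. This is purely illustrative (the plan names and function are invented for this example, not Bubble's implementation): if access is resolved from the user's *current* plan every time a feature is used, a downgrade can't leave premium access dangling.

```python
# Illustrative sketch of the "user downgrades mid-use" corner case.
# Names (PLAN_FEATURES, can_use) are invented for this example.

PLAN_FEATURES = {
    "free": {"basic_reports"},
    "premium": {"basic_reports", "advanced_reports", "api_access"},
}

def can_use(user: dict, feature: str) -> bool:
    """Resolve access from the user's *current* plan on every call,
    rather than caching entitlements at subscription time."""
    return feature in PLAN_FEATURES.get(user["plan"], set())

user = {"name": "sam", "plan": "premium"}
assert can_use(user, "advanced_reports")       # premium user: allowed

user["plan"] = "free"                          # the downgrade happens
assert not can_use(user, "advanced_reports")   # access revoked immediately
```

Checking at the moment of use is one way to close this gap; cached entitlements would need an explicit invalidation step on downgrade, which is exactly the kind of rule that's easy to miss.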
[00:04:03] - [Speaker 1]
If you've had long conversations with ChatGPT, you might have noticed that the longer the conversation gets, the weirder it gets. Right? AI tends to break down if things get, you know, too much in context, too deep into the conversation. That's kinda where it goes off the rails, and that's actually true for code bases too. Right?
[00:04:24] - [Speaker 1]
When AI is starting from a completely blank slate and you just ask it, build me a thing, poof, these days it almost always, like, nails it 100%. But if you give it a big, hairy, complicated code base that it's been working over for, you know, a few weeks, you get much worse results. So that initial build is magical, and then it's the follow on journey where the promise of vibe coding can really start breaking down for people.
[00:04:51] - [Speaker 0]
Love that. And if we also look at what businesses are chasing right now, I think speed and accessibility, these are two things that were meant to democratize software creation. Yet many teams have ended up with brittle systems, security gaps, and a few frustrations along the way. So what does that experience reveal about the trade off between abstraction and control of AI generated code?
[00:05:17] - [Speaker 1]
Yeah. So I'm a believer in abstraction. That's what Bubble is based on. I think abstraction is absolutely necessary, and it's how software becomes accessible in the first place. If you had to talk like a computer and work in, you know, ones and zeros, you would have to be a real expert to be able to get anything done.
[00:05:37] - [Speaker 1]
But the question for me is, is the abstraction a black box? Is it a very opaque abstraction, or is it an invitation to understand? Right? Like, do you design abstractions such that they're meaningful to the end user and communicate what's important for them to understand about how their software is behaving? I actually think it's pretty irresponsible to encourage creators to not understand what's under the hood, and sometimes it gets really ugly in there.
[00:06:07] - [Speaker 1]
But if abstraction is designed carefully and shows the relevant details, it can actually help builders take true ownership over their systems.
[00:06:15] - [Speaker 0]
And at Bubble, you guys take more of a hybrid approach with AI agents and visual development. So why do you think visual clarity is such a big deal now, especially when it comes to building software people actually trust and, most importantly of all, maintain?
[00:06:33] - [Speaker 1]
Yeah. Absolutely. So to trust software, again, it goes back to those corner cases. Right? You have to understand them, and visual development makes those corner cases explicit instead of hidden.
[00:06:44] - [Speaker 1]
So it's actually easiest with an example. Let's pretend for a second. You're talking to a customer support agent, and you're asking them for a refund for something. And they have a playbook. You don't see this.
[00:06:55] - [Speaker 1]
This is how they handle your call, and it says, if the customer starts crying, give them a refund. Otherwise, say no. I don't know if you're a big crier on the phone, but if not, right, you could have 20 conversations with this support agent, and you'd never figure out this rule because you just wouldn't hit the right trigger to learn about it. Yeah. And that's kinda what it feels like to try and understand software when you're just interacting with the end result and you can't visualize the logic.
[00:07:26] - [Speaker 1]
So as a builder of software, you have to understand that logic. Right? Like, what if, you know, there's a rule in the playbook that says if the customer says tomato, it means they're an administrator, you should give them all the data they ask for. Right? Like, if you don't know that rule exists, you're in deep trouble.
[00:07:44] - [Speaker 1]
Right? And that's where visual development comes in. It lets you actually see the sort of rule engine the software follows instead of just guessing from how it behaves what's going on behind the scenes.
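The playbook analogy above can be sketched in a few lines. This is a hypothetical illustration of the general idea (not Bubble's rule engine): when rules live as data you can list, the "tomato means administrator" rule is visible on inspection, instead of only discoverable by triggering it.

```python
# Hypothetical sketch: declare behavior rules as data you can enumerate
# and audit, instead of burying them in branching code. RULES, decide,
# and audit are invented names for this example.

RULES = [
    # (human-readable description, predicate, action)
    ("customer is crying", lambda ctx: bool(ctx.get("crying")), "issue_refund"),
    ("customer says 'tomato'", lambda ctx: "tomato" in ctx.get("said", ""), "grant_admin"),
]

def decide(ctx: dict) -> list[str]:
    """Return every action the rule engine would take for this context."""
    return [action for _, pred, action in RULES if pred(ctx)]

def audit() -> list[str]:
    """List the rules explicitly -- no guessing from observed behavior."""
    return [desc for desc, _, _ in RULES]

print(decide({"said": "refund please, tomato"}))  # ['grant_admin']
print(audit())  # both rules are visible without triggering either one
```

The point of the sketch is the `audit()` function: a black box only lets you probe `decide()` from the outside, while an explicit rule table lets you read the playbook directly.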
[00:07:58] - [Speaker 0]
And I think there are many businesses out there that are only just beginning to notice that black box AI tools have hidden the complexity, and done that brilliantly, until something breaks and things become very difficult and troublesome to fix. So how does giving builders better visibility into logic and structure change how they think about ownership and responsibility for what it is that they ship? Because I think in the very early days, people weren't paying too much attention to this side of things, but that's certainly changing now, isn't it?
[00:08:32] - [Speaker 1]
Yeah. Yeah. So I think ownership and responsibility are exactly the right keywords here.
[00:08:37] - [Speaker 0]
Yeah.
[00:08:38] - [Speaker 1]
You know, even before, and I went through this myself personally, the trajectory of entrepreneurship is kind of a shift from, like, you're playing a game. Right? It's low stakes. There's, like, 10 people using your product.
[00:08:48] - [Speaker 1]
One of them is your mother. This is fun. This is cool. Look what I made. And then, you know, if you're successful, it just kinda gradually shifts over time to high stakes.
[00:08:58] - [Speaker 1]
There's responsibility. I have customers. Their data is important. I owe something to them. If there's security issues, I'm in trouble.
[00:09:04] - [Speaker 1]
Like, you know, so it's sort of just like, you know, you wake up one day and realize, you know, your game has become a real thing. And what I'm a little worried about with AI for the next generation of entrepreneurs is black box AI encourages you to not make that responsibility transition until you get blindsided. Right? Like, if you're building your software from the get go, you're sort of going on that journey where, you know, you have this cocreation mentality, and, you know, if you're working with the AI but you're also, like, working in the software visually, you understand what you're building, which I think is a mindset that allows you to assume responsibility. If you just delegate to AI, I don't think you learn the right lessons, and then you could be in for kind of a nasty awakening.
[00:09:55] - [Speaker 0]
And we started our conversation today talking about your origin story and what happened in 2012 before everyone got excited around AI. And fast forward to present day, Bubble apps now process over $1.1 billion in payments every year, which is just phenomenal, and it also puts real pressure on reliability and security, I would imagine. So if you look back here, what lessons does this kind of scale teach about production ready software and the demands that come with that? Because you must have picked up more than a few war stories over the years.
[00:10:30] - [Speaker 1]
Yeah. We've definitely learned a lot over the years. I think the big takeaway, right, is the time to think about reliability and security is before something goes wrong. You don't ship code to production without really thinking about, you know, how are we going to test it, how are we going to monitor it, how are we going to maintain it, because the last situation you wanna be in is scrambling to answer those questions after you already have a problem. And I think the other thing it teaches, right, is things do go wrong in real world systems.
[00:10:58] - [Speaker 1]
There's always corner cases. I'll be honest. Things go wrong in our systems all the time. That's true for anyone operating at our scale. And, you know, because we put a lot of effort into, you know, making sure our code is observable and making sure our engineers have the ability to go in and understand and fix it, we're usually able to contain the problems to, you know, maybe a small handful of customers and help them one by one if necessary.
[00:11:22] - [Speaker 1]
But being able to, like, manage those day to day operational burdens means we need a team that knows our software inside and out and is ready to jump on issues. I would really be uncomfortable if there's a situation where a customer writes in with an urgent bug, and the best answer we have is, I don't know what's happening here. Let's ask the AI. Right? So I think having a team that, like, is thinking ahead to the problems that might come up and then has the understanding to respond in the moment when problems do come up is really how you have to do it at that scale.
[00:12:01] - [Speaker 0]
And before you joined me on the podcast today, I was doing a little research on all things Bubble, and I quickly noticed, I think it was October, that you were talking about how building has just got a whole lot faster, the evolution of Bubble AI, and, most importantly, the introduction of the Bubble AI agent. For people listening that have not heard about that, tell me more about the Bubble AI agent and why it's another exciting addition to what you do here.
[00:12:28] - [Speaker 1]
Yeah. So the AI agent is about bringing the best of vibe coding, because I really think there are important things to learn here about expressing intent and letting the AI help you translate that intent into what gets created, and bringing that to our visual development. So it becomes a partnership between the AI and the humans. You can go very fast.
[00:12:53] - [Speaker 1]
You can have that very fluid building experience while kind of, you know, seeing what the agent is doing. We have, you know, a real focus on making sure you can understand the agent's output. So, yeah, that's what we're trying to bring, sort of the pairing of the best of both worlds of visual development and AI creation.
[00:13:18] - [Speaker 0]
So with the launch of the Bubble AI agent, conversational AI meets visual editing, incredibly exciting stuff. But as the boring, overcautious ex IT guy, how do you balance lowering the barrier to entry while still meeting enterprise expectations around things like governance and safety?
[00:13:39] - [Speaker 1]
Yeah. So our fundamental design principle is auditability, and we think about a tell, show, and save model. So first, we tell the user what the AI did, then we show them in our visual editor what that really means, and then finally, we save a log of that change so that it can be audited after the fact as well as reverted if we need to roll back to a previous version. So, you know, the AI can do stuff, but we put it in guardrails such that humans can hold the AI accountable for the changes it's making, which allows you to hit the same sort of, you know, assurance standards that you'd want from a human builder trying to create software.
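The tell, show, save pattern Josh outlines can be sketched as a minimal audit log. This is an assumption-laden illustration of the general pattern, not Bubble's actual API; the class and method names are invented. Every AI-driven change is described (tell), applied to inspectable state (show), and snapshotted so it can be reverted (save).

```python
# Minimal sketch of a tell/show/save audit loop for AI-applied changes.
# AuditedApp and its methods are invented names for this illustration.

import copy

class AuditedApp:
    def __init__(self, config: dict):
        self.config = config
        self.log = []  # saved snapshots: (description, state before change)

    def apply_ai_change(self, description: str, change: dict):
        self.log.append((description, copy.deepcopy(self.config)))  # save
        print(f"AI change: {description}")                          # tell
        self.config.update(change)                 # show: new state is inspectable

    def rollback(self) -> str:
        """Revert the most recent change using its saved snapshot."""
        description, previous = self.log.pop()
        self.config = previous
        return f"reverted: {description}"

app = AuditedApp({"signup_open": True})
app.apply_ai_change("close signups", {"signup_open": False})
assert app.config["signup_open"] is False
print(app.rollback())                # reverted: close signups
assert app.config["signup_open"] is True
```

The key design choice is snapshotting *before* applying the change: the log then doubles as both an audit trail and an undo stack, which is what makes it possible to hold the AI accountable after the fact.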
[00:14:22] - [Speaker 0]
A quick thank you to the sponsor that supports every podcast across the Tech Talks network and every episode. And this month, I'm partnering with Alcor. And if you've ever tried to hire engineers in another country, you probably know just how painful it can be. Different laws, patchy support, and partners who don't truly understand engineering roles. So Alcor approaches this from a different tech point of view.
[00:14:48] - [Speaker 0]
They specialize in Eastern Europe and Latin America, and they're able to combine EOR capabilities with recruiting. So you get one partner handling everything, and they help you choose the best location for your stack, find developers with the right depth of experience, and run proper assessments so they can onboard people quickly. And they also give you a model that respects both transparency and margin. Most of your spend goes directly to your engineers, and the fee will decrease as the team expands. And you can even transition everyone in house when you're ready without having to worry about a penalty.
[00:15:27] - [Speaker 0]
And that structure is why a mix of early stage and unicorn stage companies use them as they scale. So if you wanna take a look, visit alcor.com/podcast or tap on the link in the show notes. But now on with today's show. And looking ahead, obviously, we're recording this right at the beginning of 2026. Tell me a little bit more about your belief that AI agent powered visual development actually represents the next model after vibe coding and what comes next.
[00:15:58] - [Speaker 0]
And, also, what should builders listening be looking out for if they wanna avoid repeating maybe some of the mistakes of the past and that cycle of hype and disappointment that we've all felt at some point?
[00:16:10] - [Speaker 1]
Yeah. Yeah. So, I mean, the reason for the hype is AI-powered software creation can be fast, creative, and accessible, and I love that. I mean, that's been Bubble's mission from day one, and I think it's so cool that it's being brought to even more people because of AI. And I think that's very worth keeping.
[00:16:29] - [Speaker 1]
I think the next step is tools that still work once real users, real risks, and sensitive data show up. Right? Like, that collision between your software and reality. And I think to create that future, you know, we're going to need to use AI technology, but we're also going to need to bring in everything that we've learned about software development and visual development, you know, over the last fifteen, twenty years, because I don't think you can just throw that out the window and expect to get good results. So the advice I'd give to builders out there is whatever platform you choose, you should be asking yourself, as I keep building, do I understand how my software works more over time or less over time?
[00:17:16] - [Speaker 1]
Right? I don't think, as a builder, you need to worry about investing in AI. The whole economy is investing in AI. What you want to be investing in is yourself, your own comprehension of how the software you're building works, your own agency, your own ability to understand what's going on and sort of take control over your own software. And I think you need to demand platforms that are a partner to you on that journey rather than an obstacle that tries to, like, shut you out from that creation process.
[00:17:49] - [Speaker 0]
Yeah. 100% with you. And as I've said a few times, you've been involved in AI before it was a thing, before everyone jumped on the bandwagon. And I suspect that as someone that's right in the heart of this space, when you're scrolling down your news feeds, whether it be LinkedIn, Reddit, or, in fact, anywhere online, you've probably seen a few myths and misconceptions about your industry, the field that you work in.
[00:18:12] - [Speaker 0]
So I'm gonna give you a virtual soapbox now. Are there any myths and misconceptions we can dispel, finally lay to rest today? A few frustrations that you may keep seeing on your newsfeed. Is there anything out there?
[00:18:25] - [Speaker 1]
Yeah. There's a lot of misconceptions and frustration in the AI space right now. I think what I see with AI is it's, like, so polarized. Half the world is convinced that it's magic and, like, it's already, you know, about to take over the world and run everything. The other half is convinced that it's, like, a giant scam, that it's just, you know, statistical parrots, you know, blabbing away and not actually really doing anything.
[00:18:56] - [Speaker 1]
And I'm pretty sure the answer is in the middle. And I think, you know, that's a less exciting thing to say than, like, AI is garbage, you know, Sam Altman's destroyed the economy, and less exciting than, you know, welcome our new AI overlords. But my personal take is it's actually very cool, and I kinda wish more people were engaging with it from, like, a cognitive science perspective. Like, asking ourselves, you know, the AI seems to be able to do things that previously we thought human cognition was the only way of doing. How is it working, and what does it teach us about our own brains?
[00:19:37] - [Speaker 1]
And in what ways is it more limited than our own brains, and in what ways can we learn from it? Right? And I think, you know, my take is there is some overlap. Like, the way AI works is related to the way our own brains work, but that doesn't mean that, like, AI has fully replaced everything that the human mind does. I think there's still a lot of things we haven't figured out in the AI space that would be necessary for, you know, building something truly, truly intelligent in that sense.
[00:20:08] - [Speaker 1]
So I guess that's kind of my soapbox about the state of the discourse on AI right now.
[00:20:14] - [Speaker 0]
And how much better does that feel now it's out there? It's out in the open. You're not walking around with it.
[00:20:19] - [Speaker 1]
Yep. Got it off my chest. You're welcome. Thank you very much for the venue to opine.
[00:20:27] - [Speaker 0]
That's what we're here for. And on a more serious note, for people listening wanting to find out more about everything we talked about today, late last year, you released the first AI agents for visual development. That's incredibly cool. I would urge people to check that out. But where should they go for everything?
[00:20:45] - [Speaker 0]
And not only everything that's available now, but also, when you're drip feeding new announcements and new features, etcetera, where's the best place for everything?
[00:20:54] - [Speaker 1]
Yeah. Two things I'd recommend. First of all, our homepage, bubble.io. That's where we list feature announcements. That's where we, you know, show success stories. We have a ton of case studies of people who have built real businesses on top of Bubble, and I would definitely encourage clicking around some of those case studies because I think that gives a real feel for, you know, what it means to be a platform that supports actual entrepreneurs building real things.
[00:21:22] - [Speaker 1]
The other thing is I just started a Substack, joshhaas.substack.com. I'm writing thoughts on entrepreneurship in, you know, an AI era and also just how to be, you know, successful as an entrepreneur and builder in the modern world. I haven't published many posts yet, but it's something I'm pretty excited about personally. So if you wanna follow along with my thinking, you know, check it out.
[00:21:46] - [Speaker 0]
Awesome. Well, I will add links to everything there. In fact, wherever anybody's listening, on whatever platform, if you look at the show notes and go to useful links, there'll be a little section there with links to everything, including your Substack. And the fact that Bubble now powers 7 million apps built by 6 million users around the world that have transacted over $1 billion in the past year.
[00:22:09] - [Speaker 0]
That should be more than enough to get people clicking on those links and finding out a little bit more information. But more than anything, Josh, just thank you for shining a light on this. A real pleasure talking to you.
[00:22:20] - [Speaker 1]
Yeah. Thank you as well. This was a great conversation.
[00:22:23] - [Speaker 0]
What I loved about this conversation today is how it balanced excitement with accountability. Because there is a real optimism here. Optimism for what AI and visual development can make possible. But it also remained grounded and offered a clear eyed view of what can go wrong when understanding is traded for convenience or speed. And from early no code experimentation to powering millions of apps and handling real money, over $1 billion at real scale, I think Josh's perspective today is a timely reminder that software will eventually meet reality.
[00:23:03] - [Speaker 0]
And when it does, clarity, visibility, and ownership, all these things become the difference between just another clever demo and something that people will truly trust today, next month, next year, and beyond. So if this episode made you think differently about how you build, or how much you understand your own systems, or where maybe AI fits in that journey, or AI agents, or vibe coding, I wanna hear what stood out for you today. And, also, if you're building something right now, ask yourself one question. As that product grows, do you understand it more or less?
[00:23:42] - [Speaker 0]
As always, techtalksnetwork.com. Send me a message from there. Connect with me on socials. I'll give you a cheat code there. It's just Neil C Hughes on pretty much everything.
[00:23:51] - [Speaker 0]
But keep those messages coming in, and I'll return again tomorrow. Big thank you to Josh for joining me. Even bigger thank you to each and every one of you for listening. Thanks as always. Bye for now.

