3519: How Verdent AI is Building the Next Generation AI Coding Agents.
Tech Talks Daily · December 14, 2025
36:43 · 26.01 MB

In this episode of Tech Talks Daily, I sit down with Yuyu Zhang to unpack a shift that many developers can feel but struggle to articulate.

Yuyu's journey spans academic research at Georgia Tech, building recommendation systems that power TikTok and Douyin at global scale, and leading the Seed-Coder project at ByteDance, which reached state-of-the-art performance among open source code models earlier this year.

Today, he is part of Codeck, where the focus has moved beyond AI assistance toward autonomous coding agents that can plan, execute, and verify real engineering work.

Our conversation begins with a simple but revealing observation. Most AI coding tools still behave like smarter autocomplete. They help you type faster, but they do not own the work.

Yuyu explains why that distinction matters, especially for teams dealing with complex systems, tight deadlines, and constant interruptions. Autonomy, in his view, is not about replacing engineers. It is about giving them back their flow.

We explore Verdent, Codeck's autonomous coding agent, and Verdent Deck, the desktop environment designed to coordinate multiple agents in parallel. Instead of one AI reacting line by line inside an editor, these agents operate at the task level.

They plan work with the developer upfront, execute independently in safe environments, and validate their output before handing anything back. The result feels less like using a tool and more like managing a small engineering team.

Yuyu shares how parallel agents change both speed and predictability. One agent can implement a feature, another can write tests, and another can investigate logs, all without stepping on each other. Just as important, he walks through the safeguards that keep humans in control.

Explicit planning, permission boundaries, sandboxed execution, and clear, reviewable diffs are all designed to address the very real concerns engineering leaders have about letting autonomous systems near production code.

The discussion also turns personal. Having worked on some of the highest-scale systems in the world, Yuyu reflects on why developers lose momentum. It is rarely about raw ability. It is about constant context switching. His goal with Verdent is to preserve mental focus by offloading interruptions and letting engineers return to work with clarity rather than cognitive fatigue.

We close by looking ahead. The definition of a "good developer" is changing, just as it has many times before. AI is not ending programming. It is reshaping it, pushing human creativity, judgment, and design thinking to the foreground while machines handle the repetitive churn.

If autonomous coding agents are becoming colleagues rather than helpers, how comfortable are you with that future, and what would you want to stay firmly in human hands?

[00:00:04] Welcome back gang, in today's episode I'm speaking with Yuyu Zhang, co-founder of Verdent and a former AI researcher at ByteDance, where he worked on large-scale systems powering TikTok and advanced code models like Seed-Coder. But today he's going to join me to explore the shift from traditional coding assistants to fully autonomous coding agents. What does this mean for the future of software development?

[00:00:34] Well we're going to take a look at everything from task level delegation to AI that can plan, execute and validate its own work. And our conversation will also look at how engineering workflows are being redefined in real time. But before I get my guest on today I want to give a quick thank you to my friends at Denodo.

[00:00:55] They're playing a big part in supporting this show, because one of the questions I hear more and more from listeners on this podcast is why does AI succeed, or why does it fail? Because let's be honest, AI is moving fast but success is often still elusive. Now most projects fail not because of the AI but because the data foundation isn't ready. This is why organisations are increasingly turning to Denodo.

[00:01:24] Denodo delivers trustworthy and AI ready data without the need to copy it everywhere. Essentially you can optimise your lake house, accelerate agentic AI and build data products that finally make self-service real and achievable. And with a powerful partner ecosystem teams get to value even faster.

[00:01:47] So if you're ready to understand why your AI projects fail and how to succeed with AI, simply visit Denodo.com and take control of your data world. So a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do? Yeah, hi everyone. My name is Yu Yu.

[00:02:09] I got my PhD in computer science from the Georgia Institute of Technology, if you know that school in the United States. And I joined ByteDance after that as a research scientist. My early work there focused on recommender systems for TikTok, where I contributed to quite a lot of launches for TikTok's recommendation models.

[00:02:33] And starting in, I think, the middle of 2023, I shifted my focus to projects related to large language models. And I was the project lead for Seed-Coder, an open source code model that we trained entirely from scratch, from pre-training all the way through post-training. And we released that model earlier this year. And at that time, it achieved state-of-the-art performance among models of a similar size.

[00:03:02] And so through that experience, I got a front-row seat to see how quickly AI coding capabilities were advancing. And I became increasingly convinced that the potential of coding agents is super high. And that's ultimately what led me to leave ByteDance and co-found Verdent AI, where we're fully focused on building next-generation coding agents. Incredibly cool.

[00:03:32] For anybody listening who's hearing about Verdent AI for the very first time, let's start with the basics. What problem in modern software development convinced you that engineers needed something beyond today's AI coding assistants that people know? And how would you describe your mission in simple terms of who you're helping and what you want to achieve here? Yeah, great question. So, I just want to quote from Jensen Huang's recent keynote speech at an NVIDIA GTC conference.

[00:04:01] And he said that AI is not just a tool. It's a worker. Which I strongly agree with. I think today's AI coding assistants are still designed to be tools. And my personal belief is that fully autonomous coding agents are a precondition for AGI. Basically, it's AGI in the digital world. We have to implement AGI there first, and then we can bring it to the real world.

[00:04:32] That's my belief. But I think we are still pretty far from achieving that goal. And at Verdent, we believe coding agents represent an entirely new paradigm in software development. And today, we are still at a very early stage. Lots of things in this field are being actively explored, like agent tooling, agent-human interaction models, and even cross-agent collaboration. So, the potential is just enormous.

[00:05:02] As coding agents become more and more powerful, we expect that the entire software industry can be reshaped. And this will show up in every part of the software development lifecycle. So, if you look across the trillion-dollar software industry today, from IDEs to dev tools to database companies, cloud providers, every part of the stack is fundamentally, I think, human-centric.

[00:05:31] And I think agents will just force us to rethink all of these. Like, testing agents will automatically generate, validate, and fix tests autonomously. And DevOps agents will plan, deploy, and monitor systems. So, yeah, if we look at the SDLC, including product design, debugging, code review, deployment, CI/CD,

[00:05:57] each of these will be re-architected and reshaped around agent-first workflows. And so, today's AI coding agent products, including Claude Code and Codex, maybe people are familiar with these famous products. Yeah. This is from my personal perspective, okay. They feel similar to the moment when GitHub Copilot first appeared.

[00:06:25] They are incredibly, incredibly useful, but they are not yet transformational. Yeah. So, I believe that the Cursor moment for the agent era has not arrived yet. And Verdent is working toward that. So, toward a world where AI is not just coding tools, but a team of autonomous collaborators, your colleagues, managed by human developers. Yeah. Love it. So much information packed in there.

[00:06:54] And I think many people listening, when they hear of tools like Copilot, they think of it as almost a smart autocomplete. But you frame the problem very differently. So, how do you explain that shift from keystroke suggestions to outcome-driven delegation? And why does this difference matter so much, especially for real engineering work? Because these are the people that are using this. Yeah. Yeah. Very good question.

[00:07:18] So, in simple terms, I think Verdent's mission is to build the next generation of coding agents: systems that don't just autocomplete code, but understand human developers' intent, help you plan the work, and also coordinate with humans, and maybe other agents together, and ultimately just reshape how software gets built.

[00:07:45] And so, yeah, I think the simplest way to explain this shift is: autocomplete just helps you type faster. But coding agents will help you get things done. I'm not sure if that makes sense. Yes. And so, take tools like Copilot, right? I mean, GitHub Copilot, or even earlier generations of these coding assistants, just help you type faster.

[00:08:15] I mean, they're fundamentally just keystroke tools. And they live inside your editor, right? They observe what you're typing and try to predict the rest of the current line or the next few lines. Yeah, they're definitely helpful. I love them. But they don't own any part of the engineering workflow. So, they don't decide what needs to be done. And they don't run tools by themselves.

[00:08:44] And they don't validate their work. And they don't deliver outcomes directly, right? So, you're still doing all the actual engineering stuff. And they're just making your typing speed faster. But agentic coding is a completely different paradigm. Yeah. Because coding agents can use tools. They can run code. They can call APIs. They can execute tests. They can inspect logs. They can query databases. That's why it's so different.

[00:09:14] And they can interact with your real environment, like the dev environment on your dev machine or even cloud machines or containers. And they can plan and decompose work just like humans do. They can validate the intermediate results and iterate over that. And they can deliver outcomes and not just intermediate suggestions or something.

[00:09:39] They can just fully hand over the verified results to you. And that's actually the design philosophy of Verdent as well. Because we really want to build the next-generation coding agent. So, yeah. Instead of giving these micro-level, keystroke-level suggestions, I think a coding agent just operates at the task level. And you give it a go. And you align with it, with the plan.

[00:10:06] And then you just delegate, fully delegate the task to the coding agent. And give it an environment, of course, just like a human does, right? You give your intern access to your dev machine. And then, okay, you delegate the task to a junior engineer. Yeah, just something like that. And then, okay, something magical will happen there. But, of course, I think humans still should be in the loop. And you should review the results.

[00:10:33] And also, at the beginning, you have to make a very good alignment with the agent. And then, yeah, I think this process is very different from keystroke suggestions. Yeah. Yeah. And in terms of getting things done, Verdent is positioned as this autonomous coding agent that can plan and deliver on complex tasks.

[00:11:00] And for somebody listening and imagining this for the first time and some of the opportunities, maybe we can bring it to life. If you can explain to me what a real project would look like when Verdent handles the work instead of a developer writing code step by step. Is there anything you could share there that would just help listeners think, oh, I get it straight away? Yeah. I think I can introduce two unique features of Verdent. Yeah.

[00:11:26] So, one is the seamless multitasking and easy context switching. So, in Verdent, you can easily run multiple agents for multiple tasks with no friction. You can run multiple sessions and multiple agents in parallel. And also, they will work in worktrees. Maybe, yeah, people may not be familiar with that concept.

[00:11:47] But a worktree is basically an isolated environment, a file system environment, for these agents to run independently so that they won't step on each other. They won't modify the same file and cause conflicts. Yeah. And so, one agent might be implementing a feature. Another agent might be debugging a test. And another might be summarizing a document. Yeah. It's just seamless multitasking.
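Git worktrees, which Yuyu references here, are a standard Git feature: each worktree is a separate checkout of the same repository, so parallel agents can edit files without touching each other's copies. A minimal sketch of how an orchestrator might give each agent its own worktree (the function and branch naming are hypothetical illustrations, not Verdent's actual implementation):

```python
import pathlib
import subprocess
import tempfile

def add_agent_worktree(repo: str, task: str) -> pathlib.Path:
    """Give one agent an isolated checkout of `repo` on its own branch.

    Each call creates a fresh directory and a new branch, so agents
    working on different tasks never modify the same working files.
    """
    path = pathlib.Path(tempfile.mkdtemp(prefix=f"agent-{task}-"))
    subprocess.run(
        ["git", "-C", repo, "worktree", "add",
         "-b", f"agent/{task}", str(path), "HEAD"],
        check=True, capture_output=True,
    )
    return path
```

Merging each agent's branch back, and `git worktree remove` when a task finishes, would complete the cycle.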

[00:12:15] And actually, you can also do that in Claude Code or Codex, in all those CLI tools, right? But the problem is that when you open a lot of terminal tabs, it's very hard to do the context switch, right? After, like, half an hour. Because today, these coding agents can run longer and longer autonomously. And after half an hour, then you see pages and pages of, you know, text flashing around. And you don't know what happened.

[00:12:42] You don't remember which tab belongs to which task. But in Verdent, you don't need to memorize everything for each tab. Everything is just there. You can see all the prompts, all the file changes, and all the diff views. And we also have a functionality called Diff Lens, which will summarize all the code changes made by the agent for every step.

[00:13:10] So that you're still in control of what the agents are doing. But you don't need to worry about, okay, I don't remember which agent is working on which stuff. Yeah. So that's the seamless context switch. And the second unique feature is that, from day one, Verdent is designed around a so-called plan-code-verify loop.

[00:13:34] So this means that before you delegate the task to Verdent, it first does the alignment with human developers. So it plans the task and aligns the plan with you so that the agent will know exactly what you intend it to do.

[00:13:54] All the details: maybe the architecture design, which functions you want to modify, which database you are going to use, and what parts you don't want it to touch, right? Which parts of the codebase you don't want it to touch. And all these requirements should be aligned at the beginning before it starts coding or anything else.

[00:14:18] And then after this planning phase, it will just go off autonomously, start using coding tools, running jobs, exploring the environment, just like a real engineer would do. And then finally, it will verify its output before delivering it to you, right? It will run tests, linting, and all this stuff before it comes back to you.
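The plan-code-verify loop he describes can be sketched abstractly. This is a hypothetical illustration of the control flow only, not Verdent's actual code; the `plan`, `approve`, `execute`, and `verify` callables stand in for model calls and human review:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Outcome:
    diff: str        # the code changes the agent proposes
    verified: bool   # did tests/linters pass?

def plan_code_verify(
    goal: str,
    plan: Callable[[str], List[str]],         # break the goal into steps
    approve: Callable[[List[str]], bool],     # human reviews the plan first
    execute: Callable[[List[str]], Outcome],  # agent codes autonomously
    verify: Callable[[Outcome], bool],        # run tests, linting, type checks
    max_attempts: int = 3,
) -> Optional[Outcome]:
    steps = plan(goal)
    if not approve(steps):    # nothing runs until the human signs off
        return None
    for _ in range(max_attempts):
        outcome = execute(steps)
        if verify(outcome):   # only verified work is handed back
            outcome.verified = True
            return outcome
    return None               # give up and escalate to the human
```

The key property is that rejection at either gate, the plan review or the verification step, keeps unreviewed or failing work from ever reaching the developer.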

[00:14:44] So, yeah, when you combine this parallelism with this human-in-the-loop alignment, I think Verdent becomes something closer to an autonomous engineer. You give it a large outcome-level task, like, okay, implement this feature, help migrate this model, or fix this bug from our users' feedback, something like that. Just help you clear the backlog, right?

[00:15:07] And it just handles the end-to-end execution with quality and with all the self-verification before coming back to you. That's what a real project feels like. So, you set the goals, and Verdent handles the work. And when we're talking about this idea of running multiple coding agents in parallel on a desktop, it feels incredibly cool.

[00:15:29] But what kind of teams or projects are benefiting best from this approach, and how is it changing the speed and predictability of software delivery? Are you noticing any big changes here? I think Verdent is especially powerful for teams that can naturally break their work into parallelizable pieces. I think most teams will do that because, I mean, people are actually not developing in a purely sequential way, right?

[00:15:58] Because we definitely divide and conquer. Everybody knows that. So, say we want to develop TikTok. It's not done by one person, one developer, not even 10 people, not even 100 people. Maybe you need thousands of engineers working together to make it happen. So, you definitely need to divide and conquer. You need to divide into multiple components.

[00:16:21] You have specific teams for testing, debugging issues, and documentation, right? You have different tasks, but they are actually orthogonal most of the time. And then Verdent can let you do the same thing with agents. And I think that running multiple agents in parallel just dramatically changes both speed and predictability. Because speed comes from this concurrency, right?

[00:16:50] Because, like I said, one agent can implement a feature and another agent can be writing tests or another debugging or something like that. They are not blocking each other. And the predictability comes from Verdent's structured plan-code-verify loop. And each agent has a clear plan and can auto-validate its work by itself before handing it back.
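The speed-from-concurrency point can be illustrated with ordinary parallel task execution. The three task names mirror his example; `run_agent` here is a trivial stand-in for a real agent run, used only to show that independent tasks complete without blocking one another:

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Placeholder for a real agent run: plan, execute, verify, return a report.
    return f"done: {task}"

tasks = ["implement feature", "write tests", "investigate logs"]

# Each task runs in its own worker, so no task blocks another;
# pool.map still returns the reports in the original task order.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    reports = list(pool.map(run_agent, tasks))
```

Because the tasks are orthogonal, the wall-clock time approaches that of the slowest single task rather than the sum of all of them.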

[00:17:16] So, yeah, you can get consistent, review-ready output instead of some guesswork or some AI slop, right? So, yeah, this is more like managing a small engineering team. Instead of managing people, you are just managing a team, or a squad, of AI agents. And you set the objectives and let the agents push multiple work streams forward at once.

[00:17:44] And then you collect the results. Yeah, just something like that. And then you may divide the tasks again and then delegate to the agents again. Yeah, that's how Verdent works for those companies, in my mind. And you said the words AI slop there. And there will be some engineering leaders out there that are very cautious about letting autonomous systems touch production code.

[00:18:11] So, how do you, or how does your planning, execution, and self-verification cycle, how does that give organizations the confidence they need, while also, most importantly, still keeping the humans firmly in control there? I think that caution is completely valid. And I think our product, Verdent, is designed around that reality. Because our approach gives organizations and users confidence.

[00:18:38] Because every agent operates within the plan-code-verify cycle that I mentioned. So, we keep humans in the loop, and we also make everything run in a safe container, or safe environment, so that you can always roll it back and not cause any disastrous results. So, yeah, I can explain a little bit more.

[00:19:05] For the planning part, you specify all the requirements and align with the agents before they start to execute anything. And the planning is explicit and reviewable. So, before an agent touches the code base, it will propose a detailed, actionable plan for you to review. And also, if our agent feels that it has to clarify anything that you didn't mention or forgot to mention,

[00:19:35] our product will show you some follow-up questions, like some multiple-choice questions. Or you can also really add your own inputs for each of those items. And then after that, it will generate a fully reviewable plan as a Markdown document. And then you can modify it. You can approve it or reject it.

[00:20:03] If you reject it, then it will just iterate again. You can freely type in your suggestions and comments. And until you converge on this plan, the agent won't start work. Yeah, after you work together on this plan and, okay, everything looks good, then we start to execute the plan. So, during execution, the agents will use tools and run tests and inspect logs and so on. Yeah, something like that.

[00:20:32] And it can also be running in a safe container, which will not touch your production machines. And so, yeah, always within the boundaries defined by the developer. And also, we have permission control for the developers.

[00:20:50] So, if the developers want to see what commands the agents are executing, and want to manually approve any one of them, that's also supported in Verdent. So, yeah. And also, in the future, we will support a system-level sandbox. And it will control all the file system operations and also network operations. This is for 100% safety.

[00:21:18] Yeah, it's just like a locked box, a container for the agents to run in. And it's safe. It's a real environment, but it's safe and controllable. And finally, for the verification part, this ensures the quality of the deliverables and the final outcomes.

[00:21:38] So, agents will validate their output by writing test code and using linters and compilers to check for any syntax errors and type errors, and also do some runtime verification and even end-to-end functional tests before producing any PRs or commits to the users.

[00:22:02] And, like I mentioned, we have Diff Lens and agentic code review functions so that you can easily review the artifacts that the agents deliver to you and not just blindly accept any code diffs. So, yeah, I think this structure and these designs give human developers the best of both worlds.

[00:22:26] So, agents that work autonomously at the task level, and human developers who control the intent and also the guardrails, the safe environment, and the final approval. Yeah. And on a personal note, in your intro at the very beginning of this podcast, you told me you worked on some of the world's highest-scale systems at TikTok and large code models like Seed-Coder.

[00:22:54] I've got to ask, going back, looking at your time there, what did those experiences teach you about why developers struggle to stay in flow, and how are you trying to relieve that pressure? Because it feels like you've lived that world and created a solution because of it. But I'd love to hear more about that story. Yeah. So, one thing I can tell you is that developers don't lose productivity because they type slowly. Yes.

[00:23:21] I think it's always because their working flow, their mind flow, keeps getting broken. On large-scale teams, every task triggers constant context switching. And I don't see any developers in the real world who can say, I have something to do, and it's purely linear, just one, two, three, and everything is done. It's not like that.

[00:23:50] It's always like you have some low-priority tasks, some middle-priority tasks, some high-priority tasks, and also some emergencies. And maybe you have some on-call tasks, right? Like, I think two days ago, Cloudflare had some issues, right? And that brought maybe half of the internet's companies down. Yeah. So, these emergencies happen. And people are just constantly switching between these different scenarios, different tasks.

[00:24:19] And this is really annoying because in the real world, you cannot just fully focus on a specific task and forget about other things. You sometimes need to debug production issues. You sometimes need to read the logs and check dashboards, write tests, read the feedback from users, develop new features, and fix your colleagues' bugs, right? There are just so many things.

[00:24:47] And each interruption just resets your mental stack. And once you fall out of this flow, it can take you probably 30 minutes or even one hour or more to rebuild the context. I think that's a real tax on engineering velocity. And Verdent, like I mentioned before, is designed for multitasking with no friction.

[00:25:15] Because you actually just start your task with alignment with the agents, right? And then after that, you can delegate the task to the execution part. And then you can literally forget about it and just start a new task. Basically, a new session window. And then you just immerse yourself in that new context window.

[00:25:39] And then when the other agents finish their task, they will come back to you with clear, reviewable artifacts. And then once you click back to that session, you just, like, immediately come back to your memories in that session. Because you see all the prompts, you see all the files, and also the diffs, and also the summarization of the diffs, everything there.

[00:26:05] So you basically have the preservation of the context handled by Verdent. And you don't need to worry about that. And so, yeah, I think Verdent just preserves this flow by offloading everything that constantly breaks your mind flow or workflow. And that gives developers something that they almost never have today: uninterrupted attention and the freedom to focus on the parts of engineering like high-level architecture design.

[00:26:34] And these things are more important than the low-level details. Yeah. And we're recording this at the end of 2025. So as we look towards a new year and as companies continue to move from experimenting with AI in the software development lifecycle, or SDLC, to relying on it in production, what do you think adoption will look like next year and beyond? And what kind of misconceptions or myths?

[00:27:04] Are there any of those things that you'd like to clear up for engineering leaders listening today? Because I suspect you hear quite a few in your work. So are there any myths you want to dispel today? So first of all, I think the definition of programmers or coders, the definition of a good coder, will change. And this has actually happened many times in history.

[00:27:30] So if you transferred me to 70 years ago, I'm not a qualified programmer at that time, right? Because people used punch cards. People wrote assembly language to program. That was the standard way of programming 70 or 80 years ago. But things changed rapidly and constantly throughout history.

[00:27:57] And I think the abstractions of programming just keep getting higher and higher. And we have more and more advanced languages. We had Fortran and COBOL at the beginning. And then we have Java, C, C++, right? And then Python. And then TypeScript. I mean, it's not a fixed environment.

[00:28:22] The way that people develop software is always changing. That's the first thing that I want to mention. It's not like AI is the first time the way people develop software has changed. It's constantly changing. And the second point is that I don't think AI is replacing programmers. I think it's just expanding or amplifying what a good engineer can achieve.

[00:28:51] And we're trying to multiply the creativity of developers and make them thrive in the AI era instead of replacing them. Actually, for junior engineers, people might worry, oh, maybe the junior engineers will just lose their jobs, right, because of this AI development. But I strongly disagree with that. Because, just like I said, the definition of a good programmer will change.

[00:29:20] So people will think that, oh, vibe coding is simple. Vibe coding is just, you talk in natural language. You program in natural language. And natural language seems simple. But it's not, right? Just like I said, as a human developer, you still need to care about the high-level design. And you have your taste in design choices. And you have the call on which database you should use and how many users you're going to serve.

[00:29:49] And also the system design, the architectural design, and the structure of your code base. These things are still very, very important. And they cannot be easily replaced by AI, at least for now, right? And also, these human developers really know how to satisfy the needs of humans. AI is not good at that, I think. So I think humans are the best.

[00:30:18] Of course, the best at knowing what humans would like to do and what humans' needs are. So I think maybe AI will help these junior developers quickly catch up to the skill set that today's senior developers have, right? Because previously, you needed, like, five years of training in all the details of coding. And then you become a senior developer. Then you can lead a small engineering team.

[00:30:48] But with AI, probably from day one, if you have those skills, like I said, some organization and management skills, some high-level concepts, and also a very clear mindset of how to satisfy people's needs or something like that, you can directly start leading a team of agents.

[00:31:10] And this is moving those quote-unquote junior engineers onto the fast track of becoming a senior engineer, or a very, very qualified AI engineer, in this coding agent era. So, yeah. So I have to repeat my point of view: AI doesn't replace programmers or coders.

[00:31:35] It's just multiplying their engineering excellence and amplifying their talents to build more things, freeing them from the low-level stuff and making the world better. Yeah. Just like putting me 70 years back, right? In the 1940s or 1950s, in that era, I'm not a qualified programmer.

[00:32:05] But you cannot say that I'm not a qualified programmer in, like, 2025. I'm definitely a professional developer there. So, yeah. I think the definition keeps changing. And recently, I talked to some very, very young people, like first-year college students or even high school students.

[00:32:28] I think the way they learn programming is very, very different from what I learned in my school years. Because they said, okay, the professors won't let them use ChatGPT to finish their homework. But everybody's using it. So nobody's listening to those rules like, oh, you cannot use AI to complete the test. No.

[00:32:56] I mean, AI is becoming the fundamental tool for everyone. And for Verdent, I mean, for our company's interviews, we never use LeetCode questions to test the capabilities of our candidates. Because it's just pointless, right? It's like testing whether the candidates can use punch cards to program, and when they cannot, saying they are not qualified programmers. That's just not fair.

[00:33:22] Because we are moving to a new era of these coding agents. So the way you interview, the way they learn programming, the way we develop software, everything will be reshaped and revolutionized. So, I mean, for our interviews, we just let the candidates use whatever AI they want to use. You want to use Cursor, fine. You want to use Claude Code, fine. You want to use ChatGPT, fine, definitely. Because you can use all of them in our daily work, right?

[00:33:52] We also use Verdent to develop Verdent. That's our daily work. So, I think, yeah, the skill set of a qualified developer or programmer is shifting. And the definition of a good programmer is also shifting. Yeah, that's my point of view. Wow. And a powerful moment to end on. But before I let you go, for anyone listening wanting to dig a little bit deeper on anything we talked about today,

[00:34:20] where's the best place to find out more information and connect with you or your team? Where would you like to point people listening? Yeah, you can just access our official website, verdent.ai. It's V-E-R-D-E-N-T instead of V-E-R-D-A-N-T. Yeah, it's a variant of the original word. And also, you're welcome to join our Discord server and give us any feedback. Well, I loved talking with you today.

[00:34:49] So much I learned about this move from AI auto-completion to true outcome-driven delegation in coding. It's fantastic what you're doing here. I'd love to find out more about how you continue to evolve, especially with autonomous agents that plan, execute, and self-verify. Ultimately, helping teams to ship faster and more securely. Absolutely love it. I think it's perfect for enterprise teams juggling multiple features, the dreaded deadlines and other security requirements.

[00:35:19] So much for me to take away. But thank you for starting this conversation today. Yeah. Thank you so much, Neil. And oh, one more point. We have a limited-time free trial now, so people can just download it. We have a VS Code extension and also a desktop app, which is not a VS Code fork, because we built it from scratch. So if people are interested, just go to our website, verdent.ai, download it, and try it for free. Wow.

[00:35:47] I think that was a fascinating conversation on how autonomous coding agents are changing the way that software is built. And I particularly enjoyed unpacking the difference between simple code suggestions and true outcome driven AI and the impact that that has on developer productivity and what this evolution could mean for the next generation of engineers. If you'd like to explore their work in more detail, you'll find all the relevant links in the show notes.

[00:36:13] But as always, I'd love to hear your thoughts on where you see developer workflows heading next. So: Tech Talks Network dot com, or LinkedIn, X, Instagram, just at Neil C. Let me know your thoughts on this one. Love to find out more. But that's it. I've taken up far too much of your time today. I'll be back again tomorrow. Bye for now.