3536: When AI Knows Us Too Well and What It Means for Human Choice
Tech Talks Daily · December 30, 2025
35:31 · 27.18 MB

3536: When AI Knows Us Too Well and What It Means for Human Choice

What happens when the systems designed to make life easier quietly begin shaping how we think, decide, and choose?

In this episode of the Tech Talks Daily Podcast, I sit down with Jacob Ward, a journalist who has spent more than two decades examining the unseen effects of technology on human behavior. From reporting roles at NBC News, Al Jazeera, CNN, and PBS, to hosting his own podcast The Rip Current, Jacob has built a career around asking uncomfortable questions about power, persuasion, and the psychology sitting beneath our screens.

Our conversation centers on his book The Loop: How Technology Is Creating a World Without Choices and How to Fight Back, written before ChatGPT entered everyday life. Jacob explains why his core concern was never about smarter machines alone, but about what happens when AI systems learn us too well. Drawing on behavioral science, newsroom experience, and recent academic research, he argues that AI can narrow our sense of possibility while convincing us we are gaining freedom. The result is a subtle tension between convenience and control that many listeners will recognize in their own digital lives.

We also explore the idea of AI companies behaving like nation states, accumulating talent, influence, and authority without the checks that usually accompany that kind of power. Jacob reflects on the speed of AI deployment, the belief systems driving its biggest champions, and why individual self-control is unlikely to be enough. Instead, he makes the case for systemic responses, cultural guardrails, and a renewed focus on protecting human skills that cannot be automated away.

There is room for optimism here too. We talk about where AI genuinely helps, from medicine to scientific discovery, and how leaders can hold hope and skepticism at the same time without slipping into hype or fear. From preserving entry level work as a form of apprenticeship to resisting the urge to outsource thinking itself, this episode offers a thoughtful look at what staying human might mean in an age of intelligent machines.

Jacob has also appeared on shows like The Joe Rogan Experience, This Week in Tech, and The Don Lemon Show, but this conversation strips things back to fundamentals. How much choice do we really have, and what are we willing to give up for frictionless answers?

If AI is quietly closing the loop around our decisions, what does fighting back actually look like for you, and where do you think that line between help and influence should be drawn?

Useful Links

Tech Talks Daily is Sponsored by Denodo

[00:00:04] Welcome back to the Tech Talks Daily Podcast. Now today I'm joined by Jacob Ward. He's a journalist, author and host of the Rip Current podcast. He's also someone who has spent the last two decades reporting on the hidden forces that shape our relationship with technology.

[00:00:25] And Jacob's career spans NBC News, CNN, Al Jazeera and PBS. And here's someone that is consistently focused on the psychological and societal impact of emerging technology. Long before generative AI became a daily tool for millions of people, Jacob was already asking deeper questions about how AI shapes our choices, our behavior and ultimately how we see ourselves.

[00:00:53] And for example, his book, The Loop: How Technology Is Creating a World Without Choices and How to Fight Back, was published before ChatGPT entered the mainstream. Yet it anticipated many of the tensions that we now live with every day, from algorithmic personalization and recommendation engines to the subtle narrowing of options that feels like convenience on the surface.

[00:01:21] But there's something a little more troubling underneath. So, in this conversation, we'll look back at that book with the benefit of hindsight, what Jacob got right, what surprised him, how his thinking has evolved now that AI agents, large language models and hyper-personalized systems are becoming embedded in work, media and everyday life.

[00:01:42] But we'll also talk about creativity, critical thinking, power, public trust, and why the most important AI conversation is not about replacing humans, but about which parts of humanity we choose to protect, amplify or in some cases outsource. But before I get my guest on today, I want to give a quick thank you to my friends at Denodo, who are playing a big part in supporting this show.

[00:02:11] Because one of the questions I hear more and more from listeners on this podcast is, why does AI succeed or why does it fail? Because let's be honest, AI is moving fast, but success is often still elusive. Now, most projects fail not because of the AI, but because the data foundation isn't ready. This is why organizations are increasingly turning to Denodo.

[00:02:36] Denodo delivers trustworthy and AI-ready data without the need to copy it everywhere. Essentially, you can optimize your lake house, accelerate agentic AI, and build data products that finally make self-service real and achievable. And with a powerful partner ecosystem, teams get to value even faster.

[00:02:59] So if you're ready to understand why your AI projects fail and how to succeed with AI, simply visit Denodo.com and take control of your data world. So enough scene setting for me. Let me introduce you to Jacob right now. So thank you for joining me on the podcast today. You've had an amazing career, but for people listening, hearing about you for the first time, listening all over the world, tell them a little about who you are and what you do.

[00:03:29] Well, Neil, I really appreciate you having me on. So I'm a journalist. I've been a journalist a long time. I covered the first internet boom, back when e-commerce was a concept I had to explain to people for the very first time in conversation. And I'm still covering this now, at a time when, for a while there, I was having to explain what AI was in the first 30 seconds of any report I was doing.

[00:03:52] Now, of course, we're in this crazy world of commercial AI, and I have spent a long time covering that. I wrote a book that came out about a year before ChatGPT did, predicting the rise of commercial AI and the effect that I thought it would have on not just business, but also society and, frankly, our minds, the ways in which we make decisions as humans.

[00:04:16] I've covered sort of the intersection of technology and human behavior for most of my career. That's been the focus. And I did that at Al Jazeera, where I was a correspondent, at NBC News, the big American broadcaster, where I was a correspondent. And now I run my own podcast and newsletter called The Rip Current, which looks at the sort of invisible ways in which technology and business and money kind of work on us individually and as a society. I love that.

[00:04:43] Looking at your career there, surrounded by the latest technology, trying to make sense of it and then putting it in a language that everyone can understand for audiences around the world. And I also love how you've done something about that. You wrote this book, The Loop: How Technology Is Creating a World Without Choices and How to Fight Back, before generative AI became part of our everyday life. And, of course, we all know how that story ended. It is part of our everyday life now. But tell me more about the story behind the book.

[00:05:09] Well, so the book was very much inspired by two things that were going on in my professional life at that time. One was I was on a big documentary series called Hacking Your Mind, where I got to meet researchers all over the world who studied the very automatic and predictable ways in which human beings make decisions.

[00:05:27] And it turns out we are incredibly automatic and incredibly predictable when you speak to behavioral researchers about the kinds of decisions and patterns in our lives, ones that, going into this, I assumed were totally under my control. I thought, you know, I'm just making my way through this world. And it turns out that, as any behavioral scientist can tell you, the vast majority of our decisions are being made at a very unconscious, instinctive level. And there are real patterns to that.

[00:05:56] At the same time that I was learning about that and having my worldview exploded by it, I was also, in my day job as a technology correspondent, encountering, this is the mid-2010s, all of these various companies trying to use what was at the time a pretty primitive form of AI to predict and in some cases shape human behavior wherever possible. And I began to realize, wow, this technology is really going to make that possible.

[00:06:23] And although I don't have huge confidence in the decisions that businesses tend to make around the deployment of this stuff, I have enormous confidence in their ability to make it better and better, more and more functional, technologically. So I just figured, I knew that this was going to be a focus of this industry. And then around 2017, 2018, the transformer models started to arrive.

[00:06:52] Those are the models that made it possible for something like ChatGPT to be created. And before I knew it, I was in the throes of writing this book, basically trying to say, listen, if we let the market run wild with this, it is going to be far more impactful than the printing press, television, all of these forms of media that in the past we've said, of course, reprogrammed the brain.

[00:07:19] This truly, it seems to me, is going to do that, based on the work I've seen going on in these private corporations that are creating these foundational models, and the science that shows us just how predictable and manipulable we are. And I'm curious, if you were to look back now, which parts of your thesis feel almost uncomfortable in just how accurate they turned out to be? And where did reality surprise you?

[00:07:46] Well, you know, you write a book like that and you think you want to be right, Neil, you think you want to be correct. And then it happens and you go, oh no. I wanted to be correct, but I wanted to be maybe five or six years early, which is what I thought I was. I didn't realize just how quickly this stuff was going to get deployed. I knew it was out there and in the works.

[00:08:10] You know, I was in touch with a lot of the people at the top of what are now the big foundational companies, back when they were just kind of floating around shopping their ideas to anybody who would listen. And I figured they were further away from a real product than they were. I didn't know that, just a couple of years after ChatGPT was released, they would have 800 million weekly users and be on track for a billion. I thought I was early, and I was not as early as I thought.

[00:08:38] And then I would also say, I don't think I really understood the near-religious fervor that a lot of these CEOs and developers bring to their work. I figured these companies would be committed to this stuff as a purely money-making enterprise.

[00:09:03] I didn't understand that they would be committed on this very deep, almost faith-based level, when it comes to the rhetoric we hear around how this is going to usher in a utopia, and that it doesn't matter if, in the meantime, it's deeply destructive to the ways of life we've become accustomed to, the job displacement and the reprogramming of education and all of that stuff.

[00:09:26] They just seem willing to pay that price to get to this nirvana, which is really the kind of thing they seem to be promising us is just around the corner. Those things I didn't see coming. And of course, I was writing this book at a time when the focus of government was on trying to grapple with the effects these companies were having on everybody, and trying to regulate those effects.

[00:09:55] Now we're in a political climate in which, at least in the United States, all the guardrails are off, and President Trump is actively trying to take away the states' rights to regulate this stuff, in the interest of creating as little regulation around this as possible. So I didn't see just how open a regulatory playing field these companies would have.

[00:10:17] And I didn't see, like I say, this kind of convert's zeal that they would bring to plowing ahead with this, no matter the short-term cost. And the core idea of this book is that AI can narrow our choices by learning about us all too well. And I must admit it's something I see in everything from Amazon and Netflix to Spotify.

[00:10:40] And very often I find myself yearning to stray away from the algorithm and embrace something different, some serendipity, something that I shouldn't like on paper, in the same way that people discovered the Sex Pistols or Pink Floyd 40, 50 years ago. But for listeners who may feel overwhelmed by this claim but haven't seen the evidence for themselves, can you just walk us through a simple real-world example of how this stuff shows up in our everyday life? Yeah.

[00:11:06] So there's a new paper that was just released at a conference here in the United States called NeurIPS, which is the big academic conference around AI that sets the intellectual tone for the year, typically, on what we're going to be pursuing and how to improve the technology. And this paper showed, I can't remember its full title, but the main thrust of it had to do with the phrase artificial hive mind.

[00:11:31] And what they were basically talking about was, they took a corpus of prompts, the kinds of questions you might ask a chatbot. And these are very open-ended, creative questions. These are not, add this up for me, or what's the address of my post office. They're very open-ended, brainstorming kinds of questions. So they threw tens of thousands of those questions at 70 different large language models, including the ones that we use every day, right?

[00:12:01] The big commercial ones. And in the outputs that they got, they documented that not only inside each model was there a narrower and narrower set of responses, but across all of the models, those responses were both very narrow and had a lot in common with one another. So for instance, if you ask one of these models, give me an allegory for time.

[00:12:29] I want to write a poem about time, so write me a poem about time and describe it for me in allegorical terms. It will almost always default to the concept of time as a river flowing down, blah, blah, blah. It's this cliche that you could imagine getting out of a first-year university student's poetry assignment.

[00:12:50] And so the loop for which I named this book is, for me, this perception that this technology is going to create, as these companies promise, greater and greater free time, that it's somehow going to, I've talked to CEOs who talk about it, uncork the hidden creativity of human beings who don't have the technical skills to implement it, that somehow it's going to expand our thinking.

[00:13:19] Whereas, as a paper like this shows, and I think, as you mentioned, there are many, many ways we experience this anecdotally, but people are beginning to really quantify it, this stuff is actually pushing us into a narrower and narrower set of expectations around what we read, what we listen to, what we see. And that problem, that cognitive dissonance of feeling like we're getting more when in fact we're going to start getting less and less, is the heart of the argument I'm making. A hundred percent with you.

[00:13:48] And I think eternal optimists listening will say personalization is more about convenience than control. But where do you draw the line between technology that actually assists us and helps us make better decisions, and systems that are quietly manipulating us, or starting to make decisions for us, or nudging us in the direction they want us to go? Yeah. Well, I think there are a couple of big problems in the Venn diagram of problems that we are looking at around this technology.

[00:14:14] So one of them is that human beings have an incredibly powerful ability to anthropomorphize technology, to believe that it's their friend, that it knows them on some kind of personal and human level, and to then form attachments to those technologies in a way that, if you were to try to describe it to an extraterrestrial, they wouldn't understand. Why would you fall in love with the tool you made? That kind of thing.

[00:14:43] But we have that capacity. It's a deep, deep instinctive capacity, proven by 50-plus years of behavioral science. So that's one problem, right? That's the circuitry that we have, the system that we use to experience a technology like this. Then you have the profit motive of these companies, right? This is not a technology coming out of universities. It's not even a product coming out of military research, which is where the internet came from.

[00:15:11] This is a purely for-profit enterprise, sometimes dressed up in the marketing language of something more noble than that, but it is fundamentally a company that has to make its money back. And so there are design choices being made by these companies that really play on that circuitry I described. The fact that these systems speak to you in the first person, I will do this for you, is right there a piece of marketing. You don't need that thing to say I. Why would it need

[00:15:41] to refer to itself as if it is a being? It could say instead something like, the statistical patterns in the data suggest this answer, that kind of thing. It can speak to you the way a robot does. And instead it pretends to be your friend, in a way that, we know, for millions of people is in some cases prompting real mental illness. And I think that all of us, to some extent, are going to get pulled in by that.

[00:16:09] So for me, there is a distinction here. I welcome its use by, for instance, scientists. You take this technology and aim it at the stars, or aim it at pictures of discoloration on the skin of a human to figure out whether that's cancerous or not. Fantastic. Let's go.

[00:16:32] I'm thrilled for that kind of use. But the working its way into our lives, the way that these for-profit companies do and are trained to do, that is the threshold that I'm uncomfortable about. The last thing I was going to say on this, just to give you a sense of just how powerful that programming is in these for-profit companies and how they build things, is that there's a famous thing that was popularized at Google.

[00:16:59] It's a rule that they use, or used at least for years, in greenlighting new projects. Somebody comes up with, I want to build this thing for Google, and one of the ways that they evaluate that is what's called the toothbrush rule. And that rule is, can this product form a habit in someone's life that causes them to use it at least three times a day, right? That's the kind of thinking that goes into the building of this stuff.

[00:17:24] And so for me, you combine that with a technology that is, and this is just at the primitive, typing-with-it stage, it hasn't even begun talking to us in the ways that it's going to. All of this, I think, really causes us to grow attached to it in a kind of AI distortion, as I'm calling it. It's the way in which it distorts our perception of what it is and who we are to it. And all of that makes me very uncomfortable.

[00:17:52] And for the last 30 years, we've both seen the tech journey from internet to mobile to cloud and now AI. And as someone who has spent years reporting for NBC News, CNN, Al Jazeera, PBS and so many others, how have you seen that relationship between technology, power and public trust shift in the last 10 years alone? What have you seen here? Well, it's very interesting.

[00:18:15] You know, I was just listening to your new intelligence chief in the United Kingdom, the new head of MI6, who just gave a speech about AI in which she was summarizing the threats in the world.

[00:18:31] And it was so interesting, because she talked not just about state actors and the kinds of things that you would imagine an intelligence chief would talk about, but she really talked about this technology in particular as a threat to the very reality of life. She talked about hyper-personalized technologies posing a real threat.

[00:18:52] And she was speaking, essentially, about this mishmash of private and public, corporate and civic powers getting co-mingled in these various ways. And so for me, one of the themes I've just seen over and over again is that these companies really are becoming nation states unto themselves.

[00:19:15] They have a kind of constitution in many cases, a sort of charter that guides their decisions, which of course turns out to be a very squishy document that changes from quarter to quarter. They recruit the top minds in their field. I've met many, many academics who were at a university appointment one year, and then when I speak to them the next time, they're working inside one of these companies and they're not allowed to talk to me anymore.

[00:19:41] There has been a real vacuuming up of the resources that once upon a time we associated with civic institutions, with government and universities and public-facing, taxpayer-funded kinds of things. Increasingly, these companies are standing in for that stuff. And that has been a real theme of the reporting that I've done in my life.

[00:20:02] And I would just say there is this belief, at least one that, I don't know if it's truly what they believe, but what they say they believe at these companies, which is: we have all the moral fiber one needs to govern this stuff, so let us make our own rules around how this stuff is deployed or not deployed. But what I've learned is that those instincts really change from year to year.

[00:20:30] And we've been through a period of time, in the 2010s, when it was the fashion in technology circles to talk about improving the world in this very progressive way, and that was the marketing pitch they were giving to young people to recruit them to work there. These days, that marketing pitch isn't even necessary anymore.

[00:20:49] And I find that, as a result, the commitment they once made to the power of this stuff, to their quote-unquote values, suddenly turns out to be quite a bit shakier than it used to be.

[00:21:07] And so for me, it is the acquiring of the power, the horsepower, of nation states without any of the democratic inputs or constitutional checks and balances that we enjoy when it comes to government. All of that is a really disturbing trend for me. And as you said, those instincts change year after year.

[00:21:31] And if we go back just a couple of years, in The Loop you included a section on how to fight back, which was incredible. But I'm curious, if you were to write that chapter today, in 2026, in a world shaped by ChatGPT and AI agents and so many other things going on, what advice would you rethink or update in that chapter? Yeah, I guess I would want to change two things.

[00:21:55] One is, I got pretty good reviews by and large, even though this was at a time when AI was not yet a household word, and so a lot of people just didn't think this was yet a thing. But one very sharp critic said, and this was just to me personally, that I spent too much time kind of luxuriating in the problem and not enough time thinking about the solution. And I really take that to heart.

[00:22:22] And one of the things that I've learned is that a very smart articulation of the problem is really just a tiny fraction of the job. Because one of the things that I really want to be doing here is not just warning us against the dangers of amplifying the wrong parts of our circuitry, which is the thing that I spend so much of the time in the book explaining.

[00:22:44] But I really want to get to a place where I'm instead just talking about the very beautiful virtue of amplifying the best parts of being human. Even though I'm such a bummer on this topic, I'm fundamentally just a fan of our capacity to solve problems and come together and live peacefully. I just think human beings have really come so far in that stuff.

[00:23:13] And our creativity, our rationality, our caution, our ability to really think things through is one of our defining characteristics, in this wonderful way, and I really want to preserve that and protect that. So my vibe is much more about that now. The second point I would make is that I wish I had

[00:23:38] done a fiercer job of articulating a thing that I've really come to believe, which is that the illusion that individual humans, that you and I and our listeners, are somehow going to individually fight back against this is really something that plays into the PR playbook of a lot of these companies.

[00:24:02] They really want us believing that it's somehow up to us to individually make a choice, that media literacy or something like that is somehow going to change things. I don't think that's true. I think true change, truly fighting back, has to happen at a systemic level. It's got to be a political movement.

[00:24:23] I think that candidates here in the United States are already running on things like, hey, maybe we shouldn't have big polluting, energy-consuming data centers right in our backyard. They're running on these sorts of things all over the world. People are talking about the effect of this stuff on children, and there's all kinds of political action moving around that. And at least here in the United States, there's an enormous amount of legal action going on, in which people are truly suing to say, you've caused damage here in this really big way.

[00:24:52] And I think it's those big systemic changes that are going to actually move the needle on this stuff. That's why you and I are not currently smoking cigarettes as we speak to each other, Neil, right? It's why we have safety belts in our cars. There's a whole systemic way of pushing back against profiteers who endanger us. And I think that, fundamentally, that's really the thing. So for me, it has to do with that.

[00:25:21] Now we can also, of course, limit our exposure to this stuff, keep trying to reacquaint ourselves with the idea that this thing doesn't know you, it's not your friend, it's not your lover, it's not your guru, right? There are things like that out there, and we should certainly be trying to pass those understandings on to our kids, for instance. But fundamentally, moving these big nation-state-sized actors is going to take a nation-state-sized response. Yeah.

[00:25:47] It often feels like we're in somewhat of a paradox, because one of the best defenses for staying employable, for not losing our job, is going all in on being human, honing our critical thinking skills. They are our defining characteristics, like you said. But the more we rely solely on AI, the more we reduce those critical thinking skills. And your work often connects technology with human psychology.

[00:26:09] And I'm curious, from your vantage point here, what does our willingness to almost outsource our thinking to machines reveal about us as people, not just as users of technology? I mean, I think it's crucial to recognize that that is a very ancient, very deeply rooted piece of circuitry, that our brains are built to outsource our decisions wherever possible, because we just don't want to have to think it through ourselves.

[00:26:34] It is how we've stayed alive on the plains of what was at the time Africa, 200,000 years ago. But the next phase of humanity was the phase in which humans said, oh, I wonder what's beyond the horizon over there. I wonder what happens after we die. What are these dreams I'm having when I sleep?

[00:26:55] And all of that is the stuff that got us up and moving, moving across the planet and building buildings and falling in love and coming up with rituals around love and all of that stuff.

[00:27:08] And so understanding that our most ancient programming doesn't want to think it through, and that thinking it through properly in many cases is not even going to feel good, because our reward centers aren't built around that, is a really important part of the deal. But also, you mentioned that our creativity is one of our big things.

[00:27:33] I'll just leave you with this anecdote that I experienced recently, where a guy was talking to me at a dinner party and lamenting his daughter's choice of career. He said, man, I wanted her to be a lawyer or an engineer or something like that. And instead, he said, she wants to be an artist. And he rolled his eyes. And I said to him, I don't know, man, have you seen what's going on with lawyers and engineers? They're getting fired in droves.

[00:27:56] But a truly creative, disciplined artist, someone who thinks creatively and has a real process for thinking a thing through and bringing it into reality, who doesn't require automated systems to do that for them? I think those are going to be some of the most valuable people around. And so for me, protecting that is what I think is going to be crucial to being the best possible versions of ourselves in the future.

[00:28:26] I love that. And there is a growing concern around AI, but to avoid us both luxuriating in the problem, there is also genuine optimism about what it can enable in medicine, education, creativity, et cetera. So how do you personally hold those two ideas at the same time without falling into binary thinking or that trap of hype or fear? Yeah, for me, I just keep looking at the profit motive, right?

[00:28:50] I just always come back to, is this for a purpose higher than getting me into the money stream of this company? And when you start to think about it that way, you can see uses of this stuff that are really cool.

[00:29:11] You know, there are tremendous examples. I've talked to people who are using this to investigate the missing pieces, the gaps in our archaeological record. Or I know a guy who

[00:29:27] is basically trying to get people who have newly arrived in a new part of the country into jobs that the state government of that part of the country can't fill, matchmaking people with those openings so that, rather than having to go and apply for every job and hope for the best, the jobs actually come to that person. These are things that can truly happen here, but these are not money-making enterprises. These are not things that a venture capitalist is going to get excited about.

[00:29:55] And so for me, it is drawing that distinction: where can it truly benefit people, versus benefiting just a small group of shareholders? That's the line that I draw. And finally, before I let you go, I always try to give listeners a valuable takeaway.

[00:30:14] So for any business leader, builder or everyday listener who feels that they're caught inside these feedback loops we've talked about today, is there a practical habit or mindset shift that you think can help them stay more intentional about how they're using technology, rather than being shaped by it? Well, I think it helps to familiarize yourself, even on the most basic level, with this idea that our brains don't reward us for thinking it through.

They are built to reward us for using instinct to make decisions, or for handing those decisions off to anything else we can outsource them to.

[00:30:49] You suddenly realize, oh, the feeling of satisfaction, the dopamine hit I'm getting off of using this stuff, isn't actually a reliable indicator of whether or not it's making my people smarter or better.

[00:31:12] And so one thing that I'm talking about a lot right now is, there's going to be this impulse to do away with the entry-level work that a new university graduate is hired to do in your firm. You bring them in and they're, you know, looking for patterns in the data, or responding to emails, or handling the filing, right?

[00:31:42] All of that kind of stuff that you would hire somebody for, the bookkeeping, the clerical work, the admin work. Company after company after company is going to try and convince you that you don't need those people anymore. And it may be that you don't. But I think one important thing to remember is that in a few years, you're going to need someone who has done all of that stuff in order to have a qualified manager or a qualified mid-level person, the person for that third-level job, not the entry level, not the second job, but that third-level job.

[00:32:09] And so for me, I think it is keeping the books ready for some kind of, call it apprenticeship if you like. In the United States, in the medical field, they call it the residency, right? On-the-job training that gives people practical experience of your firm so that they are valuable to you down the line.

[00:32:30] And that's going to be a really important thing, because I think that between now and five years from now, we could really see a hollowing out of the skills we're going to need at the higher level of the workforce. And that's going to be a really important part of making sure that we don't let the loop just grab onto us and take away some of the skills that we rely on. And I think that is a powerful moment to end on. But finally, obviously, you're a journalist, author, podcaster. You're a busy guy.

[00:32:58] So where can everyone listening find out more information about you, check out your podcast, your book, and continue this conversation we started today? Neil, you're such a gracious host. I am at theripcurrent.com. That is the podcast and the newsletter that I publish most days of the week. And I'm also By Jacob Ward, as if it's my byline, on all of the platforms. For some reason, TikTok is my big platform. I don't know why the kids want to listen to me. And I'm that same handle on YouTube as well.

[00:33:28] And so, yeah, following me in any of those places is a huge favor to me. Thank you. Awesome. I'll have links to everything, including TikTok. If you've mastered the algorithm there, it can't all be bad. So I'll send people your way there. And thank you for joining me today. I'd love to stay in touch with you, see how this evolves throughout 2026, and get you on later next year as well. Let's see how this conversation will evolve. But thanks for joining me today. Thank you, Neil. Appreciate it.

[00:33:53] I think Jacob articulated something that a lot of people feel but struggle to put into words: that the most powerful effect of AI is not what it does for us, but what it quietly does to us, from narrowing creative possibilities to reshaping how we think, decide, and even trust. His perspective cuts through the hype and the fear in a way that feels deeply human.

[00:34:21] And I appreciate his honesty there about the limits of individual resistance, the idea that this is not something we can simply opt out of on our own, but something that requires a collective, systemic response. It felt like an important reframing for anyone wrestling with some of these questions today. So a big thank you to Jacob for challenging us all to think more carefully about where this technology is taking us, and ultimately who gets to decide.

[00:34:48] And everyone listening, thank you for tuning in to another episode. As AI becomes more embedded in our work and personal lives, where do you still insist on thinking for yourself? And where might you be handing those choices over without even realizing it? So much to think about. But let me know. Techtalksnetwork.com. Find lots of ways to connect with me. I'm at Neil C. Hughes on socials. Let me know your thoughts. But that is it for today. So thank you for listening as always.

[00:35:17] And I'll speak with you all again tomorrow. Bye for now.