2882: From Chess Grandmaster to ML Innovator: Tal Shaked's Journey
Tech Talks Daily · May 03, 2024
29:26 · 23.57 MB


Are machines really capable of thinking like humans, or are we merely programming them to mimic our own patterns? Today on Tech Talks Daily, we delve into this intriguing question with Tal Shaked, an American chess grandmaster and Chief Machine Learning Officer at Moloco, a leading machine learning performance solutions innovator.

In a world rapidly transforming through the application of machine learning and artificial intelligence, the distinction between additive and transformative technology becomes blurred. Tal offers a unique perspective on how ML is reshaping not just individual industries but the entire landscape of software engineering. Unlike previous advancements such as SQL and relational databases, ML's evolution seems to forge a new path, one that might redefine the essence of software itself.

Throughout the episode, Tal discusses the underappreciated art of ML engineering—a discipline that extends beyond traditional software engineering to balance robust infrastructure with high-quality outputs. This intricate balance is crucial for building systems that not only perform well but are also trustworthy and free from biases.

The conversation takes a deeper dive into why the tech community has focused intensely on AI, yet the broader concept of 'machine intelligence'—a perspective that acknowledges the fundamental differences between human and computer cognition—remains less discussed. Tal explores this oversight and its implications for future technologies.

We also address the practical aspects of eliminating bias in machine learning models. Tal shares insights into how engineers can refine their approaches to data to ensure fairness and accuracy in AI systems, and the skills that are becoming essential as ML grows more prevalent across sectors.

As we explore these complex themes, we invite you to reflect on the following: How can we ensure that the pursuit of machine intelligence does not lose sight of human values? Share your thoughts and join the conversation on how we can harmonize human creativity with machine efficiency.

[00:00:00] Is machine learning just an extension of traditional software engineering?

[00:00:06] Or is it carving out a new frontier in the tech landscape?

[00:00:11] Well, today here on Tech Talks Daily I'm going to be joined by Tal Shaked, and he is a chess

[00:00:18] grandmaster turned tech visionary at a company called Moloco.

[00:00:22] And he's going to be bringing with him today a unique perspective on the evolution

[00:00:27] of machine learning and its burgeoning role in our digital world.

[00:00:32] And as someone who has mastered the strategic complexities of chess, he now applies his analytical

[00:00:38] prowess to the field of machine learning engineering.

[00:00:42] And he's going to challenge us to rethink how we integrate machine intelligence in a

[00:00:46] human-centric world and discuss the nuanced differences between artificial intelligence

[00:00:52] and machine intelligence.

[00:00:55] We're going to learn a lot today so buckle up and hold on tight as I beam your ears all

[00:00:59] the way to the US.

[00:01:00] Well, Tal is waiting to join us today.

[00:01:04] So a massive warm welcome to the show Tal.

[00:01:07] Can you tell everyone listening a little about who you are and what you do?

[00:01:12] So I was a professional chess player up until the age of 20.

[00:01:15] That was back 25-plus years ago.

[00:01:18] And after that I took my first programming class in 1999.

[00:01:22] And right away I was fascinated with the ability for computers to use storage and compute to

[00:01:27] solve all kinds of problems that were really difficult for humans.

[00:01:30] And I started with computer chess, later wrote programs that played games and quickly

[00:01:35] got fascinated with AI, studied AI more than 20 years ago in grad school and quickly

[00:01:40] discovered that really all the action and innovation was happening in the machine learning space.

[00:01:45] And since then, after doing a bunch of internships and joining Google full time about 20 years

[00:01:52] ago, I've been thinking about how to use machine learning to work on many interesting

[00:01:56] business problems across web search, advertising, recommendations, autonomous vehicles and much more.

[00:02:02] And that journey has taken me to be the chief machine learning officer at Moloco today.

[00:02:08] Moloco is a company with a vision to empower businesses of all sizes to grow through

[00:02:13] operational machine learning.

[00:02:14] Currently we focus on performance advertising.

[00:02:18] And part of my role on the leadership team is to really think about our overall strategy

[00:02:22] in machine learning and to deliver as much value as possible to our customers.

[00:02:27] What a great journey you've been on and I've got to dig into your origin story a little

[00:02:32] bit before we go any further.

[00:02:33] A professional chess player at 20 and then going into the world of tech and programming

[00:02:38] and AI, etc.

[00:02:41] Was there any link between those stories?

[00:02:43] Do you think there's a lot of parallels between those two worlds?

[00:02:47] I think, you know, we'll talk about this probably later: every human is wired in a unique way

[00:02:52] just as computers are wired in a unique way.

[00:02:54] And for whatever reason, you know, I started playing chess at the age of seven.

[00:02:58] I just picked up the game really quickly.

[00:03:01] And you know, a coach said to my parents, hey, your son is

[00:03:05] picking up the game really quickly, maybe you should get him some lessons.

[00:03:08] And after getting some lessons, I ended up, you know, being one of the strongest

[00:03:10] chess players for my age, you know, from eight onwards, and just really enjoyed playing,

[00:03:17] studying and getting better.

[00:03:19] And in a weird way, you know, my mind operated probably a little bit like a machine

[00:03:23] learning system, right?

[00:03:24] Where if I make good moves, I win and get rewarded; if I make bad moves, I lose.

[00:03:28] And how can I keep playing and learning from those mistakes to get better over time?

[00:03:33] And, you know, machine learning is kind of similar, right?

[00:03:36] We have lots of data points with algorithms for how the system learns and it continues.

[00:03:40] And it learns to make better decisions over time.

[00:03:41] And so I think maybe that's how my mind worked.

[00:03:44] I just was really attracted to machine learning after playing chess.
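
The learn-from-rewards loop Tal describes here maps naturally onto a minimal reinforcement-learning sketch. Everything below is an invented illustration (the "moves" and their win rates are made-up values, not anything from the episode):

```python
import random

# A tiny epsilon-greedy agent: it tries moves, observes win/loss rewards,
# and gradually prefers the moves that win more often -- the "good moves
# get rewarded, bad moves lose" loop described above.

TRUE_WIN_RATE = {"aggressive": 0.3, "solid": 0.6, "passive": 0.4}

def play(move, rng):
    """Simulate one game: return 1 for a win, 0 for a loss."""
    return 1 if rng.random() < TRUE_WIN_RATE[move] else 0

def train(games=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    value = {m: 0.0 for m in TRUE_WIN_RATE}   # estimated win rate per move
    count = {m: 0 for m in TRUE_WIN_RATE}
    for _ in range(games):
        if rng.random() < epsilon:             # explore occasionally
            move = rng.choice(list(TRUE_WIN_RATE))
        else:                                  # otherwise exploit best estimate
            move = max(value, key=value.get)
        reward = play(move, rng)
        count[move] += 1
        # incremental average: nudge the estimate toward the observed reward
        value[move] += (reward - value[move]) / count[move]
    return value

estimates = train()
best = max(estimates, key=estimates.get)
```

With enough games the agent's estimates settle near the true win rates, and it ends up preferring the move that wins most often.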

[00:03:48] It's an incredibly cool story.

[00:03:50] And one of the things that put you on my radar, when I was reading about you,

[00:03:53] was that you said you consider machine learning engineering to be a

[00:03:57] superset of software engineering.

[00:03:59] So can you elaborate on that?

[00:04:01] And also the kind of challenges that might present in terms of balancing

[00:04:05] infrastructure with quality, because I would imagine it's quite a fine balancing act.

[00:04:10] Yeah. So first of all, ML engineering has many definitions.

[00:04:14] So for me, I consider it to be the discipline of building ML powered software

[00:04:19] and products. And from that perspective, it's, you know, by definition,

[00:04:23] a superset of software engineering. To contrast this, I think of software

[00:04:26] engineering as typically referring to building traditional software

[00:04:29] products that are deterministic and predictable.

[00:04:32] You know, for example, building a data warehouse or payment system,

[00:04:36] that has clearly defined requirements and expectations, and either

[00:04:39] it works or it doesn't. Right?

[00:04:41] Now, when you look at ML engineering, which builds upon this,

[00:04:44] you're building products like web search or recommendation systems.

[00:04:49] And as we've seen, these systems are much more

[00:04:53] non deterministic and a bit unpredictable.

[00:04:55] And they kind of change as the user engages with them.

[00:04:58] And that's, you know, in my opinion, a really different kind of discipline.

[00:05:01] There's a lot of data science, dynamical systems, experimentation.

[00:05:06] In addition to all the software you need to actually build out these systems.

[00:05:09] And so the difference is, you know, in traditional software, you know,

[00:05:12] it's again deterministic, predictable.

[00:05:15] You just build a system so it works a certain way.

[00:05:18] While in these ML engineering problems like web search or chatbots,

[00:05:22] there's a quality component to it. Is the system creative enough?

[00:05:25] Is it hallucinating? Is hallucinating good or bad?

[00:05:28] And how does that, you know, depend on different users?

[00:05:31] So that's the quality perspective.

[00:05:32] I think that's what really makes these products difficult to design and build well.

[00:05:37] And I was also really curious about how you view the role of machine learning

[00:05:41] as transformative rather than merely additive,

[00:05:45] especially in the context of software development.

[00:05:47] So can you expand on that and maybe some of the implications

[00:05:51] that it might have for the future of the industry?

[00:05:53] Because the pace of change is moving at breakneck speed at the moment.

[00:05:57] It feels like everything is evolving before our very eyes.

[00:06:00] Yeah, so, you know, I joined Google 20 years ago

[00:06:03] and I was super excited to use machine learning to solve any problem I could find.

[00:06:07] But in many of these cases, those products and problems

[00:06:09] were kind of already well defined.

[00:06:11] And they're really just what I call additive ways of adding machine learning.

[00:06:14] Like, take, for example, web search.

[00:06:15] You know, 20 years ago, web search was given a query,

[00:06:18] show a bunch of blue links and try to rank those blue links as well as possible.

[00:06:23] And so the additive way to add machine learning was to say,

[00:06:26] hey, instead of having a human write a function to rank those blue links,

[00:06:29] can we use a machine learning system to decide

[00:06:32] which of those blue links is the most relevant for a query?

[00:06:35] So that's an example, kind of an additive way to add machine learning.
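
The "additive" step described here can be sketched in a few lines: a hand-written ranking function gets replaced by one learned from labeled data. This is a minimal illustration only; the features, weights, and training examples are all invented, and real search ranking is vastly more complex:

```python
import math

def hand_written_score(doc):
    # Old approach: a human writes the ranking logic directly,
    # e.g. a heuristic that trusts link counts heavily.
    return 1.0 * doc["title_match"] + 3.0 * doc["link_count"]

def learned_score(doc, weights):
    # ML approach: same features, but the weights are learned from data.
    z = sum(weights[k] * doc[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))  # estimated probability of relevance

def train(examples, epochs=200, lr=0.5):
    """Fit logistic-regression weights by gradient descent on (doc, label) pairs."""
    weights = {"title_match": 0.0, "link_count": 0.0}
    for _ in range(epochs):
        for doc, relevant in examples:
            p = learned_score(doc, weights)
            for k in weights:  # nudge each weight to reduce the prediction error
                weights[k] += lr * (relevant - p) * doc[k]
    return weights

# Toy click data: title matches predict relevance, raw link counts do not.
examples = [
    ({"title_match": 1.0, "link_count": 0.2}, 1),
    ({"title_match": 0.0, "link_count": 0.9}, 0),
    ({"title_match": 1.0, "link_count": 0.1}, 1),
    ({"title_match": 0.0, "link_count": 0.8}, 0),
]
weights = train(examples)
docs = [{"title_match": 0.0, "link_count": 1.0},
        {"title_match": 1.0, "link_count": 0.0}]
ranked = sorted(docs, key=lambda d: learned_score(d, weights), reverse=True)
```

Under the hand-written rule the link-heavy page would win; the learned weights instead put the title-matching page first, because that is what the (toy) click data supports.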

[00:06:38] Now imagine that you have much more powerful machine learning.

[00:06:41] How can you actually rethink the product experience

[00:06:44] so it's more of a transformative change?

[00:06:46] And that's what we're seeing today with chatbots.

[00:06:48] Now the search experience isn't really just a query and a bunch of blue links

[00:06:52] anymore; we have these new capabilities that allow us to have a conversation

[00:06:56] and you can just extract the information from the web

[00:06:59] and present it to the user in a much more natural way.

[00:07:01] So that's one example of sort of additive versus transformative.

[00:07:04] Another one is advertising, right?

[00:07:06] Advertising has traditionally been more brand-based advertising

[00:07:10] and, you know, you show an ad to users and hopefully that leads

[00:07:13] to better outcomes for your products.

[00:07:15] And then, you know, we started having ads at Google, right?

[00:07:18] And we might try to optimize for clicks

[00:07:21] and then later we try to optimize for conversions

[00:07:23] and then we later optimize for lifetime value for a user.

[00:07:26] But, you know, there were still a lot of manual inputs

[00:07:28] like keywords and targeting criteria, what audiences you want to reach.

[00:07:33] But what's changing with machine learning is instead of saying,

[00:07:36] give me all this data and I'll match ads to users a little bit better.

[00:07:39] Just tell me what your objective is.

[00:07:40] You know, are you trying to find users that are most valuable for you

[00:07:43] and then let the machine learning system from the ground up

[00:07:45] figure out how to find those users that perform really well.

[00:07:49] So that completely changes the product experience

[00:07:52] for how advertisers provide input and how the system optimizes.

[00:07:56] With the evolution of technology, particularly AI,

[00:08:00] how do you anticipate the landscape of software engineering

[00:08:04] will change over the next decade?

[00:08:05] And as I asked that, I know it's an impossible question

[00:08:08] because of just how much change we've seen in the last 18 months alone.

[00:08:11] But are there any trends or hints that you're seeing

[00:08:14] on how this will evolve in the years ahead?

[00:08:17] I remember having these discussions 16 years ago, right?

[00:08:20] And back in those days, you know,

[00:08:22] this kind of state-of-the-art machine learning was maybe

[00:08:25] large-scale logistic regression, gradient boosted decision trees.

[00:08:29] And at the time we were probably training on a trillion examples,

[00:08:32] had models with 10 billion parameters,

[00:08:35] and they were used for, you know,

[00:08:37] performance advertising, web search, recommendation systems.

[00:08:42] And some people wondered,

[00:08:44] are we going to standardize machine learning

[00:08:46] the way we standardize databases,

[00:08:48] where we have SQL for databases?

[00:08:50] and machine learning will be a bunch of, you know,

[00:08:53] similar tools that everyone knows how to use.

[00:08:56] And then deep learning came around and everything really changed a lot.

[00:08:59] So I think it's extremely hard to predict

[00:09:01] how machine learning is going to evolve.

[00:09:03] In my opinion, you know, I think machine learning is going to evolve

[00:09:06] kind of like software has over the last 50 years.

[00:09:08] And so in that perspective, I think of machine learning

[00:09:11] as really the future of software, right?

[00:09:14] This doesn't mean that every engineer

[00:09:15] is going to spend all their time just building models,

[00:09:18] but instead we're going to rethink sort of our design patterns,

[00:09:21] how we build software to support machine learning powered products.

[00:09:26] Right? And this will change the way we manage our data,

[00:09:29] the way we test and evaluate products,

[00:09:32] the way we do iterations, learn from our users.

[00:09:36] And we've kind of already seen this with the big tech companies

[00:09:39] who have been doing this for I think about 10 to 20 years.

[00:09:41] And I think now with more

[00:09:44] machine learning technologies being open sourced,

[00:09:46] more compute capabilities being available in the cloud,

[00:09:49] many of the smaller companies, you know, particularly like Moloco,

[00:09:51] are really building these operational ML systems.

[00:09:55] And it takes a lot of work

[00:09:56] and we're gradually making it easier and easier over time.

[00:10:00] And you strike me as someone with a strong eye for detail.

[00:10:03] And one of the reasons I say that is I read that you mentioned before

[00:10:07] that there's a general lack of appreciation

[00:10:09] for what it truly means to do quality work in ML engineering.

[00:10:13] So can you discuss what quality looks like in this field

[00:10:17] and how it can be assessed and maintained?

[00:10:20] Because I get the impression

[00:10:21] this is something you're very passionate about, right?

[00:10:24] Yeah. No, I think ML engineering as a discipline

[00:10:27] is a superset of software engineering.

[00:10:29] We talked about that earlier.

[00:10:30] So let me just kind of illustrate this with a few examples.

[00:10:33] So probably the biggest learning experience

[00:10:35] I had about machine learning was when I started working

[00:10:39] on web search, trying to convince everyone that they should

[00:10:42] use machine learning to rank results rather than thousands

[00:10:45] of lines of if-else logic.

[00:10:47] And of course, most people weren't bought into this back then.

[00:10:50] And they had really good reasons.

[00:10:51] They said, what happens if the results are bad?

[00:10:53] How do we change the system to get it to do what we want to do

[00:10:56] so that it has a better user experience, higher quality results?

[00:10:59] So let me give you a concrete example that's kind of amusing.

[00:11:02] So if you search for the query miserable failure

[00:11:06] a long, long time ago, if you did that,

[00:11:07] you'd probably get some definition of miserable failure.

[00:11:10] Maybe there's a video called miserable failure.

[00:11:13] And then later you actually got a result

[00:11:16] that was the homepage of George Bush.

[00:11:18] Now, how is it that the query miserable failure returns

[00:11:21] the webpage of George Bush?

[00:11:23] Is that a good result or a bad result?

[00:11:25] Well, it turns out that users were able to reverse engineer

[00:11:29] how the Google ranking system worked and were able to create

[00:11:33] web pages and links with the words miserable failure

[00:11:36] that pointed back to the website of George Bush

[00:11:39] and the system thought, oh, that looks like

[00:11:41] a really relevant result based on what I see in the web.

[00:11:44] So that result got shown. And after that happened,

[00:11:46] of course, people changed the algorithm manually to avoid that.

[00:11:50] And then if you searched Google for miserable failure,

[00:11:52] you got a new result about something called Google bombing,

[00:11:54] which was the technique users used to pull this off.

[00:11:57] So the point is that for the query miserable failure,

[00:11:59] the best results kept changing over time as the world changed.

[00:12:02] And that's really a quality problem.

[00:12:04] It's not just that there's an engineering challenge of how

[00:12:06] to build a system that works correctly.

[00:12:07] It's what is the intent of the user and how do I give them

[00:12:10] the right results?

[00:12:11] And when I don't give those right results, how do I change

[00:12:14] the features, the algorithms, whatever it might be, to get that?

[00:12:18] And pretty much I think every modeling problem has that.

[00:12:21] You know, we see this today with chatbots, right?

[00:12:24] Are hallucinations good or bad?

[00:12:25] Well, it really depends on the context.

[00:12:28] Is the system sounding too robotic or is it interesting

[00:12:31] and creative?

[00:12:31] Do I want to spend more time talking with it?

[00:12:33] And those are really quality problems because you can't

[00:12:35] just get a list of requirements, build a system

[00:12:37] and say, I'm done.

[00:12:38] You really need to sort of iterate with people, with the real

[00:12:41] world to gradually improve the quality of the overall

[00:12:45] experience.

[00:12:46] And if we scroll down our news feeds right now, there are

[00:12:49] so many different stories and we're bombarded daily with

[00:12:52] talk of AI, ML and even AGI dominating the conversation

[00:12:57] now.

[00:12:57] But the term machine intelligence is not as commonly

[00:13:01] discussed compared to ML and AI, for example.

[00:13:04] So what is your definition of machine intelligence?

[00:13:08] And why do you think the tech community should shift its

[00:13:11] focus more towards this concept?

[00:13:13] Because it's something we don't hear about enough.

[00:13:16] Yeah, this is a very interesting question.

[00:13:19] You know, technically AI artificial intelligence is

[00:13:22] a field that was defined a long time ago as a

[00:13:25] subset of computer science.

[00:13:29] But bear with me and take it more literally, which is

[00:13:31] artificial intelligence.

[00:13:32] What does intelligence mean?

[00:13:33] According to Google, it's the ability to acquire and apply

[00:13:36] knowledge and skills.

[00:13:37] And clearly the way that humans do that and the way

[00:13:40] that machines do that is different.

[00:13:42] Similarly, you know, what's the definition of artificial?

[00:13:44] you know, well, according to Google, it's made or produced

[00:13:47] by human beings rather than occurring naturally.

[00:13:50] This makes me wonder. Well, yeah, we

[00:13:52] consider playing chess to be some form of intelligence,

[00:13:55] but I can tell you from personal experience that there

[00:13:57] was nothing natural about becoming a chess

[00:13:59] grandmaster.

[00:13:59] It took a lot of very hard work, very deliberate

[00:14:03] practice to kind of reach that.

[00:14:06] And you know, I wouldn't say that humans are

[00:14:08] designed to do that.

[00:14:09] Well, I went sort of out of my way to learn how to

[00:14:11] do that.

[00:14:12] And so to me, I think really the distinction I

[00:14:14] think about is, you know, human intelligence versus

[00:14:17] machine intelligence.

[00:14:19] And the point here is that humans and machines,

[00:14:21] they learn in very different ways.

[00:14:23] And similarly, they have very different strengths.

[00:14:25] For example, computers have almost infinite storage

[00:14:28] and almost, you know, very, very fast computational

[00:14:31] capabilities, much more so than humans.

[00:14:34] In fact, we see this, right?

[00:14:35] If you want to have a system that can look at pictures

[00:14:38] of dogs and classify them into the right breeds,

[00:14:41] that's hard for humans.

[00:14:42] It's, you know, nowadays it's almost trivial for a

[00:14:44] machine, right?

[00:14:45] It's a different kind of problem.

[00:14:46] Same thing like basic computation, right?

[00:14:48] So to me, I think these are just really different

[00:14:50] kind of intelligences.

[00:14:52] And what we've been working on, you know, over the

[00:14:54] last, I mean, I've been involved in the last 20

[00:14:56] years is how do you teach or use machine learning

[00:15:00] to leverage the different kinds of intelligence

[00:15:02] that computers have so that they can solve or make

[00:15:06] progress on very interesting problems, particularly

[00:15:08] problems that humans are interested in.

[00:15:09] Right?

[00:15:10] And that's what we're seeing now is we're seeing

[00:15:12] many advances in machine learning technologies.

[00:15:15] And that's enabling us to better leverage,

[00:15:19] you know, the intelligence of computers to

[00:15:21] create new forms of machine intelligence that

[00:15:23] now humans can engage with in interesting ways,

[00:15:25] which we see through generative AI and other

[00:15:28] kind of applications.

[00:15:29] But behind the scenes in the past, machine

[00:15:31] learning was creating a ton of value,

[00:15:34] again, in performance advertising and

[00:15:37] recommendation systems and classification and

[00:15:41] other optimization fields.

[00:15:42] But those were often behind the scenes in a way

[00:15:44] that wasn't surfaced to everyone, like

[00:15:47] we see with generative AI.

[00:15:48] And considering the unique attributes of

[00:15:51] computers versus human intelligence, what do

[00:15:54] you think are some of the key areas where

[00:15:57] machine intelligence could possibly uniquely

[00:16:00] contribute beyond human capabilities?

[00:16:02] Is anything that excites you there that you

[00:16:04] can talk about and share?

[00:16:06] Yeah.

[00:16:07] So, I mean, we're already seeing a lot of this

[00:16:09] because I think, as I mentioned, the ability

[00:16:12] to manage a lot more information and

[00:16:15] being able to compute much more quickly,

[00:16:17] we're seeing computers or machine

[00:16:19] intelligence being able to solve problems

[00:16:22] or assist humans in solving problems better

[00:16:24] than humans can do on their own.

[00:16:26] Right?

[00:16:26] And I'll give an example again for one from

[00:16:28] chess, just because it's very close to home.

[00:16:31] Right?

[00:16:31] So when I started playing chess, there weren't

[00:16:33] really any good chess computers.

[00:16:35] You know, I had a few, I learned how to

[00:16:37] beat them and then it got kind of boring.

[00:16:39] Right?

[00:16:39] Then over time through machine learning,

[00:16:41] we generated computers that learned how to

[00:16:44] play chess through self-play and became

[00:16:46] extremely strong, much stronger than the

[00:16:48] strongest humans.

[00:16:49] And what we're seeing today is that

[00:16:51] because we have these chess-playing computers

[00:16:53] that are much stronger than before,

[00:16:54] humans themselves are becoming much better

[00:16:56] at chess at earlier ages.

[00:16:58] And as they get older, they become

[00:17:00] even better chess players now than

[00:17:02] we had in the past.

[00:17:02] Right?

[00:17:03] And why is this?

[00:17:04] Well, it turns out that, you know,

[00:17:06] because chess computers are able to

[00:17:07] play better moves and sort of intuitively

[00:17:09] explain those moves to a human by

[00:17:11] playing out different ideas,

[00:17:13] humans are actually able to learn

[00:17:15] chess better and faster than they were before.

[00:17:17] Right?

[00:17:18] So that's a simple case of, you know,

[00:17:20] computers or machine intelligence

[00:17:22] helping humans become better at something.

[00:17:24] But I think what's much more interesting to

[00:17:25] me, you know, beyond

[00:17:27] playing games, is other fields.

[00:17:29] Right?

[00:17:30] So, you know, in advertising,

[00:17:33] I think that this is really the ability

[00:17:34] to match users with products

[00:17:37] that they'd be interested in.

[00:17:39] And this is a challenging problem.

[00:17:40] There are so many products out there.

[00:17:42] People are so busy.

[00:17:43] People spend time on the web searching for stuff.

[00:17:45] It's hard to discover everything.

[00:17:47] And so advertising is kind of a discovery

[00:17:49] mechanism to help these users

[00:17:51] find those products that they'll be interested in.

[00:17:53] And the number of products out there

[00:17:55] and the way you personalize to users,

[00:17:57] it's just immense.

[00:17:57] You know, this is a problem that humans

[00:17:59] can't really solve.

[00:18:00] But we saw this, right?

[00:18:01] They tried to do brand advertising,

[00:18:02] audience based targeting.

[00:18:04] And those approaches simply just do not work

[00:18:06] as well as performance advertising.

[00:18:07] And I personally have gotten better

[00:18:09] products as a result of this.

[00:18:10] And so has my wife.

[00:18:11] You know, another area I think looking

[00:18:12] forward that I'm really excited about,

[00:18:14] I would say, you know, is health care

[00:18:17] and education. Right?

[00:18:17] I mean, we know with health care that

[00:18:20] there's so much data out there

[00:18:22] about different kinds of diseases,

[00:18:24] illnesses, symptoms.

[00:18:25] And there's no doctor that's able

[00:18:27] to understand all that and retrieve

[00:18:29] that information instantly and apply it

[00:18:31] to any patient they see.

[00:18:34] But machine learning and, you know,

[00:18:36] machine intelligence can actually do this over time.

[00:18:38] I think, and we're making progress.

[00:18:40] Right? We're already seeing many cases

[00:18:41] in terms of understanding DNA,

[00:18:43] making diagnoses and even some stories

[00:18:46] about, you know, chatbots helping diagnose

[00:18:48] problems for dogs or people, you know,

[00:18:50] better than doctors or vets could.

[00:18:52] And so I think health care is going to be

[00:18:54] an area where this ability to process

[00:18:57] all kinds of data and make better sort

[00:18:59] of inferences on diagnoses

[00:19:01] is really going to be transformed.

[00:19:03] Another one is education.

[00:19:04] Right? You know, I think many studies

[00:19:06] have shown that, you know,

[00:19:08] every person is different.

[00:19:09] If you teach through the average,

[00:19:11] you're probably not teaching any single

[00:19:12] person as well as possible for how they learn.

[00:19:15] And in fact, I think private tutoring is one

[00:19:17] of the best ways that people can learn.

[00:19:18] And I went through that.

[00:19:20] I mean, I'm obviously in chess,

[00:19:21] but even when I was younger, you know,

[00:19:22] I think I had trouble reading

[00:19:24] and I had some private tutoring there.

[00:19:25] I just think my mind was wired a bit differently.

[00:19:28] But imagine what we can do if we had a,

[00:19:32] you know, machine intelligence or, you know,

[00:19:34] intelligent system that could customize

[00:19:36] the way they teach to every single person or kid.

[00:19:39] I think people could learn much faster

[00:19:41] and learn the way that's best for them.

[00:19:43] So I think those are just two areas.

[00:19:44] But really, I think every industry

[00:19:47] is going to be transformed through kind of new

[00:19:49] capabilities of machine intelligence,

[00:19:51] especially humans and machines working together.

[00:19:55] Does that resonate with you?

[00:19:56] Love that.

[00:19:57] At Moloco, your mission is to empower

[00:19:59] businesses of all sizes to grow

[00:20:01] through operational machine learning.

[00:20:03] So I've got to ask, how does Moloco

[00:20:06] integrate these advanced concepts of ML

[00:20:09] and machine intelligence into your products

[00:20:11] and services?

[00:20:12] And what kind of challenges have you encountered

[00:20:14] in this integration?

[00:20:16] Yeah.

[00:20:16] So this is a great question.

[00:20:18] I mean, this really gets at the core

[00:20:20] of what it means to be an ML first company.

[00:20:23] Right.

[00:20:24] So there's a few parts, right?

[00:20:25] There's, you know, we can talk about

[00:20:27] what we actually do from a machine learning

[00:20:28] perspective to deliver performance advertising.

[00:20:31] And, you know, roughly speaking,

[00:20:33] we want to show, you know, the right ad

[00:20:35] to the right user at the right time for the right price

[00:20:38] to kind of maximize, you know,

[00:20:40] the relevance to the user so they get value out of

[00:20:43] possibly clicking on that ad and buying that product

[00:20:46] and also to the advertisers themselves

[00:20:48] so they get more high value users.

[00:20:50] Right.

[00:20:51] So how do we do this?

[00:20:52] We need to build models.

[00:20:53] We need to build models that understand

[00:20:55] what that product is about, that understand

[00:20:57] the interests of the users that can predict

[00:21:01] whether if we show this ad to this user

[00:21:04] will they in our case install that app.

[00:21:06] And if they install that app, will they engage

[00:21:09] with that app a lot, find value with that app

[00:21:11] and maybe take actions on that app.

[00:21:13] If they do that, how much will they use

[00:21:16] that app over time?

[00:21:16] How much value would they create for themselves?

[00:21:18] And also how much value will they create for the

[00:21:20] for the advertiser?

[00:21:21] And so we build models that make all of those predictions

[00:21:24] and by making those predictions, we're able to

[00:21:27] target relevant ads to those users

[00:21:29] and ensure that users will find a lot of value

[00:21:32] from the products that are advertised.

[00:21:35] So yeah, so that's kind of concretely

[00:21:36] how we use machine learning to create value

[00:21:39] and, again, enable those businesses to grow.
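The pipeline Tal describes — predicting install probability, engagement, and downstream value, then combining those predictions to pick the most relevant ad — can be sketched roughly like this. All names, model outputs, and numbers here are illustrative placeholders, not Moloco's actual models or values:

```python
from dataclasses import dataclass

@dataclass
class AdCandidate:
    """One (ad, user) pairing with hypothetical model predictions."""
    ad_id: str
    p_install: float      # predicted probability the user installs the app
    p_engage: float       # predicted probability of meaningful engagement after install
    predicted_ltv: float  # predicted long-term value of the user to the advertiser

def expected_value(c: AdCandidate) -> float:
    # Chain the predictions: value only accrues if the user
    # first installs and then actually engages with the app.
    return c.p_install * c.p_engage * c.predicted_ltv

def pick_ad(candidates: list[AdCandidate]) -> AdCandidate:
    # Show the ad with the highest expected value for this user.
    return max(candidates, key=expected_value)

candidates = [
    AdCandidate("puzzle_game", p_install=0.10, p_engage=0.50, predicted_ltv=8.0),
    AdCandidate("fitness_app", p_install=0.02, p_engage=0.80, predicted_ltv=40.0),
]
best = pick_ad(candidates)
print(best.ad_id, expected_value(best))
```

Note how the lower-install-rate ad can still win when its predicted engagement and long-term value are high enough — which is why the models have to predict the whole chain, not just the click or install.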

[00:21:42] But beyond that, there's how do you

[00:21:44] organize your teams?

[00:21:46] How do you build the right kind of environment,

[00:21:48] the right infrastructure, the right processes

[00:21:50] so that you can build the ML powered products

[00:21:52] really quickly and efficiently?

[00:21:54] And that's something else that I've put a lot of

[00:21:56] thought into, having, you know, worked at many companies,

[00:21:59] you know, not just Google, but also Lyft and Snowflake.

[00:22:02] I think this is the challenge that

[00:22:04] I guess I've been working on for the last 20 years.

[00:22:07] You know, it started in a project

[00:22:09] called Sibyl at Google, where we wanted to just

[00:22:12] build the world's best large-scale

[00:22:14] machine learning system, and we realized that

[00:22:16] we could build more of a machine learning platform,

[00:22:19] an environment that makes it easy for people

[00:22:21] to develop machine learning powered products.

[00:22:24] And that became a system that is used by

[00:22:27] hundreds of teams.

[00:22:28] And then when deep learning came around,

[00:22:29] we evolved that into TensorFlow Extended.

[00:22:31] But that was really built primarily for Google, right?

[00:22:34] I think now that I'm outside the Google bubble,

[00:22:36] working with open source,

[00:22:38] cloud technologies, it's really interesting to think about,

[00:22:41] well, how do we build the right kinds of

[00:22:44] machine learning platforms

[00:22:46] for companies like Moloco to build ML-powered products

[00:22:50] and do that really quickly and efficiently?

[00:22:52] And so I think that's, at a more abstract level,

[00:22:55] really what I'm excited about, what I've been working on for 20 years.

[00:22:58] and you know, one of the really interesting things about

[00:23:01] the work at Moloco is I'm doing that,

[00:23:02] but focusing initially in the context of performance advertising.

[00:23:05] But over time, hopefully that will expand to other

[00:23:09] really interesting applications of machine learning

[00:23:10] that will create a lot of business value for future customers.

[00:23:14] Well, we started this podcast talking about your origin story,

[00:23:18] a 20-year-old professional chess player, before you embarked

[00:23:22] on your journey in tech.

[00:23:23] And as we come full circle, I want you to look back now

[00:23:26] over your shoulder at your entire career

[00:23:29] and share the funniest or most interesting story

[00:23:32] that has happened in your career,

[00:23:34] because I would imagine that in that time

[00:23:36] you've picked up more than a few of those,

[00:23:37] but you probably can't share too many of them.

[00:23:39] Is there one that we can share today?

[00:23:42] You know, yeah, I'll share sort of

[00:23:46] my breakthrough into sort of like this,

[00:23:48] the Silicon Valley and high tech companies like Google.

[00:23:52] So, you know, as an undergrad,

[00:23:53] I went to the University of Arizona.

[00:23:55] My dad was a professor there.

[00:23:57] And the University of Arizona isn't

[00:23:59] particularly known for its computer science department.

[00:24:01] Maybe it's known more for being a party school.

[00:24:04] And so it was a little bit difficult for me to kind of break

[00:24:07] into Silicon Valley and the top tech companies.

[00:24:10] I kept applying to different opportunities.

[00:24:13] And I'd say the closest I got was I did an internship

[00:24:15] at Hewlett-Packard in Boise, Idaho in 2001.

[00:24:19] And in 2002, I applied to a bunch of places

[00:24:21] and again, I was about to go do a second internship at HP.

[00:24:24] And then one day someone from Intel called.

[00:24:27] They didn't reach me. They reached my mom.

[00:24:29] They said, "I saw your son's resume,

[00:24:32] I noticed he's a professional chess player,

[00:24:34] or was a professional chess player.

[00:24:36] I want to talk to him about an internship."

[00:24:39] And my mom was super excited because she knew I wanted to go to Silicon Valley

[00:24:42] and actually talked to this person for a long time.

[00:24:45] She even started singing to this person in French,

[00:24:47] thinking that would help increase the chances that I'd get an internship there.

[00:24:50] I don't know if it increased the chances.

[00:24:52] But anyway, eventually I got the message

[00:24:54] and I ended up talking to this person.

[00:24:56] And the person chatted with me, asked me a bunch of questions.

[00:24:58] This is Intel. We're doing chip design.

[00:25:01] They said, you know, Tal, you sound really smart.

[00:25:03] You know, you're a professional chess, you were a professional chess player.

[00:25:05] That's that's cool.

[00:25:07] But, you know, I've got all these other candidates

[00:25:09] that are doing PhDs and research in exactly this field.

[00:25:13] I just don't know if you're going to be the right fit for this.

[00:25:15] So we chatted for a while, and at the end they said, you know,

[00:25:17] I'm going to take a chance on you.

[00:25:18] And so that was my internship,

[00:25:19] my first internship in Silicon Valley at Intel.

[00:25:22] And it was all because I was a chess player,

[00:25:23] and that kind of differentiated me from other people.

[00:25:27] And then in 2003, pretty much the same thing happened.

[00:25:30] I applied to Google and Google was a small company back then.

[00:25:33] I think they had 30 interns total.

[00:25:36] And the person that I spoke with,

[00:25:38] who actually became my mentor, he said, hey, you know,

[00:25:40] I'm a pretty serious go player.

[00:25:43] It's really cool that you've been a professional chess player.

[00:25:45] I mean, I've never had an intern like that before.

[00:25:47] We'd love for you to do an internship at Google.

[00:25:49] And that's kind of how I got my second opportunity

[00:25:52] in Silicon Valley at a top tech company.

[00:25:55] And so, you know, that was, I mean,

[00:25:57] chess was really the differentiator.

[00:25:58] And what's amusing about this is, you know,

[00:26:00] things have changed so much now when I talk to kids

[00:26:02] trying to figure out how to get into college and get internships.

[00:26:05] It's like, oh, everyone's getting 1600 SATs.

[00:26:07] It's hard to differentiate myself.

[00:26:09] And so it's like, well, back when I did this,

[00:26:11] it was playing chess, you know, I don't know.

[00:26:13] Today it's a completely different challenge,

[00:26:15] but had it not been for chess,

[00:26:16] Yeah, who knows?

[00:26:17] I might be doing something very different,

[00:26:20] not in the machine learning space, not in Silicon Valley.

[00:26:22] Wow, what an incredible story.

[00:26:24] It reminds me of the Steve Jobs quote,

[00:26:26] you know, where you can't join up the dots looking forward.

[00:26:29] When you were a 20-year-old chess player,

[00:26:31] you wouldn't have been able to do that.

[00:26:32] But looking back now, you can join up those dots easily.

[00:26:35] A great story.

[00:26:36] And before I let you go, for anyone listening,

[00:26:38] just wanting to find out more information

[00:26:40] about Moloco, the work you're doing,

[00:26:42] connect with you or your team,

[00:26:44] where's the best starting point for everything?

[00:26:46] You know, in terms of Moloco,

[00:26:47] I mean we have, you know, a website online.

[00:26:50] I think it describes some of our products.

[00:26:51] In terms of reaching out to me,

[00:26:53] you know, I'm on LinkedIn,

[00:26:54] I engage with other people there.

[00:26:55] I'm also quite interested in sort of the entrepreneurial space,

[00:26:59] you know, people from many different companies

[00:27:01] founding new companies,

[00:27:03] developing new ML-powered products.

[00:27:06] So I'm actually a bit of an angel investor

[00:27:08] in that space as well.

[00:27:09] And so I'm happy to hear about what people are working on

[00:27:12] and seeing how I can help.

[00:27:13] Well, I will add links to both the website

[00:27:17] and your LinkedIn

[00:27:18] to make it easy for anybody listening to connect with you

[00:27:21] and carry on this conversation we started today.

[00:27:23] And we covered so much, from machine learning

[00:27:27] as a discipline, to the future of software,

[00:27:30] people talking about ML and AI,

[00:27:33] but no one talking about machine intelligence.

[00:27:35] It's just such a rich topic

[00:27:36] that we could talk about for hours.

[00:27:38] But more than anything,

[00:27:39] just thank you for taking the time out of your busy day

[00:27:41] to sit down and share your fantastic insights

[00:27:44] and great story too.

[00:27:45] Thanks again.

[00:27:46] Thank you, Neil.

[00:27:47] I really enjoyed the conversation.

[00:27:49] And yeah, you asked some really great questions.

[00:27:51] And as you can tell, I'm really passionate about this space

[00:27:54] and love engaging with anyone on these topics.

[00:27:57] So as we conclude today's deep dive,

[00:28:00] I think we touched on a few provocative ideas.

[00:28:04] Ideas about the future of machine learning

[00:28:06] and its distinction from traditional software practices.

[00:28:10] And I think his insights remind us

[00:28:14] that as technology evolves,

[00:28:17] so must our understanding and methodologies.

[00:28:20] So the big questions are what implications

[00:28:23] does this shift have for future innovations?

[00:28:26] How should industries adapt

[00:28:28] to embrace the full potential of machine intelligence

[00:28:31] without falling into the trap of human centric biases?

[00:28:36] It's a big topic, this one,

[00:28:37] and one we can't solve in a 30 minute podcast.

[00:28:39] But I'll happily carry on the conversation with you.

[00:28:42] You can email me at techblogwriter@outlook.com,

[00:28:45] Twitter, LinkedIn, Instagram, just @NeilCHughes.

[00:28:49] Let's keep this conversation going.

[00:28:51] But I'm afraid we've reached the end of our time together today.

[00:28:54] So share your views.

[00:28:56] Join me tomorrow morning on Tech Talks Daily as once again,

[00:29:00] we'll continue to explore the cutting edge intersections

[00:29:03] of technology and most importantly, human ingenuity.

[00:29:08] Well, thank you for listening.

[00:29:10] And until next time, don't be a stranger.