AI is quickly moving from boardroom buzzword to boardroom headache. Enterprises are waking up to the fact that bringing large language models in-house is not just about performance or cost, but about control, accountability, and trust. In this episode of Tech Talks Daily, I sit down with Octavian Tanase, Chief Product Officer at Hitachi Vantara, to unpack what this shift really means for business and technology leaders.
Octavian explains why governance has become the defining challenge of the AI era. Companies are under pressure not only to innovate but also to meet new regulatory demands and maintain trust with customers. That requires more than patching together tools or hoping for transparency from public AI providers. It means creating governance frameworks that deliver traceability, auditability, and explainability as standard practice, not as afterthoughts.
We explore why vector databases may need something like a time-machine capability to document when and how information is added, giving enterprises a provable audit trail. This level of accountability supports both internal oversight and external compliance, turning abstract AI ethics debates into real operational requirements.
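As a rough illustration of that time-machine idea, here is a minimal Python sketch. The `AuditedVectorStore` and `IngestionEvent` classes are invented for illustration only; they do not model any real vector-database API, but show the principle of recording an append-only audit trail alongside every ingestion:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

# Hypothetical sketch: these classes are stand-ins, not a real vector-DB API.

@dataclass
class IngestionEvent:
    """One provable record of data entering the vector store."""
    source: str     # where the document came from
    doc_id: str     # identifier of the ingested artifact
    timestamp: str  # when it was added (UTC, ISO 8601)

@dataclass
class AuditedVectorStore:
    """Wraps inserts with an append-only audit trail."""
    vectors: dict = field(default_factory=dict)
    audit_log: List[IngestionEvent] = field(default_factory=list)

    def ingest(self, doc_id: str, embedding: list, source: str) -> None:
        # Record the event before mutating state, so every insert is traceable.
        event = IngestionEvent(source=source, doc_id=doc_id,
                               timestamp=datetime.now(timezone.utc).isoformat())
        self.audit_log.append(event)
        self.vectors[doc_id] = embedding

    def provenance(self, doc_id: str) -> List[IngestionEvent]:
        """Answer the auditor's question: when and from where did this arrive?"""
        return [e for e in self.audit_log if e.doc_id == doc_id]

store = AuditedVectorStore()
store.ingest("contract-42", [0.1, 0.2], source="legal-share")
print(len(store.provenance("contract-42")))  # 1 matching audit record
```

A real implementation would persist the log immutably (e.g. write-once storage) so that the trail itself is tamper-evident, but the shape of the answer to an auditor stays the same.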
Our conversation also turns to the role of infrastructure. Hitachi Vantara’s VSP One, with its tagline “One Data Platform, No Limits,” has been built to simplify data complexity across block, file, and object storage while providing a unified foundation for AI workloads. Octavian shares how this unified approach helps enterprises run compliant, explainable, and efficient AI across hybrid environments that span both on-premises and the cloud.
This isn’t just a story about technology, but about the future of trust in digital business. If AI remains a black box, its value will always be limited. If it becomes explainable, traceable, and accountable, it can transform not only efficiency but also relationships with customers, regulators, and partners.
So, how can leaders strike the right balance between governance and innovation without slowing down progress? Octavian leaves listeners with a forward-looking perspective on what the next few years of enterprise AI will demand, and why those who build on strong governance today may end up with the most resilient advantage tomorrow.
Useful Links
[00:00:00] Now, before we get started today, a quick thank you to today's sponsor, Careerist. And if you've ever thought, I'd love to pivot into a career in tech, but I just don't know how to code, then this is for you. Because Careerist QA Bootcamp is actually one of the most accessible ways to break into a tech career. You'll get live training from experts at companies like Google and Meta, a remote internship that you can put on your resume,
[00:00:28] and one-on-one career coaching to help you actually land the job of your dreams. And the demand is huge too. There are over 37,000 QA openings in the US alone right now, with starting salaries of up to $105,000. And graduates are working in 42 different states, and the program is rated 4.7 stars on Trustpilot.
[00:00:56] And best of all, if you don't get hired within a year, you get your tuition fees back. Seems like a win-win to me. But the only catch is that seats for the next cohort are closing soon. That's why I'm working with Careerist to get the message out. So please check the link in the show notes to grab your spot. And now, let's get today's conversation started with my guest. Welcome back to the Tech Talks Daily Podcast.
[00:01:25] Now, before we dive into today's conversation, I just want to take a moment to thank all of you for your support. Not only on Tech Talks Daily, but over the Tech Talks network that was launched this year. We're now the home to eight podcast shows, and the journey so far has been incredible. We currently have 250 interviews booked in until the end of 2025. And I've also got some pretty exciting news about where we're heading next.
[00:01:52] But I'm going to leave you with that teaser for now, because today's episode is all about a challenge that's facing almost every enterprise that is exploring AI. Because Octavian is joining me today. He's the Chief Product Officer at Hitachi Vantara. And together, we're going to unpack what I'm calling the corporate AI dilemma. Building trust through governance.
[00:02:16] Because as more and more organizations are bringing large language models in-house rather than relying on public options, and driven by the need for stronger control, traceability, and compliance, I want to learn more about what meaningful AI governance looks like in practice, on the ground, in the real world right now. And why vector databases might need a time machine to track when and how data changes.
[00:02:44] And how explainable AI has moved from a nice-to-have to a business necessity. And yet, we'll also look at how Hitachi Vantara's VSP One platform is tackling the complexity of managing data securely and consistently across hybrid environments. And what does all this mean for AI workflows and regulatory demands? These are just a few of the things we're going to tackle today.
[00:03:09] So, whether you are a technology leader, a data architect, or simply someone curious about the future of trustworthy AI, there's going to be a lot for you to think about here today, and hopefully a few valuable takeaways for you. But enough from me. Let me introduce you to Octavian right now. So, a massive warm welcome to the show. Can you tell everyone listening a little about who you are and what you do? Good morning. Happy to be here. My name is Octavian Tanase.
[00:03:40] I'm the chief product officer at Hitachi Vantara. Well, thank you so much for sitting down with me today. So much I want to talk with you about. If you look at our news feeds over the last few months, we've seen more and more enterprises bringing large language models in-house, rather than relying on public options. And we all know why, I think. But over to you. What's behind that shift from what you're seeing? And how is it changing the way companies approach AI adoption?
[00:04:06] Because it began with a cautious approach, but there seems to be a way forward. But what's happening now? What are you seeing? Well, I think there's a reason why everybody brings AI into their enterprise. And fundamentally, what it's enabling people to do is improve productivity. So they see that as an opportunity to get their engineers, their marketeers, everybody else, the sales organization, to be more productive.
[00:04:33] And with that comes some concerns. And some of the concerns are that many of the large language models or AI tools or agentic AI that people use may leak into the public domain your proprietary information. So people want to use AI. They have some governance concerns. And governance, it's evolving right now.
[00:05:00] There are some UNESCO initiatives around ethics. There's an executive order on cybersecurity. There are local, let's say, city of San Jose governance on AI guidelines. So the guidelines continue to evolve. The legislation is very nascent and will continue to evolve.
[00:05:25] And people want to use AI in a way that is both compliant with the governance that exists in the different regions where they operate, and safe, where their proprietary information is not being disclosed into the public domain.
[00:05:47] And there's also a lot of talk around AI governance right now as it matures, but it often still feels incredibly abstract. So if we were to step away from the hype and everything we see in keynotes and at tech conferences, I'm curious, what does meaningful governance look like on the ground? And how are companies starting to embed things like traceability, auditability into their AI systems? What are you seeing here? So I'll take an example from the engineering realm.
[00:06:17] And that will probably resonate with a lot of the techies. We use a lot of open source in our products, right? And there are many reasons for that, right? Open source is rapidly evolving and innovating in many areas, right? So there are some key projects like Apache Iceberg or Istio, Kubernetes.
[00:06:47] Use of open source is ubiquitous. But open source has different types of licenses. So how do you know whether you're using open source in your products in the right way? In the past, people would scan their code and look for viral licenses, or code that is related to viral open source licenses, and try to prevent it that way.
[00:07:15] But now many people are using AI to generate code. And that code might have been trained on open source projects that you have to be careful and you don't want to taint your code with. So how do you do that? So there is probably a better way to do this. It's probably you can start with a pre-trained large language model.
[00:07:45] And you have the opportunity to fine-tune that, to train that, to do RAG on that model with the right, let's say, sources, and bias your large language model toward appropriate use of open source technology.
[00:08:05] Now, you still have to do your scans at the end, but you're more likely to generate code that may not be tainted by open source viral licensing. And when I was researching here, I was also reading that you've mentioned the idea of vector databases need a time machine capability of sorts.
[00:08:26] So can you tell me a little bit more about why documenting when and how new information enters the system is so important, not just for internal oversight, but also regulatory compliance too? What happens is at some point, somebody in the company governance or somebody externally may come in and question your use of AI.
[00:08:52] So, you know, earlier we talked about the fact that many people in the enterprise will take a large language model, will bring it into their enterprise, will fine-tune it, will train it with new and appropriate information. But how does one know with which data points you've trained your large language model? It turns out that feeding information to a large language model happens through a vector database.
[00:09:19] There are many popular tools out there, and everybody uses them to add new information, new artifacts, to the smarts of the large language model. And Hitachi Vantara has invested in a capability that keeps track of all the instances when a large language model has been fed new information.
[00:09:48] So one can go back in time and answer any questions and have basically explainable AI on the data with which your large language model has been taught. And we've also learned a lot around black box AI recently, how it offers results without offering any insight into how it arrived at a conclusion.
[00:10:15] And now I think explainable AI is no longer just a technical challenge. It's now a business necessity. So where do you see the most urgent demand for transparency? And what are the consequences for organizations that ignore this shift? Because I think it's incredibly important and increasingly important, and quite rightly so, right?
[00:10:34] I think AI is being adopted rapidly in some very regulated industries like finance, like healthcare. It's adopted by governments. And I would say many governments, as well as some of these large enterprises, they view AI not as good to have.
[00:11:01] But a country or a region, a province may think of AI as something that's business critical or of national security importance. So there is the need to use AI.
[00:11:18] There is a need to use AI in a way that is compliant with existing regulations or new regulations, especially in these regulated industries where there may be a need to preserve the data, for example, that has been used to fine-tune a large language model for a long period of time.
[00:11:46] Or you may have some GDPR concerns, and you may use some agentic AI to prevent data that should not cross the boundary of a country or province into another. And that data may be prevented or allowed in the training of a large language model.
[00:12:10] So there are a variety of existing and emerging governance regulations that one has to deal with as they train and fine-tune their large language models. And I think with the growing need to unify control across hybrid environments, it's becoming more important too.
[00:12:37] How does Hitachi Vantara, how do you approach the challenge of managing data across both on-prem and cloud infrastructure in a way that supports secure and compliant AI? Because I would imagine that is a huge checkbox for organizations right now. Managing data across the traditional data center and the public cloud is a big pain point for many organizations.
[00:13:02] I would say there are issues around compliance on whether use of the public cloud is secure. There are issues of silos that have to do with the inability to move the data due to costs.
[00:13:19] There's also a technical challenge of, well, if I'm going to do analytics on data sets that perhaps live in different realms, cloud and the traditional data center, sometimes that may be very inefficient. Do I have to make a copy of that data? Do I have to move the data? How do I access the data? Can I do it in real time?
[00:13:44] So one of the things where we're innovating, and I've seen others in the industry invest in, is the ability to do analytics on metadata, to look for patterns, to look for a way to correlate information without having to move the data from one silo to the other.
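As a rough sketch of that idea, here is a minimal Python example. The two catalogs below are invented stand-ins for metadata that would normally be harvested from on-prem and cloud systems; the point is that correlation happens on lightweight metadata alone, never on the underlying bytes:

```python
# Hypothetical sketch: the silo catalogs are invented stand-ins for metadata
# harvested from an on-prem data center and a public cloud.

# Each silo exposes only lightweight metadata, never the underlying data.
on_prem_catalog = [
    {"name": "claims_2024.parquet", "checksum": "a1f3", "tags": {"pii", "finance"}},
    {"name": "logs_q1.csv", "checksum": "9c07", "tags": {"ops"}},
]
cloud_catalog = [
    {"name": "claims_backup.parquet", "checksum": "a1f3", "tags": {"finance"}},
    {"name": "model_features.parquet", "checksum": "77d2", "tags": {"ml"}},
]

def correlate(catalog_a, catalog_b):
    """Find likely duplicates across silos by comparing checksums alone."""
    by_checksum = {item["checksum"]: item for item in catalog_a}
    return [(by_checksum[item["checksum"]]["name"], item["name"])
            for item in catalog_b if item["checksum"] in by_checksum]

pairs = correlate(on_prem_catalog, cloud_catalog)
print(pairs)  # [('claims_2024.parquet', 'claims_backup.parquet')]
```

Here a simple checksum match surfaces that the same dataset lives in both realms, without copying or moving either file, which is the essence of doing analytics on metadata rather than on data.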
[00:14:10] Now, ideally, you have one namespace for all the data in the world, but that's not practical. That's not realistic. And in some cases, it may be against certain enterprise or country regulations. You still want to have a way to access the data in a simple, in a scalable, in a cost-effective way. You also want to do it in compliance.
[00:14:37] You always want to have an explainable way for how you manage your data and how you access your data. And I think we should also mention VSP One by Hitachi Vantara. It's got the tagline of "One Data Platform, No Limits," and has been positioned in a way to simplify data complexity across formats like block, file, and object.
[00:15:02] But for people hearing about it for the first time, tell me a little bit more about it and how you see this kind of unified architecture supporting AI workflows and governance goals. So, what we hear from customers is that they want operational simplicity for managing their data and their applications. And, you know, many of them recognize their data may be held in structured and unstructured formats. And they're looking for predictability.
[00:15:33] They're looking for resilience for their infrastructure. They're looking for security. They're also looking for that operational simplicity that I mentioned earlier. So, VSP One is an architecture and a vision to consolidate and simplify both the data plane and the control plane.
[00:15:53] The data plane, in the end, we're very proud to have our block storage behind every type of storage that we provide through any type of interface, you know, whether it's file, you know, block or object.
[00:16:11] And at the control plane, we have VSP 360, which is a simple platform that enables one, you know, to provision and to do analytics. It supports data governance, you know, services. And it's available to be consumed in the traditional data center, as a SaaS application, and even on one's phone.
[00:16:39] And if we were to look a little bit further ahead, we're already only months away from 2026. Is there any advice that you'd offer to technology leaders or business leaders who want to stay ahead of the curve on things like AI governance, but without slowing down innovation or agility? Because it's a notoriously difficult balance to get right. But any advice on that? I think experimentation is key.
[00:17:05] This is a rapidly moving realm, AI and the use of AI in the enterprise, whether embedding AI, machine learning, and agentic AI in your products, or leveraging AI to improve productivity. I would say everybody has to invest hands-on in learning how to use AI. Don't just read the, you know, the white papers.
[00:17:32] Sponsor projects within the enterprise that experiment with use of AI, either, again, embedding AI or using AI to power your enterprise. The agility, it's key because without that, you're going to be left behind. This is not the type of project where you're going to spend six months planning and writing documentation before you dive in.
[00:17:55] I think this is the type of project where an agile DevOps type of team, you know, comes in and learns and builds on that learning and then shares that learning within the enterprise. I think a lot of times people feel that getting started with a new technology is very easy in the public cloud. And that's true.
[00:18:22] But many of the key technology vendors have learned from that operational simplicity and agility of the public cloud. And now they're making those type of capabilities available also in the traditional data center, in the modern data center. Well, thank you so much for joining me today. I think the corporate AI dilemma of building trust through governance is something that will resonate with so many business leaders in enterprises of all sizes around the world.
[00:18:50] But before I let you go, I'm going to ask you to leave one final gift to complement your insights. And that is a book that means something to you or has inspired you that we can add to our Amazon wishlist that you'd recommend that they check out. What would that book be and why? So the book, it's called Drive by Daniel Pink. And that's a book that helped me understand human behavior and what drives, what motivates people.
[00:19:16] So at the end of the day, being in tech, you're really in the business of leading some of the world's smartest people. It's a privilege. It's an opportunity. And understanding what drives people, it's really important.
[00:19:32] And that book brings a lot of insight, a lot of clarity, on how to articulate the need to lead and manage people in a kind of knowledge economy in the 21st century. Awesome choice. I will get that added straight to our Amazon wishlist.
[00:19:55] And for people listening, maybe they want to learn more about VSP One or even keep up to speed with the announcements and the news and the blogs, et cetera, coming out of Hitachi Vantara. Where would you like to point everyone? I would like to point our listeners to hitachivantara.com. We have a series of blogs and demonstrations of our technology. That's a good place to start about our solutions.
[00:20:21] Our passion for customers, our thirst to continue to innovate and how we're looking to change the industry with data. Well, I, for one, have learned so much today around why major enterprises are bringing LLMs in-house rather than relying on public options. And also that growing importance of AI governance frameworks that ultimately enable traceability, auditability and transparency.
[00:20:50] So I will add links to everything there. I encourage everybody listening to go check that out and let me know what they thought of our conversation today. But Octavian, just thank you for starting this conversation. I really appreciate your time today. Neil, a pleasure. Thank you so much. Speaking with Octavian today, I think it really brought home the fact that AI governance isn't just about a compliance checkbox. It's much more than that. It's actually the foundation for trust.
[00:21:18] Trust in a future where decisions are increasingly shaped by algorithms. And I think that's something we all see and feel every day, whether we are at home or in the office. So one of the big takeaways for me is that without accountability, without explainability, adoption will hit a wall. VSP One's approach to unifying data management across hybrid environments, though, feels like a strong example of how infrastructure and governance both need to evolve together.
[00:21:48] And for leaders listening, the challenge is maybe finding that balance between control and agility. Building the right safeguards without slowing the pace of innovation. Maybe for you, the takeaway is that governance is not the enemy of speed, but it could be the enabler of scale. What are your thoughts on that? Did today's conversation give you a fresh perspective?
[00:22:14] If it did, please share this episode with someone in your network who might be thinking about AI adoption. Because after all, the debate around AI trust is only going to grow louder. But the real question is, how is your organisation going to answer it? There's a lot to think about there. So I'm going to leave that to marinate until tomorrow's episode. Let me know your thoughts as always. And I'll speak with you all again tomorrow.
[00:22:40] Remember, techtalksnetwork.com to find out more information and even leave me an audio message. But that's it for today. Thanks for listening. Bye for now.


