Lucidity’s No-Touch Approach to Cloud Storage Efficiency

How much time does your team spend thinking about cloud storage? Probably less than it should. And that is precisely the gap Lucidity wants to close, not by adding more dashboards or layers, but by removing effort entirely.

At the IT Press Tour #62, I attended a session where Lucidity outlined how they are simplifying one of the most frustrating aspects of public cloud infrastructure. This wasn’t a speculative pitch or some distant roadmap. What they showed is already live, in production, and used by major enterprises.

Lucidity isn’t reinventing storage. They are removing friction from it.

A Quiet Shift Toward Storage That Works

Lucidity’s primary focus is block storage in the cloud. Not object storage, not backup, but the basic storage attached to virtual machines. The kind that powers databases, search engines, and most core applications.

The problem they are solving is persistent. Volumes are usually over-provisioned because no one wants to guess low. Once provisioned, they stay static even as workloads shrink or shift. Most infrastructure teams do not touch them again until something breaks.

Lucidity automates that entire loop. Their AutoScaler monitors disk usage, expands volumes as demand grows, and safely shrinks them when space goes unused. Customers don’t need to predict usage patterns. They don’t even need to check the system. As Raj Dutt, their VP of Marketing, put it during the session, “Once we’re installed, most customers don’t come back to the dashboard. They don’t have to.”
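Lucidity hasn’t published AutoScaler’s internals, but the behavior described maps onto a familiar control loop. The sketch below is purely illustrative: the thresholds, polling interval, and the get_disk_usage and resize_volume helpers are hypothetical stand-ins, not Lucidity’s API.

```python
import time

# Hypothetical thresholds; Lucidity has not disclosed its actual policy.
EXPAND_ABOVE = 0.80   # grow when the volume is more than 80% full
SHRINK_BELOW = 0.40   # shrink when less than 40% is in use
STEP_GB = 100         # resize increment in gigabytes

def autoscale(volume, get_disk_usage, resize_volume, interval_s=60):
    """Illustrative expand/shrink loop, not Lucidity's implementation."""
    while True:
        used_gb, size_gb = get_disk_usage(volume)
        utilization = used_gb / size_gb
        if utilization > EXPAND_ABOVE:
            # Grow before the application runs out of space.
            resize_volume(volume, size_gb + STEP_GB)
        elif utilization < SHRINK_BELOW and size_gb - STEP_GB > used_gb:
            # Hand back unused capacity, but never shrink below what is in use.
            resize_volume(volume, size_gb - STEP_GB)
        time.sleep(interval_s)
```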

Shrinking Without Downtime Used to Be Impossible

One of the standout features of AutoScaler is the ability to shrink volumes. Not just stop writing to them or archive them, but return unused space to the cloud provider in real time.

This is no small feat. Most cloud platforms allow you to expand a disk, but they do not offer any way to reduce it after the fact. Even if your application uses only 200 gigabytes on a 1-terabyte volume, you are still billed for the full amount.

Lucidity works around this limitation by integrating the file system and API layers. On Linux, it uses Btrfs to perform safe reductions. On Windows, it works with NTFS without requiring any changes.
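To make the Linux side concrete: Btrfs is one of the few mainstream file systems that can shrink while mounted, which is presumably why it appears here. Below is a minimal sketch of an online shrink at the file-system layer, independent of Lucidity’s actual tooling; the mount point and size are invented for illustration, and handing the freed space back at the cloud API layer is the part Lucidity builds on top.

```python
import subprocess

def shrink_btrfs(mount_point: str, reduce_by: str) -> None:
    """Shrink a mounted Btrfs file system online, e.g. reduce_by="50G".

    The file system must be shrunk before the underlying block device
    is reduced; doing it in the other order would truncate live data.
    """
    subprocess.run(
        ["btrfs", "filesystem", "resize", f"-{reduce_by}", mount_point],
        check=True,
    )

# Example: free 50 GiB from /mnt/data while applications keep running.
# shrink_btrfs("/mnt/data", "50G")
```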

The entire process is invisible to applications and does not interrupt I/O operations. That part matters. During the live demo, utilization climbed to nearly 80 percent without intervention. Everything ran as expected. No performance drops. No maintenance windows.

Lumen Adds Another Layer of Intelligence

Lucidity also previewed a new feature called Lumen. While AutoScaler handles capacity, Lumen addresses disk tiering. Most cloud platforms offer multiple tiers of storage, each with different performance profiles and costs. Choosing the right tier is often a matter of guesswork.

Teams default to premium tiers to avoid slowdowns, then forget to review usage later. As a result, many organizations pay for higher performance than they need.

Lumen tracks disk behavior over time, flags workloads that could be moved to a more appropriate tier, and allows those changes to occur instantly. There is no reboot, no delay, and no disruption.

You can see side-by-side comparisons of your current tier versus the recommended option. It also shows historical trends, IOPS usage, and potential savings. With one click, a storage admin can make the change. A simple decision backed by data replaces the complexity of choosing among six tier options.
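As a rough mental model of what such a recommendation involves (this is my sketch, not Lumen’s logic), imagine comparing a volume’s observed IOPS peak against each tier’s ceiling and picking the cheapest tier that still fits with some headroom. The tier names and figures below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    max_iops: int
    usd_per_gb_month: float

# Invented figures; real tier specs vary by provider and region.
TIERS = [
    Tier("premium-ssd", 20_000, 0.15),
    Tier("standard-ssd", 6_000, 0.08),
    Tier("standard-hdd", 500, 0.04),
]

def recommend_tier(peak_iops: int, headroom: float = 1.3) -> Tier:
    """Cheapest tier whose IOPS ceiling covers the observed peak plus headroom."""
    required = peak_iops * headroom
    fits = [t for t in TIERS if t.max_iops >= required]
    if not fits:
        return max(TIERS, key=lambda t: t.max_iops)  # nothing fits; take the fastest
    return min(fits, key=lambda t: t.usd_per_gb_month)

# A volume peaking at 3,000 IOPS has no need for premium:
print(recommend_tier(3_000).name)  # standard-ssd
```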

Licensing and Flexibility

Lucidity’s pricing model is based on actual usage, not allocated volume. If you store 400 gigabytes on a 2-terabyte volume, you are billed for the 400, not the 2,000. The current rate is sixty dollars per terabyte per month. That flat rate covers support and the automation; any savings you realize are yours to keep.
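To put numbers on that example (treating a terabyte as 1,000 gigabytes for simplicity):

```python
RATE_USD_PER_TB_MONTH = 60

used_tb = 0.4        # 400 GB actually stored
allocated_tb = 2.0   # the 2 TB volume as provisioned

# Usage-based billing, as Lucidity describes it:
print(used_tb * RATE_USD_PER_TB_MONTH)       # 24.0  -> $24/month
# Billing on allocation would have charged for the full volume:
print(allocated_tb * RATE_USD_PER_TB_MONTH)  # 120.0 -> $120/month
```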

There are also flexible deployment models. For organizations with stringent compliance requirements, Lucidity offers private link integrations and self-hosted installations. That was of particular interest to several attendees from Europe, where local hosting and data residency remain top priorities.

Lucidity supports AWS, Azure, and Google Cloud. Support for other platforms is not yet available, but the team made clear that expansion is possible without significant reengineering.

A Tool Built to Stay Out of the Way

Lucidity’s approach is striking because it assumes users don’t want to think about storage. They do not want to babysit volumes or reconfigure tiers. They want capacity to be there when it’s needed and costs to make sense when it isn’t.

This is not a flashy platform. It is not chasing trends. It addresses fundamental, overlooked inefficiencies with a practical mindset and engineering discipline.

AutoScaler handles storage sizing behind the scenes. Lumen provides visibility into performance and costs with clear, simple next steps. Together, they reduce manual effort, lower bills, and keep teams focused on higher-value work.

Looking Ahead

The questions that followed the session weren’t about what the product does. They were about where it’s going next. Can it support Oracle Cloud? Can it integrate with local providers in regions like Germany or France? Will it expand into other types of storage?

Lucidity is aware of these expectations. For now, their focus is on refining what works, maintaining reliability, and scaling up through channel partners and marketplace listings.

What stood out to me wasn’t just the product. It was the clarity of purpose. Storage should not be a source of surprise or overhead. Lucidity’s team is building around that idea.

If you manage infrastructure and find yourself constantly firefighting storage alerts or trying to predict growth, this kind of automation could be a welcome relief.

Is your team ready to hand off storage management to software that quietly handles it for you?

I will be speaking with the team at Lucidity on the Tech Talks Daily Podcast in the next few weeks. If you have any questions you would like me to ask, please let me know, and you can also be a part of the conversation.