Ultra Fast SAN Storage Solutions Designed for Broadcast and Post Production

How do post-production studios cope when their storage systems can’t keep up with file sizes, and their budgets can’t stretch any further? That question sat at the center of an IT Press Tour session with DDP Dynamic Drive Pool by ArdisTech in Amsterdam, where CEO Jan de Wit and team walked through a particular bet on the future of media storage.

DDP is a small outfit based in Arnhem. They only build shared storage for media and entertainment, and they have developed their own SAN file system to achieve this. On paper, that sounds niche. In practice, it solves some long-standing pain points in post.

Jan started with a reminder about the culture of this industry. Many engineers in broadcast grew up on cameras and timelines, not on kernel flags and CLI.

“Every installation we do, there are always network problems,” he told us with a smile. The cause varies. Mac-heavy edit suites inside Windows-first enterprises. Mixed teams where IT pros speak one language and colorists talk another. Toolchains full of video specifics such as SMPTE ST 2110, UDP flows, and the ever-present need to keep audio and video perfectly in sync across IP.

Against that backdrop, DDP has stayed focused. They attend IBC every year. They sell through a handful of distributors and dealers. They do not try to be a MAM vendor. They do not chase cloud storage. They build shared storage that editors, sound teams, and finishing suites can mount, hammer with streams, and trust.

A file system built for editors, not for general IT

DDP’s central decision was to create its own SAN file system called AVFS. That was not a weekend project. It meant choosing block I/O, building high availability, and adding media-first features such as Avid-style project sharing and bin locking. Jan compared AVFS to Quantum StorNext to give people a mental picture. It is a single file system that spans multiple raw data containers. One namespace. One set of metadata. One set of rules for access and locking.

This differs from a typical NAS stack, which combines multiple underlying file systems into a single view. In DDP’s world, the storage boxes are raw capacity, and AVFS sits beside them as the brains. That separation allows them to move files between SSD and HDD groups without disrupting workflows. The editor mounts a folder volume. The path stays stable. Under the hood, placement can change.

There is a second design choice that matters significantly in real-world projects: folder volumes. In AVFS, an administrator can give any folder volume properties, so it mounts on macOS, Windows, or Linux as if it were its own volume. Access control is folder-based. It sounds small, but it means you can mirror how productions actually work. Teams can grant an assistant access to audio only, give the finishing team audio and video, and change those permissions instantly when production ramps up.
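To make the folder-volume idea concrete, here is a minimal sketch of folder-based access control in Python. Everything here is illustrative: the `FolderVolume` class and its `grant`, `revoke`, and `can_access` methods are assumptions for the sake of the example, not DDP's actual API.

```python
# Hypothetical model of folder-based access control, as described above.
# Names and methods are illustrative assumptions, not AVFS internals.

class FolderVolume:
    """A folder with volume properties, carrying folder-level ACLs."""

    def __init__(self, path):
        self.path = path
        self.acl = {}  # team name -> set of subfolders it may access

    def grant(self, team, subfolders):
        self.acl.setdefault(team, set()).update(subfolders)

    def revoke(self, team, subfolders):
        self.acl.get(team, set()).difference_update(subfolders)

    def can_access(self, team, subfolder):
        return subfolder in self.acl.get(team, set())


show = FolderVolume("/ddp/show42")
show.grant("assistant", {"audio"})
show.grant("finishing", {"audio", "video"})

assert show.can_access("assistant", "audio")
assert not show.can_access("assistant", "video")
assert show.can_access("finishing", "video")

# Permissions change instantly as production ramps up:
show.grant("assistant", {"video"})
assert show.can_access("assistant", "video")
```

The point of the model is that permissions live on folders, not on whole arrays, so a change for one team never requires remounting anyone else's volume.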

"We wanted something our customers can operate without bringing in an IT operator for every change.”

The team’s aim is not to remove complexity from storage in general. It is to remove complexity from the everyday tasks that matter to editors and mixers.

Anyone who has shared spinning disks across a busy facility knows the story. Huge capacities are cheap. Sustained throughput looks fine in a spec sheet. Then the real project arrives. Lots of small audio files. A directory of proxies. Timelines full of shortcuts. The workload becomes a random I/O storm, and the array spends its life seeking. Your headline bandwidth evaporates.

Jan brought back the old performance tables he used to share with customers. On small files, “the bandwidth of some 50 hard disks would be some 100 megabytes per second.” Switch to bigger DNxHD files and you suddenly see gigabytes per second. The media changes. The behavior of the same disks changes with it. Try explaining that to a producer looking at a calendar.

That history lesson is why DDP built project caching. The idea is simple. Give operators a real SSD tier, controlled through the same folder-volume properties. Let them decide where a project starts and when it moves down to cheaper disks. Keep the logic file- and folder-aware rather than block-only. The goal is wire-speed performance on the working set and deep capacity for everything else, all within one file system.

Project caching in plain language

In AVFS, each folder volume can be associated with specific data locations. A data location is a LUN backed by SSD or by spinning disks. When a production team creates the folder tree for a new show, they can set the cache method on those folders and point them at SSD first or HDD first.

Two things stand out. First, DDP designed AVFS so that a single file entry can reference two physical paths simultaneously. That lets the system copy data to the cache without breaking the existing reference. A clip can start life on an HDD, be promoted to an SSD, and the mount point never changes for the editor. Second, cache operations are project-aware. You can ingest rushes into SSD for the week’s work, push them down to HDD in the background, and clear the cache instantly because the HDD copy is already in place.
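The dual-path idea above can be sketched in a few lines. This is a simplified model under stated assumptions: the `FileEntry` class and its `promote`/`evict` methods are invented for illustration, and real AVFS metadata is certainly richer than this.

```python
# Illustrative sketch of a file entry that references two physical
# locations at once. Class and method names are assumptions, not
# AVFS internals.

class FileEntry:
    def __init__(self, name, hdd_path):
        self.name = name
        self.hdd_path = hdd_path   # durable copy on spinning disk
        self.ssd_path = None       # optional cached copy on flash

    def promote(self, ssd_path):
        """Copy to the SSD cache; the HDD reference stays valid throughout."""
        self.ssd_path = ssd_path

    def evict(self):
        """Clear the cache instantly: the HDD copy is already in place."""
        self.ssd_path = None

    @property
    def read_path(self):
        # Readers always resolve the same logical name; placement is hidden.
        return self.ssd_path or self.hdd_path


clip = FileEntry("reel_001.mov", "/hdd_lun0/reel_001.mov")
assert clip.read_path == "/hdd_lun0/reel_001.mov"

clip.promote("/ssd_lun0/reel_001.mov")   # ingest week: work from flash
assert clip.read_path == "/ssd_lun0/reel_001.mov"

clip.evict()                             # free the cache for the next show
assert clip.read_path == "/hdd_lun0/reel_001.mov"
```

Because the entry always holds the HDD reference, eviction is a metadata change rather than a data move, which is why the cache can be cleared instantly.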

Pascal Collard described a customer scanning old film reels. One day of work can generate tens of terabytes of data. They write at full speed to an SSD box during the day, and then the cache engine drains to the HDD in the background, so the capacity is ready for the next morning. For audio teams that live on small files, DDP also ships all-SSD systems. For long-form and episodic video, the hybrid still wins in terms of value.

SAN first, NAS when it makes sense

DDP has always been a SAN company. They use iSCSI for high-bandwidth data and expose SMB or NFS when a push–pull workflow needs it. On higher-end builds, they support NVMe over Fabrics with RDMA and Fibre Channel. The flagship DDP 10AF accommodates ten NVMe SSDs in RAID 5, with headline numbers that satisfy uncompressed 8K pipelines. Jan put it plainly.

“It has a bandwidth of 40 gigabytes per second.” Linux and Windows clients can hit double-digit gigabytes per second each over RDMA. Even Macs see multi-gigabyte reads.

The more common boxes mix SSD and HDD. A typical 24-bay chassis might have eight SSDs in RAID 5 and sixteen large HDDs in RAID 6. AVFS treats each RAID set as a data location. There is a practical limit of 96 terabytes per LUN today. To get to petabytes, you add more LUNs. AVFS alternates files across those LUNs when you choose a balanced placement policy, but a single file stays within a single LUN. That design keeps expansion predictable and recovery boundaries clear.
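The balanced placement policy described above is easy to model. A minimal sketch, assuming round-robin alternation (the article only says AVFS alternates files across data locations, so the exact policy is an assumption), with the one firm rule from the text: a single file never spans LUNs.

```python
# Sketch of "balanced placement": whole files alternate across LUNs,
# but each file lives on exactly one LUN. Round-robin is an assumed
# policy for illustration, not documented AVFS behavior.

from itertools import cycle

class BalancedPlacement:
    def __init__(self, luns):
        self._next = cycle(luns)   # e.g. ["lun0", "lun1", "lun2"]
        self.placement = {}        # filename -> the one LUN holding it

    def place(self, filename):
        lun = next(self._next)
        self.placement[filename] = lun
        return lun


pool = BalancedPlacement(["lun0", "lun1", "lun2"])
for f in ["a.mov", "b.mov", "c.mov", "d.mov"]:
    pool.place(f)

# Files alternate across LUNs...
assert pool.placement["a.mov"] == "lun0"
assert pool.placement["b.mov"] == "lun1"
assert pool.placement["c.mov"] == "lun2"
assert pool.placement["d.mov"] == "lun0"
```

Keeping each file whole on one LUN is what makes expansion and recovery predictable: adding a LUN adds capacity without restriping, and losing a LUN bounds the damage to the files it held.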

Pricing spans a wide range. Small, portable systems often appear at festivals and award shows. The high-end NVMe model is a serious investment. Most customers sit in the middle with hybrid arrays sized for the way they actually cut and mix.

Why do they refuse to build a MAM?

A lot of media storage vendors tried to stay “unique” by building their own media asset manager. It made sense when cloud access and AI tagging were optional. It looks heavier now. Those MAM teams must continually add cloud connectors and new AI services to stay relevant. DDP never went that route.

Jan’s rationale was straightforward. Storage is a backend service. Let customers choose the MAM they already like. Keep AVFS fast, simple, and open. Focus on high performance and project-level controls. It also keeps support lean. “We know everything. We know all that’s going on,” he said. There is no third-party file system in the critical path, and no giant application tier sitting on top that DDP must maintain.

The result is a storage platform that plays well with others. Avid Media Composer, Resolve, and Premiere Pro are the prominent video editing software options. AVFS supports Avid-style project sharing and bin locking, which still matters for many broadcasters. For backup and archive, DDP often pairs with Archiware on a separate server. Netgear and tape show up frequently on the back end. Customers can mix and match without waiting for a monolithic stack to catch up.

Jan did not dodge the conversation about the cloud. Some customers want hybrid setups. Some want to move media offsite during a project. DDP will not stop them. However, they want buyers to think more critically about risk and responsibility. If a post house takes on a Netflix or Disney show, that post house remains responsible for security even if they park the files in someone else’s data center. Offsite does not move the accountability.

There is also the day-to-day risk of remote work. Are home machines patched? Are users separating internet access from the production network? Are transfers being scanned upon arrival and audited upon departure? Jan’s view was simple.

On-premises with air-gapped transfer rooms remains the lowest-risk path for many teams. “Otherwise, you spend your time as a security officer,” he said, not a production engineer.

None of this is an argument against cloud in general. It serves as a reminder that file-based work has tight operational requirements that do not disappear when you change the location of your disks.

Culture, scale, and staying in your lane

It is easy to forget how small DDP is. Six employees on staff. Profitable since the start. Exhibiting at IBC since 2009. Distribution in the US, Asia, and the Middle East through partners, and direct coverage in Europe. Jan and CTO Bart Jansen own the company. “The only obligation we have is to our customers,” Jan said. That independence explains some of their choices. No MAM. No cloud product. No detours into ingest or playout boxes, even when partners asked.

It also explains the pace. AVFS continues to receive new features, including SMB improvements and high-availability enhancements. The product line stays compact, so support stays personal. When customers file a ticket, they expect a reply now, not in 48 hours. In media, broadcast days and live schedules do not pause for a maintenance window. DDP set their business up to match that reality.

There is a humility to the way Jan speaks about rivals. He credits Avid NEXIS for its staying power in broadcast. He calls out brands that moved to NAS to widen their market. He also emphasizes the technical difference that SAN still offers for extensive file work. Block I/O, one file system, wire-speed caching, and a folder model that mirrors how productions run.

What this means for IT and media leaders

If you work in enterprise IT, you might be thinking that all this sounds specialized. It is. But it carries a broader lesson. The old argument about SAN versus NAS was never just about protocols. It was about workflows. In post, where uncompressed 4K is not uncommon, block I/O and deterministic caching can turn chaos into a calm timeline. In audio, where you have oceans of tiny files, all-SSD boxes keep sessions snappy without forcing you into cloud costs you cannot justify.

There is also a leadership lesson. DDP’s refusal to grow sideways into every adjacent category looks boring until the crisis hits. Then, a single-purpose file system with a simple control plane and a clear support path appears very attractive. The team also speaks plainly about what they do not do. That clarity helps partners build the rest of the stack around them without surprises.

Project caching is the thread that ties it all together. It gives editors the feel of an all-flash system on the parts of the project that matter, while keeping petabytes of rushes and versions on cheaper disks. It also provides operations teams with a predictable way to move media without disrupting paths. No magic. Just sound engineering aimed at the actual pain points of post.

My takeaway

DDP is not trying to win every RFP. They want facilities that still prioritize SAN performance and project-aware control. They want the houses that think long and hard about risk, because their clients expect it. They want teams that see storage as a tool for getting shows out the door, not just another box on a spec sheet.

Jan’s closing thought stuck with me. When people move too fast toward trends, they can end up swapping their craft for paperwork.

“You’re not a production engineer anymore. You’re a security officer.” That line is not a scare tactic. It serves as a reminder that our choices often push hidden work onto our teams. Good storage should eliminate hidden work, not add to it.

If you work in media technology and you are wrestling with performance on real projects, DDP’s view of the world is worth a look. Not because it is flashy, but because it respects how editors, mixers, and engineers actually work.

Over to you

I’ll be interviewing DDP’s Jan de Wit on the podcast. What should I ask him? Do you want me to press on AVFS internals, the folder-volume model, real numbers for project caching in mixed SSD and HDD setups, or the on-prem stance and how customers are balancing risk? Send your questions, and I’ll take them straight to him on the show.