CFD11 – Day 1 – Storage is still hard

And Kubernetes makes it that much more so. It's such a difficult challenge that some vendors' path to success involves focusing on very specific use cases and trying to be the best at those, leaving the remainder of the overall storage "problem" to other vendors.

After all, if you try to do everything and fail, you are going to be laughed at. Just look at those crusty old storage arrays with bolt-on CSI drivers. Those are cloudy, right? So, I can’t blame some vendors for their desire to specialize. You won’t please everyone, but you may end up with a few solid wins.

MinIO, which presented on 6/23 in the afternoon timeslot of Cloud Field Day 11, is solidly in this camp. They take a focused, software-defined approach and rely on a distributed, containerized architecture to aggregate local units of storage into a centralized pool for shared consumption.

Conceptually, this is similar to how virtual storage appliances in the HCI world consume local host storage and re-present it to a virtualization cluster for consumption, except the lowest unit of distributed compute is now a container, and the pooled storage is presented using an object storage API.
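To make the pooled-object idea concrete, here's a minimal sketch of consuming that pool through MinIO's S3-compatible API with the minio Python SDK. The endpoint and credentials are hypothetical placeholders for illustration, not details from the session:

```python
# pip install minio
from minio import Minio

# Hypothetical endpoint and credentials for illustration only.
# Clients talk to the pooled object endpoint, not to any one node's disks.
client = Minio(
    "minio.example.internal:9000",
    access_key="DEMO_ACCESS_KEY",
    secret_key="DEMO_SECRET_KEY",
    secure=True,
)

# From the consumer's perspective, the distributed pool behaves like
# any S3-style object store: buckets and objects, nothing node-specific.
if not client.bucket_exists("backups"):
    client.make_bucket("backups")

client.fput_object("backups", "app-state.tar.gz", "/tmp/app-state.tar.gz")

for obj in client.list_objects("backups"):
    print(obj.object_name, obj.size)
```

The point of the sketch is that the aggregation is invisible to the application: the container-level pooling happens behind a single object endpoint.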

An overview of the software architecture is shown below (a view from within a single node):

[Diagram: MinIO's cloud-native software architecture within a single node]

Now we can zoom out and see how each logical storage node (itself containerized on K8s) presents its local storage to the network as part of a shared object pool:

[Diagram: storage nodes presenting local storage to a shared object pool]

Because the hardware and OS layers have been abstracted away, the design lends itself to deployment across a variety of heterogeneous environments. This can be especially helpful for customers in hybrid scenarios who are looking to maintain a consistent Kubernetes storage layer across private and public cloud environments.

One such use case would be an enterprise customer with an on-premises Tanzu deployment and another off premises in VMC on AWS. Or, alternatively, an on-premises OpenShift deployment with another residing on AWS EC2. The same object storage layer, MinIO, could be maintained in both cases, lending itself to operational efficiencies versus utilizing siloed storage solutions on both sides of the hybrid architecture.
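As a rough illustration of that operational consistency, here's a sketch (again with hypothetical endpoints and credentials) showing the same client code path serving both sides of a hybrid deployment; only the connection details change:

```python
# pip install minio
from minio import Minio

# Hypothetical endpoints for the two halves of a hybrid deployment.
SITES = {
    "on-prem-tanzu": "minio.datacenter.example.com:9000",
    "vmc-on-aws": "minio.vmc.example.com:9000",
}

def site_client(endpoint: str) -> Minio:
    # In practice, credentials would come from a per-site secret store.
    return Minio(endpoint, access_key="DEMO_KEY", secret_key="DEMO_SECRET", secure=True)

# One storage API and one code path, regardless of which cloud is underneath.
for name, endpoint in SITES.items():
    client = site_client(endpoint)
    print(name, [b.name for b in client.list_buckets()])
```

Nothing in that loop knows or cares whether the bits live on premises or in AWS, which is exactly the sort of uniformity the hybrid argument is about.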

As an architect, I appreciate being able to achieve simplicity and consistency, when possible. Even though there are ways to manage different Kubernetes distributions in a uniform manner, it can be useful just to use the same technology everywhere. I think the same logic can be applied to Kubernetes storage.

However, as I mentioned in the live session, if my customer requires more than object storage, I am left with additional research to do in order to meet those requirements optimally. MinIO does not attempt to address block and file needs: they can provide whatever storage type you need, as long as it's object.

Anyway, one aspect of Cloud Field Day 11 that I was most looking forward to, coming in, was the anticipated discussion around persistent container storage. Aside from being generally interesting from a technical perspective, this is an area that I now touch on a daily basis, so I feel I have a bit more to contribute this go-round compared to Field Days past.

Looking forward to hearing more on this front in the days ahead, especially with respect to the recovery of K8s data in disaster scenarios.

Be sure to check out the livestream of Cloud Field Day 11, resuming Thursday 6/24 at 8 AM Pacific at https://techfieldday.com/event/cfd11/, and follow along using #CFD11 on Twitter.
