If you’re a server manufacturer, you’re playing a very demanding game.
There’s a never-ending parade of new technologies you’ve got to rapidly adopt. Customers expect both flawless execution and steadily declining prices. And you’ve got some pretty aggressive competitors as well :)
You’re also paying attention to shifting workload demands from your customers.
As I’ve tracked the various server vendor announcements, I couldn’t help but notice a pronounced shift towards designs that are obviously intended to be used as part of a storage or database farm.
Looks like the pendulum is swinging — again.
Differentiating Terms
Here, we’re discussing software-based storage (vs. software-defined storage). The distinction is important: software-based storage is anything that doesn’t require specialized external storage hardware, i.e. it’s designed to run on industry-standard servers.
This category includes software-based storage products (VSAN, Nexenta, ScaleIO, Scality, Maxta, etc.), HDFS (essentially a storage system for big data), the newer databases, and so on.
Any software that stores and retrieves data, provides persistence and protection, etc. would be part of this discussion.
Indeed, even IDC -- the scorekeeper of the storage biz -- has introduced a new tracking category: software-defined storage platform, or SDS-P. While this IDC-named bucket accounts for only a small portion of the market today, it's growing rapidly.
An obvious quibble: IDC's taxonomy is imprecise -- they are counting any storage product that is software-based, regardless of whether it supports APIs, can dynamically compose services, etc. Their new category would be far better named "software-based storage platforms". A smaller quibble: NFS is included, but HDFS is not? Still, it's a start ...
Regardless, it looks like server vendors are seeing the demand for storage-optimized servers start to grow.
Where We Came From
For the last two decades or so, most server designs seemed to assume that the majority of storage would be external: SAN or NAS.
You’d see very little enclosure room for disk devices. Limited power and cooling for all those spindles. IO controllers and HBAs were presumed to live on a PCIe bus, and you’d need several — so not much in the way of onboard IO capabilities.
Perhaps the zenith of that philosophy was the popularity of blade servers — a form-factor optimized for external storage. Independently scale compute/memory and storage — what could be slicker?
And then the world changed.
Things Are Moving
When Intel announces a new processor (such as the new Xeon E5-2600 v3, part of the "Grantley" platform refresh, with up to 18 cores per socket), it drives a refresh cycle in the server vendor community. Not to mention DDR4 and 12Gb SAS now being table-stakes.
To take a quick look at some of the recent server product announcements, it’s pretty clear there’s a new design point being considered.
A prime example: the new Dell R730xd. In a slim 2U, there’s room for up to 24 2.5” drives. Look deeper, and you’ll see even more storage-centric features. Two dedicated slots for 12Gb SAS controllers. Support for the new NVMe standard for up to four PCIe flash devices.
Someone had software-based storage in mind.
A similar picture emerges with the new IBM x3650 M5: 24 drives, two dedicated IO controller slots, etc. The same goes for the new Lenovo RD650 and the HP Gen9 DL380. And a few others as well.
Let’s do the math.
Using 24 1.2 TB 10K Seagate 2.5” drives, you’d get ~29TB raw, ~14TB protected (using two copies of data) behind a two-socket server. That’s very reasonable for many use cases. Yes, those are pricey drives (today), but that’s not going to last for long.
If I were to consider VSAN, a typical 8-node server/storage cluster would support over 100TB of protected, great-performing storage. Much more than enough for most use cases. All in 16U of rack space.
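To make the arithmetic explicit, here’s a minimal back-of-the-envelope sketch in Python. The drive count, drive size, mirroring factor, and node count are simply the assumptions from the paragraphs above, not vendor specifications.

```python
# Back-of-the-envelope capacity math for a software-based storage cluster.
# Assumptions taken from the discussion above (not vendor specs): 24 x 1.2 TB
# 2.5" drives per 2U server, two copies of all data for protection, 8 nodes.

drives_per_server = 24
drive_size_tb = 1.2
copies = 2                  # two copies of every block for protection
nodes = 8
rack_units_per_node = 2

raw_per_server = drives_per_server * drive_size_tb    # ~28.8 TB raw
protected_per_server = raw_per_server / copies        # ~14.4 TB usable
cluster_protected = protected_per_server * nodes      # ~115 TB usable
cluster_rack_units = rack_units_per_node * nodes      # 16U total

print(f"Raw per server:        {raw_per_server:.1f} TB")
print(f"Protected per server:  {protected_per_server:.1f} TB")
print(f"Protected per cluster: {cluster_protected:.1f} TB in {cluster_rack_units}U")
```

Run it and you get roughly 29 TB raw and 14 TB protected per server, and about 115 TB protected across the 8-node cluster in 16U, consistent with the figures above.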
Gives you a sense of where things are going, doesn’t it?
What About Blades?
More than a few IT shops have committed to blade architectures for good reason. At a certain scale, there’s an attractiveness to being able to scale compute/memory and external storage independently.
But given (a) the rising popularity of software-based storage solutions, and (b) the attractive density and economics of some of the newer server designs, the economic pivot points are starting to shift.
If you’ve made a big commitment to blades, maybe it’s worth your time to re-run the numbers.
What To Expect In The Future
It’s pretty simple: expect more server designs that are optimized around running software-based storage workloads.
Clever server vendors will figure out how to cram even more small drives into 2U and 4U enclosures -- and manage power, cooling, vibration, etc.
Storage IO controllers will get simpler, more powerful, and start to lose unneeded features like on-board cache and RAID. They won't mask key drive and bus status information. Maybe, just maybe, they'll end up on the motherboard.
Flash devices will come off the PCIe bus, and end up on the motherboard as well. Hopefully they will learn to report errors in a standard way. And while we're pining away for standards, maybe we could get to some sort of standard with enclosure management? One can hope :)
Software is eating the world, and storage is no exception.
----------------
You've missed the most obvious candidate for SBS: HP's SL4500s. Once the Gen 9 variant appears, it becomes a very interesting host for extreme scale-out. It does amuse me that the most obvious host for ScaleIO is an HP server. Okay, it's not a performance play, but the SL4500 has some very interesting configurations.
I really think that the sun is setting on the array; it'll never quite set but it'll never keep us warm again
Posted by: Martin Glassborow | September 17, 2014 at 01:39 PM