VMware’s software-led approach supports multiple hardware consumption options: from mild to wild — you’re not limited to someone else’s idea of an appliance if that’s not your thing.
And I’m really enjoying seeing the creativity from our hardware partners in coming up with clever and unique configurations. The more the better!
At EMC World, the Intel team certainly raised the bar for impressive off-the-shelf Virtual SAN configurations: a 32-node all-flash NVMe-capable VSAN setup that delivers both outrageous performance and substantial capacity in a slick single-rack footprint.
Better yet, they splurged for some very cool custom bezel graphics. After all, it’s all about how your equipment looks, right?
I tweeted out a picture and a brief description from the show floor. My Twitter gang went so crazy commenting and retweeting that I thought I should loop back and share more detail.
In particular, I wanted to interview John Hubbard, the cool cat at Intel who put this impressive rig together in very short order.
John, tell a bit about yourself and what you do …
I’m a network engineer that was asked to test SSDs. My first impression was that it sounded boring, but it turned out to be the best career choice I’ve made. I’ve done network engineering work at Kaiser and other places, and then this spot opened up at Intel on the SSD side.
Having a solid network engineering background is very helpful when working with SSD designs, as it turns out. As flash technology moves closer to the bus and CPU, it’s often the network design that becomes more important.
How did this all come about?
VMware asked Intel about 6 weeks ahead of EMC World if we would be interested in building a cool all-flash VSAN configuration.
We thought that was a great idea to showcase our technology. But six weeks!! A big shout-out to our Intel team who made all this happen. It couldn’t have been done without them, so a big thanks!!
To meet the timeframe, I managed the project end-to-end — I knew there would be a lot of helpful opinions, but we didn’t have much time to debate and discuss. We ended up with a quick design, review and approval in record time.
Other than getting the hardware delivered, what was the hardest part of setting this up?
Most of it was pretty straightforward. Automation was key, as we're talking 32 nodes here. PowerCLI saved a lot of time: cloning VMs, configuring static IPs, and generally minimizing the grunt work.
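John used PowerCLI for the actual build; as a rough illustration of the kind of grunt work scripting eliminates at 32-node scale, here's a minimal sketch in Python (the node names and subnet are made up for the example, not from the real deployment) that generates a static IP plan for every host:

```python
import ipaddress

def ip_plan(subnet: str, first_host: int, node_count: int):
    """Assign sequential static IPs to vsan-node-01..NN from one subnet.

    first_host is 1-based within the subnet's usable host range.
    """
    hosts = list(ipaddress.ip_network(subnet).hosts())
    return {
        f"vsan-node-{i:02d}": str(hosts[first_host - 2 + i])
        for i in range(1, node_count + 1)
    }

plan = ip_plan("10.0.0.0/24", first_host=11, node_count=32)
print(plan["vsan-node-01"], plan["vsan-node-32"])  # 10.0.0.11 10.0.0.42
```

Feeding a plan like this into whatever clones the VMs and sets the addresses is what turns a day of hand-typing into a few minutes of scripted work.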
Let’s run through the configuration details for everyone …
Sure. It starts with the Intel® S2600TT Server Board in a 1U package.
Each node has two processors, 36 cores total, in this configuration. We've configured 128GB of RAM per node; more is possible.
Each node uses the Intel Solid-State Drive (SSD) Data Center P3700 Series NVMe for write-intensive caching, and the new Intel SSD DC S3510 Series 1.6TB drive for persistent storage.
We've set up two disk groups in each node, each with one caching drive and two capacity drives. It's a simple, straightforward design.
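Reading that as one caching drive plus two 1.6TB S3510 capacity drives per disk group (my interpretation of the layout described above), the raw capacity tier works out like this:

```python
# Figures from the interview; the per-group drive split is my reading.
NODES = 32
DISK_GROUPS_PER_NODE = 2
CAPACITY_DRIVES_PER_GROUP = 2     # Intel SSD DC S3510
CAPACITY_DRIVE_TB = 1.6

raw_tb_per_node = DISK_GROUPS_PER_NODE * CAPACITY_DRIVES_PER_GROUP * CAPACITY_DRIVE_TB
raw_tb_cluster = NODES * raw_tb_per_node
print(raw_tb_per_node, raw_tb_cluster)  # 6.4 204.8
```

That's about 205TB raw in a single rack; usable capacity is lower once VSAN's failures-to-tolerate replication is applied.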
Everything is connected with dual 10Gbase-T.
I should point out that, from a network perspective, 10Gb is plentiful if your payload is small, e.g. the 4K block sizes we tested here. For larger block sizes I'd consider 40Gb, especially as the cluster gets bigger.
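A quick back-of-the-envelope check of that point, using the ~2 million mixed IOPS figure quoted later in the interview and assuming (my assumption) an even split across the 32 nodes:

```python
# Figures from the article; even per-node distribution is assumed.
cluster_iops = 2_000_000           # ~2M IOPS, 70r/30w 4K random
block_bytes = 4 * 1024             # 4K blocks
nodes = 32

per_node_mb_s = cluster_iops * block_bytes / nodes / 1e6
ten_gbe_mb_s = 10e9 / 8 / 1e6      # 10GbE line rate, ~1250 MB/s

print(round(per_node_mb_s), round(ten_gbe_mb_s))  # 256 1250
```

At 4K blocks each node pushes roughly 256 MB/s, a fraction of a single 10GbE link. Run the same IOPS at 64K blocks and you're near 4 GB/s per node, which is why larger payloads argue for 40Gb.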
And, of course, we used the standard VSAN 6.0 distribution, which now supports all-flash configurations such as this one.
You found some time to do some initial performance runs. What did you find?
We spun up 3200 VMs, each running IOmeter against a single 7.5GB VMDK. Nothing fancy.
Out of the box, we were getting 3.25 million IOPS of random 4K reads. Not bad. No tuning or optimization required.
Better yet, when we switched to 70r/30w 4K random mixes, we were seeing ~2 million IOPS. Pretty impressive performance and capacity in a very dense package.
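Assuming an even distribution across nodes and VMs (my assumption, for illustration), those headline numbers break down like this:

```python
# Figures from the interview; even-split math is illustrative only.
read_iops_cluster = 3_250_000    # random 4K reads, out of the box
mixed_iops_cluster = 2_000_000   # 70r/30w 4K random mix
nodes, vms = 32, 3200

print(read_iops_cluster // nodes)    # 101562: read IOPS per node
print(mixed_iops_cluster // nodes)   # 62500: mixed IOPS per node
print(read_iops_cluster // vms)      # 1015: read IOPS per VM
```

Over 100K read IOPS per 1U node, with every one of 3200 VMs still getting four-digit IOPS, is what "dense" means here.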
It’s early days for NVMe, are we fully exploiting it in this configuration?
Intel and VMware have certified NVMe for ESX, and we are in the process of certifying it for VSAN.
The current hardware configuration will be ready when that certification lands, and it's plenty fast in its present state.
Performance should bump considerably when the new virtual adapter becomes available. I'm looking forward to testing that when it's ready.
Is all of this hardware generally available today?
Yes – all the hardware is available today. The Intel DC S3510 enterprise capacity tier drive just launched, and that’s what we’re using in this configuration for persistent storage.
The Intel DC P3700 we use for caching has been out for a while. The boot drive is an Intel DC S3710 and can double for host swap if needed.
What’s the deal behind the cool bezel LED graphics?
I know a guy who likes doing this stuff. His company, PCJunkieMods, did a great job for us.
In addition to the great stencil work, the LEDs can be made to flash, change colors, etc. Big fun.
Just the thing for your next data center tour.
I know we’ll be seeing you and your team at VMworld — what’s the plan?
64 nodes, baby! Our session has been submitted, and it’s titled “Will You Still Love Me When I’m 64?”
Seriously, though, we’re seeing a ton of interest in bigger VSAN all-flash configurations and we want to show people what Intel and VMware can do together.
Any final comments?
It's been pretty amazing. This project has generated more buzz and foot traffic than anything I've ever been involved with.
People love the idea of all-flash hyperconverged clusters powered by Intel and VMware technology. It’s been a blast.
And it’s nice to see all those drive bays finally get used …