
August 13, 2009

Comments

Duncan

I think Chad posted the Cisco version of this document? http://virtualgeek.typepad.com/virtual_geek/2009/06/a-great-new-vmware-viewcisco-ucsv-max-whitepaper.html

Anyway, definitely worth reading!

Chuck Hollis

You're right, it looks like a similar version of the same work.

Thanks!

Alex McDonald

Hi Chuck

Just a few points that I can't work out (the devil being in the details as it were).

Page 3: "A 1,280 desktop deployment anticipated to execute a workload similar to that generated by a knowledge worker can be easily handled by approximately 100, 15,000 RPM physical disks configured with RAID-5 protection. This configuration is expected to have enough resources to support the additional workload generated by unplanned events."

However, on page 32 it talks about 240 disks: "The Symmetrix V-Max storage array used for the testing had two storage engines with 64 GB of cache and 240, 300 GB, 15,000 RPM disk."

And, the most Desktop VMs tested was 640, but on page 56 it says: “The average utilization of the physical disks configured to support the VMware View deployment did not exceed 30% during the execution of the steady state workload and 50% during the boot storm event referred to earlier”

So 640 desktop VMs drove 240 15K RPM disks to around 50% during boot, and around 30% while they were in a "steady state" (whatever that means).

All other things being equal, 1280 desktop VMs would drive 240 15K disks to around 100% during boot, and around 60% when in a "steady state". At best: there are probably other bottlenecks that would come into play here.

Back to page 3. How does that square with a "1,280 desktop deployment ... easily handled by approximately 100, 15,000 RPM physical disks ... expected to have enough resources to support the additional workload generated by unplanned events."?

It's 140 disks short by my calculations.
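To spell out the arithmetic (a rough sketch in Python; the only assumption on my part is that disk utilization scales linearly with the number of desktops):

    # figures quoted from the whitepaper above
    disks_tested = 240      # 15,000 RPM spindles in the test array
    desktops_tested = 640   # largest desktop count actually tested
    util_boot = 0.50        # peak disk utilization during the boot storm
    util_steady = 0.30      # disk utilization during the steady-state workload

    # naive linear extrapolation to the 1,280-desktop claim on page 3
    scale = 1280 / desktops_tested            # = 2.0
    print(scale * util_boot)                  # 1.0  -> roughly 100% of 240 disks at boot
    print(scale * util_steady)                # 0.6  -> roughly 60% of 240 disks at steady state

    # and the page 3 claim is ~100 disks for 1,280 desktops
    print(disks_tested - 100)                 # 140  -> the shortfall I'm referring to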

The Cisco version of this says the same thing.

marc farley

Chuck, here is an earlier version of the same thing - but with different vendors (3PAR, HP and VMware) - published by ESG Labs last November: http://www.3par.com/SiteObjects/9EEF31504EA0359718E02702B4A58EF2/ESG%20Lab%20Validation%203PAR%203cV%20Nov%2008.pdf (give it a few seconds to load)

Nice work, though, keep trying to catch us!

Chuck Hollis

Hi Marc --

Are you seriously attempting to compare the two efforts as "the same thing"? And EMC "trying to catch us"?

You should be better about putting the smiley faces in when you're obviously joking.

-- Chuck

Alex McDonald

Chuck

You're right, it's not the same thing. Yours is inferior, because it's innumerate.

To repeat: "1,280 desktop deployment ... easily handled by approximately 100, 15,000 RPM physical disks" when 640 desktop VMs drove 240 15K RPM disks to around 50% during boot, and around 30% while they were in a "steady state".

I know math can be hard, but it doesn't make sense even to those of us who can just about handle fractions and percentages.

Chuck Hollis

My, Alex, you're so charming and pleasant to converse with.

-- Chuck

Chad Sakac

Alex, some feedback is valid, some is not (at least IMHO).

I think we can make the section more clear. What makes it a bit tricky is that:

1) There were only 4 UCS half-width blades available at the time of the test - which is why this topped out at 640 clients (4 x 160 clients per blade - which is VERY dense). The point of the effort was not to simply create mass replicas and show space efficiency and boot handling (which both NetApp and EMC have been showing for a year now using blended array/VMware approaches up to thousands of clients), but to try to replicate a full "desktop experience".

2) The workload for VDI use case testing is VERY hard to generate (imagine hundreds of "users" simultaneously opening docs, editing files, and sending emails using Outlook). Booting the VMs is easy. The client workload is much, much harder. The "steady state" comment you made pejoratively refers to the post-boot workload being run by the AutoIT workload generator that EMC, Cisco and VMware used for this purpose. It's still not perfect (we didn't simulate the anti-virus and patching periods, which are also hard on infrastructure in VDI configurations) - but we're working on it.

3) The question of the number of disks - I think we could have been clearer here, so let me see if I can clarify. The issue is that the document isn't sufficiently clear about WHICH spindles were used out of the 240.

- there were 240 disks in the actual V-Max used in the tests, but the logical devices used in the VMware View workloads were virtually provisioned using 128 in the pool (see the heat map in the document). Now, at the beginning of the test, we had no idea how heavily they would be hit through a variety of scenarios, so it was an educated guess. So - think of this as "wide striping" (in 3PAR lingo for you Marc) across 128 spindles. I can understand why this would seem a bit weird, as it's hard in NetApp configurations to create very wide layouts (possible, but hard) - usually aggregates (which are the container for flexvols, which are the containers for LUNs) are relatively small (16 or so spindles). In the end, throughout all the tests (up to 640 clients), the 128 spindles were relatively lightly utilized, as noted.

- so, 128 spindles easily supported 640 VMs through the various workload tests (50% during mass boot - which can be mitigated in a variety of ways in VMware View Manager, such as configuring boot waves and a "don't shut down, log off" setting in the client config), and 30% while running the AutoIT workload we specified. While mass boot can be mitigated, you still need to plan for the effect of the VM HA response on these very high VM density configurations (i.e. if you lose an ESX host, VM HA will reboot those 160 client VMs).

- The VDI workloads tend to be linear and "building block-a-ble", so it's a safe assumption that 100 drives could support roughly double the workload.

Hope that helps, and I will get the feedback for more clarity to the team who is doing the testing/documentation.

Just want to make sure (as one of NetApp's competitive folks) that at least if you're critiquing us, you critique us correctly :-)

Feedback always welcome!

John F.

@Chad

The statement "as it's hard in NetApp configurations to create very wide layouts (possible, but hard) - usually aggregates (which are the container for flexvols, which are the containers for LUNs) are realtively small (16 or so spindles)." is absolutely FUD and shows that you don't understand how Aggregates work.

The RAID group is the basic unit of storage. A RAID group can contain up to 28 spindles. RAID groups are combined into Aggregates. From the Aggregate you create logical containers called FlexVols. A FlexVol can contain one or more LUNs plus any reserves you wish to allocate. Most configurations I work with have 50-100 spindles in an aggregate. When you do a design, the number of spindles you put in an aggregate is generally determined by how much IO you'll need to support. The size of those spindles is typically a function of your space requirement. I'm just guessing that you are confusing RAID groups with Aggregates. The default size for a RAID-DP RAID group is 16 spindles.
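To illustrate that hierarchy, here's a rough 7-mode console sketch (the aggregate, volume and LUN names are made up, and exact syntax can vary by ONTAP release, so treat it as an outline rather than a copy/paste recipe):

    aggr create aggr_vdi -t raid_dp -r 16 64   # 64 spindles with a RAID group size of 16 -> 4 RAID-DP groups
    aggr status -r aggr_vdi                    # shows the RAID group layout inside the aggregate
    vol create vdi_vol aggr_vdi 2t             # a FlexVol carved out of the aggregate
    lun create -s 500g -t vmware /vol/vdi_vol/datastore1   # a LUN inside the FlexVol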

John

Geoff Mitchell

So basically this validates that the technology VMware, Cisco and EMC sell works. Wonderful. Nothing new here, other than Cisco's servers, which are still rarer in the real world than hen's teeth.

Now, talking about that real world, if I were looking to do this for my company, I could deploy Dell servers, redundant pairs of Brocade switches and a back-end Clariion.

Why choose Dell? Cisco servers are much more expensive from an acquisition and ongoing maintenance perspective.
Why choose Brocade? I don't really need redundant directors for eight servers.
Why choose Clariion? The Symmetrix is overkill. I get five 9's from the Clariion and it looks like even with the basic spec of the Symm used, it's bubbling along at less than 10% utilization.

kostadis roussos

Chad,

Great set of comments; one thing threw me off though.

you said:

> as it's hard in NetApp configurations to create very wide
> layouts (possible, but hard) - usually aggregates (which
> are the container for flexvols, which are the containers
> for LUNs) are relatively small (16 or so spindles).

Not quite sure why you said the aggregates were relatively small (16 or so spindles).

I suspect you are confusing a raid group with an aggregate. A RAID-DP group is of course defined to be 14+2 disks.

An aggregate can consist of more than one raid group.

cheers,
kostadis

Alex McDonald

@Chuck

Thanks, always a pleasure. I thought you'd forgotten me.

@Chad

Thanks for the detailed response.

On a point of correction (your point about NetApp wide striping): it's easy to build an aggregate with many more than 16 spindles; aggregates are simply containers for RAID groups. The aggregate currently has a 16TB limit, not a 16-spindle limit. With your example of 300GB drives you can build an aggregate with approximately 56 data drives plus 8 parity, or 64 drives in a single wide-striped aggregate.

I see your point about the 128 drives you assigned to the workload for the V-Max; thanks for clarifying. The document might need a bit of tweaking to make that clearer.

As for efficiencies, I'm curious how you might address the following;

(1) These tests are run using RAID-5, with a 5-IOPS/desktop profile, which takes the V-Max 100 (or so) disks. An identical configuration on a NetApp 3100 Series at 8 IOPS/desktop and RAID-DP would only require 56 disks. The configuration in the Reference Architecture http://www.vmware.com/resources/techresources/10051 shows this. If we used a comparable # of IOPS (i.e. 4), we'd be at 32 disks.

(2) The tests only highlight the storage for the OS, not user data. V-Max doesn't support CIFS, so the VDI user home directories would need to run on a completely separate array, correct?

nate

What's the list pricing for this solution? Seems kind of a waste to allocate 96GB of memory when you only ever use half of it; even more wasteful to use 8GB DIMMs on those systems.

I'm personally pretty excited about the new HP Istanbul blades, with 16 memory slots: 12 cores and 64GB of memory (using 4GB DIMMs) at a very good price. Don't forget the 10GbE Virtual Connect + FC Virtual Connect. And no, I'm not sold on FCoE either, by the way! 10GbE + 4Gb FC is still cheaper by a long shot (about 50%), and no, I don't need 8Gb FC.

Myself, I'm not sold on the Cisco memory extender stuff - not so much the technology itself, but the fact that it's tied to the Xeon 5500, which is currently limited to 2 sockets and 8 cores per system; the memory:core ratio is way outta whack. Sure, you can load up on lower capacity DIMMs, but then you start chewing up tons of power - each DIMM is what, 4-6 watts? 8GB DIMMs are still obscenely expensive.

Also, as another poster mentioned, it sounds like you wouldn't need V-Max to accomplish this; probably a CX4 could do it pretty easily as well.

What would be impressive/interesting to see, for me at least, is a solution like the one quoted but one that really shows high utilization/efficiency, and of course the price tag! Whether it be dropping the storage specs, or increasing the # of blades/VMs/etc. (and reducing per-blade memory, say by 1/2 with 4GB DIMMs) to drive usage higher.

No HP blades here yet, just some Dell ones that we got a few months ago. They work OK, but I really was blown away by the HP tech, and the pricing came out cheaper than Dell because the blades had the 16 memory slots and could use 4GB chips, whereas Dell had (has?) to use 8GB. I haven't priced it out in a month or so though, so perhaps Dell's stuff has changed since.

Chad Sakac

@ Alex, @ John F: "The statement "as it's hard in NetApp configurations to create very wide layouts (possible, but hard) - usually aggregates (which are the container for flexvols, which are the containers for LUNs) are relatively small (16 or so spindles)." is absolutely FUD and shows that you don't understand how Aggregates work."

My statement was based on the FAS 6030 I was playing with 3 weeks ago (the purpose was: understand NetApp better, get hands-on experience, and try forcing VSA PSA claim policy rules to play with PP/VE and NetApp - a side discussion Vaughn and I were having). I'm not part of any competitive team, I just like to know things, and am fortunate that I have a big sandbox.

Admittedly not running the LATEST ONTAP rev, but 7.3.1 (close to most current). I **know** the basic element is a RAID group, and that an aggregate can span many RAID groups. So. In Filerview, how do you do this? You create aggregates, and flexvols in those aggregates, and LUNs in those flexvols - but how do you do RAID groups, or create one of these very large aggregates in the product GUI?

Do you want me to post the camtasia I recorded of this (didn't post it because I'm not in a competitive team like I believe you are)?

The answer is that I couldn't find any way. You create the Aggregate, and if you select RAID-DP (I'm doing this from memory, so please correct me if I'm wrong materially), the largest aggregate I could select was 28 drives.

Could you manually via the CLI create any layout you want of underlying RAID groups and aggregates on top of them? Sure. That's why I said "NetApp configurations to create very wide layouts (possible, but hard)" **Possible, but hard**. I know that Filerview must be due for a refresh, but that's what I meant. The answer of "it's easy via the CLI" is really "if you're already a NetApp expert, it's easy and fast with the CLI" - that's the nature of CLIs, of course.

When was the last time you logged into a NetApp filer in a VMware configuration, boys?

@Nate, @Geoff - thanks for the comment. Agreed - this ABSOLUTELY could have been done on CX4 or on Celerra at this scale - EASILY. The reason it was on V-Max is that we intend to continue to update at larger and larger scale. When we did the test, we thought we could get more UCS blades, but at the time, a single half-populated chassis was an achievement :-)

These solution efforts don't ever stop, they iterate - constantly. This feedback is useful - we're starting the next round with early views into the next version of VMware View, and also with other upcoming EMC things. This next round has goals of more prescriptive sizing guidance, higher scale, and better cost analysis. We're doing it across scales, protocols, and platforms.

John F.

@Chad

When you create an aggregate through filerview, you set the number of disks in the last screen of the add aggregate wizard.

Aggregates - add, then fill in the screens.

The last screen (Aggregate - Number of Disks) is where you get the option to set the number of disks.

To add disks to an existing aggregate from the filer console, use the aggr command:

aggr add -n

It's that easy, either through filerview or the console. If you still find this difficult or confusing you may want to try NetApp System Manager, which is even easier.

John

Vaughn

Are you guys serious about this configuration? Seriously?

I mean UCS - awesome!

VMware View w/ Linked Clones - very cool!

A V-Max with 128 300GB FC drives!!!!

You want us to buy 38+ TBs of storage in order to boot 640 VMs consuming 710 GBs of VM data!?!?

I'm not sure if we have completed the QA for V-Series in front of a V-Max, but I'd be happy to find out if it helps sell this config. Then you could enable dedupe and reduce the disks required to a fraction of what is configured here!

Cheers!

Andy

Hi Chad,

I think you're possible talking about the RAID group size setting of the aggregate rather than how many disks you can configure in an aggregate. NetApp's default RAID group size is 16 (14+2) - you can increase that to 28 (26+2), however you can concatenate many RAID groups together to span lots of spindles. If you have a system with many disks, you can create an aggregate as large as you need it up to 16TB through the GUI, but it will comprise more than one RAID group.

Don't feel I've added much to the discussion here, but just felt compelled to clarify.

Alex McDonald

@Chad

Logged in last week to the test filer I have access to over the intertubes. It's not all picking apart competitors' docs here at ShadeOfBlue Towers, you know :-) Sometimes I get to play.

What were the drive sizes? The system won't let you add more disks to an aggregate or a RAID group in the aggregate if the resulting size of the data (not parity) exceeds 16TB. Send the camtasia, and I'll get John F (who's in engineering) to take a look at it.

I'm still interested by the V-Max testing and how you handle user data in the VMs. You could put it all in a vmdk, which is what I suspect EMC did, but that brings limitations, especially with backup/recovery of the user data, as all of it is encapsulated in a vmdk file, which makes recovery cumbersome. Besides, don't VMware best practices recommend redirecting the user data to NAS (http://www.vmware.com/files/pdf/view3_storage.pdf)?

Chuck Hollis

@Vaughn and everyone else

Some people perceive VDI as all about cheap computing and cheap storage, and somehow trying to compete with the world's cheapest storage, e.g. desktop drives.

Other people look at the problem differently -- interested in driving very high server utilization, providing a better end-user experience for knowledge workers, solving some thorny problems related to security and disaster recovery, having flexibility for workload spikes, etc.

These people tend to be interested in different things than the issues you raise.

Hint: we wouldn't spend the money and the effort doing this sort of work unless there were named customers in the queue who wanted us to do this sort of testing.

To each their own.

-- Chuck

John F.

oh,

@Chad

Looks like some formatting was lost on the console command to add disks to an aggregate. Let's try that again:

aggr add N@S

Where N is the number of disks you want to add and S is the size of the disks.
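For example, to ask for eight 300GB disks (the aggregate name here is just an illustration, and the full command also takes the aggregate name, so double-check the exact syntax against your release):

    aggr add aggr_vdi 8@300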


@Alex

I only wish I were in Engineering :-) I'm in Professional Services.

John

Chad Sakac

For what it's worth - I think this is getting silly.

Alex, John F - that's exactly what I was talking about - more than 28 disks wasn't possible in the GUI, and you don't configure RAID groups (you configure the number of disks in the aggregate). I will send you the video - I'm always happy to learn more (even if it's how to do it in the CLI).

We also ALWAYS recommend putting some user data on redirected CIFS (redirection can be done several ways); there are exceptions - this BREAKS View Manager check-in/check-out, for example, which is important for mobile users. In EMC land, this is a Celerra, and we're showing excellent dedupe rates for user data on CIFS (40-50%, including public customer experiences).

@Vaughn - come on man :-)

1) customers generally don't buy infrastructure JUST for VDI use cases (particularly the storage part) - whether it's NetApp or EMC, the array is shared for MANY purposes.

2) I'm sure you're as busy as I am before VMworld, but if you're going to sling, read the docs or read the comment thread before you do that.

a) it was virtually provisioned - a thin stripe across all 128 spindles, which were LIGHTLY loaded in all the tests - and we extrapolated out to 100 spindles for 1,280 users. Those spindles could be used for other purposes at the same time.
b) My comments in the thread are clear - we wanted the initial test to be conservative, and I think we could do even more.
c) there were some cases in the test where there WAS a large performance demand through the backend.

3) The fact is that more capacity is needed (for performance - and we do note that we could see in the tests some scenarios that could saturate even the 100 drives) than the user data itself requires. In all the back and forth on the marketing and mud-slinging front, this gets glossed over.

That (as you know) is the core issue (and while everyone focuses on capacity as per Chuck's note), it's a real problem. This DOESN'T manifest in boot storms (which can be mitigated, and cache can help), but can be a problem in a wide variety of use cases (AV, patch, Outlook offline sync, and general "busy user"). Those 1280 users each have a 5400 RPM (some 7200 RPM) drive in their desktop/laptop today. Sure, there's duty cycle (they don't all need the performance at once, generally), and SURE, enterprise-class arrays are not only faster but more efficient (cache/PAM, etc) - but that helps only with some stuff, not all.

To Chuck's hint - I'm working with MANY customers who are at 1000s of desktops, but are freaked out at how much additional storage they need to buy (for the reason above) - and I have both EMC and NetApp cases.

Solving this problem, and doing it efficiently is the center of a lot of discussion around here.

BTW - in your NetApp testing and documentation, what are you folks using to generate mass workloads that emulate the full desktop user experience? What did you do that was the analog of the AutoIT tool we used with VMware in this exercise? I noticed that in your reference architecture (which I read thoroughly), the sizing guidelines are 100% focused on capacity-oriented sizing.

Looking forward to discussing at VMworld!

Chris Gebhardt

Chad, I would be happy to talk with you at VMworld about the workload we used for our VDI testing. Hint: It's the same one you used!!!

Abhinav Joshi

@Chad

As global partners, we are under NDA with VMware not to disclose the tools we used for simulating the workload, but all tools and testing results were validated by VMware prior to their co-branding the document.

For architecting any VDI solution, you need to factor in both capacity and performance. If we did only capacity-based sizing, we would have required even fewer spindles than what the Ref Arch shows, due to the multiple levels of storage efficiency available with the NetApp VDI solution. The Ref Arch document uses an average of 8 IOPS per VM as the basis of this storage architecture, although we've shown that the architecture and pricing work for mixed VDI workloads from 2 to 12 IOPS (which would allow multiple end-user types). Hosting these users in the large aggregate provided the flexibility to leverage all the pooled IOPS, to be used by different users on demand. ONTAP intelligent caching and/or the PAM module complement deduplication and helped reduce the number of disks required for this solution.

Chad Sakac

@Chris, @Abhinav - thank you for the dialog. Chris - looking forward to VMworld where we can share further. It wasn't a "loaded" question on my part - this workload generation was relatively difficult, certainly non-trivial. Even in the current environment, we didn't determine the impact of full offline rebuild of the OST - so more work to do on our part.

In all seriousness, I think it's good (if we can agree) to use common test harnesses - not just for comparison purposes (though that is also useful), but rather to combine efforts in this space - it hasn't been easy.

We are of course a VMware partner as well, and there was no NDA issue as we co-branded the documentation, but I know that when we started that was discussed. Perhaps another thing we can work on together.

Now - Abhinav - the performance discussion here is instructive. Earlier in the thread, we were ridiculed for "1280 users, roughly 100 spindles in a wide thinly provisioned pool". I've acknowledged that I think we were still a little conservative.

Look at the math - if the average user "guest IOPS" is 8 and you have 1,280 users, that's 10,240 IOPS, which at 180 IOPS per 15K drive is 57 drives - certainly not "wildly off" the 100 we were excoriated for. In the "mid-range" reference architectures we've done collaboratively with VMware, we used FAR fewer drives (that workload would drive 25 15K drives based on those docs), but I think those might have been originally too low. Those tests and documents are being updated now.

Now - the 57 drive config doesn't assume any array overhead, RAID effects (which increase the number) or caching and intelligent array IO handling (which decrease the number). Each vendor has various things that help with the former and the latter, and it's good to discuss what we're doing on those fronts. I think it's more useful to discuss them in an empirical way, comparing against oneself rather than against others (because they are SO different).
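To put both halves of that in one place, here's a rough sizing sketch in Python. The 8 IOPS/desktop and ~180 IOPS per 15K drive figures are the ones discussed above; the 70/30 read/write mix and the RAID-5 write penalty of 4 are illustrative assumptions on my part, not numbers from the testing:

    desktops = 1280
    iops_per_desktop = 8        # average "guest IOPS" figure discussed above
    iops_per_15k_disk = 180     # rough per-spindle figure for a 15K drive

    frontend_iops = desktops * iops_per_desktop           # 10,240 host-side IOPS

    # ignoring RAID and cache entirely (the "57 drive" figure above)
    print(round(frontend_iops / iops_per_15k_disk))       # -> 57 spindles

    # back-end view with an assumed 70/30 read/write mix and a RAID-5 write penalty of 4
    read_ratio, write_penalty = 0.7, 4
    backend_iops = (frontend_iops * read_ratio
                    + frontend_iops * (1 - read_ratio) * write_penalty)
    print(round(backend_iops / iops_per_15k_disk))        # -> ~108 spindles before any cache benefit

Caching, wide striping and the other array-side behaviors then pull the real number back down, which is why the empirical results matter more than the raw arithmetic.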

In our testing (with our intelligent cache), certain IO operations are dramatically improved (basic guest boot operations), particularly when you have common storage blocks being referenced many times (as is the case with a View Composer design that has a base replica) - but other operations were assisted far less by cache.

Examples included:

1) initial guest swap creation - can be mitigated by pre-configuring swap to a large fixed value,

2) AV use case (loads of reads against things beyond the common boot image) - can be mitigated through the use of CIFS redirection and NAS-based anti-virus (the Celerra can do this, and I'm sure NetApp can also). Sometimes this isn't an option - for example the check-in/check-out case I noted, which today depends on VMware View user data disks.

3) Base image patching (loads of writes against boot and application) - can be mitigated only if changes to process/mechanics are considered in the design.


Long and short - the performance question is relatively complex. NetApp and EMC both have technologies which can assist here, and I think customers benefit from these efforts on both sides.

But - this was the root of my comment - customers in the "1000's of users" range are really scratching their heads about how to:

a) get the performance and scale they are looking for (possible)
b) get the availability profile they need (while a single desktop is a "tier 6" use case, when you put thousands on a single platform they had better - in total - have a "tier 1" availability profile; if they all go down at once that's a disaster, and the network and storage subsystems are the key here - VM HA will handle ESX host failure)
c) with the right economic model (harder)
d) in a broad set of use cases (even harder)
e) while minimizing changes to how they manage and maintain clients (this is hardest of all)

I think "order of magnitude" improvements will come only when changes to patch management, application deployment are not sacred cows, but are on the table of design as customers think about new ways to deploy, manage and maintain client workloads. That last part is about a LOT more than the "brown spinny rust" thing business.

BTW - my comment about not seeing performance data was rooted in looking through the NetApp 2000 user reference architecture document: http://www.vmware.com/files/pdf/partners/netapp/netapp-2000-seat-vmware-viiew-TR-3770.pdf

Now I see that Abhinav and Chris also co-authored this document:
http://media.netapp.com/documents/tr-3705.pdf

A quick glance shows that TR-3705 does a lot more prescriptive analysis than TR-3770 on the performance considerations (though guys, the comparison table on pg 18 makes me cringe a bit - while it's true that RAM-based things like PAM, cache, and solid-state disk are awesome relative to magnetic media on space/power, in the context of a use case the data can't be directly compared, as sometimes you pass through the deduplicated cache - BTW, in VMware View Composer cases, that "block commonality" exists on any cached device for the base replica and linked clones).

But I understand the pressures to include marketing positioning info in these docs (we do it too).

-------

Gang - one other note here that I think is important - I've tried continuously to stay "above the fray" - not saying "NetApp bad" in comments, and right out of the gate, Chuck's post certainly didn't compare relative to anyone else. While some comments have been less than constructive (Vaughn, Alex - again, this is just looking through MY eyes - readers can judge for themselves), many of the NetApp folks' comments have been constructive (Chris, Abhinav, John F).

In fact, even through background followup with John F, we discussed that my "going wide (possible but hard)" comment was rooted in the fact that I was playing with FilerView (to learn and understand better only - I'm not part of any "competitive team"), and I incorrectly assumed that would be the GUI customers would use, as I'm used to EMC approaches (you manage the array via the GUI presented by the management interfaces).

While I was correct that the FilerView GUI does not let you create an aggregate larger than 28 drives (currently - I'm sure this will change over time), it's moot according to my respected NetApp colleagues, as they have indicated that users would more likely use System Manager or Operations Manager, which apparently allow this (I can't comment, as I don't have experience with those tools).

This is the hazard of direct comparisons, and why I would posit that EMC folks are not in a position to comment in an educated fashion on NetApp from a technology perspective, and the reverse is also true.

Abhinav, Chris - looking forward to discussing at VMworld - and I've always been open to collaboration, with no fear. Solving the customer challenges is far more important than petty back-n-forth.

Thomas

Isn't it interesting that people brandish NDA's only when they don't want to tell you something?

Chuck Hollis

@everyone

Thanks to Chad for providing a level of conversation and dialogue that I could not. My expectation is that intelligent and polite discussion is always welcome here, so let's aspire to that goal, yes?

Chad's most fundamental point (and mine as well) is that when you consider a modern enterprise that's looking at putting several thousand of its most productive workers on VDI, architecture and design concerns do shift in a more interesting direction.

And anyone who has had experience with 10,000 3270 devices on an SNA network plugged into a few mainframes should feel a moment of deja-vu here.

-- Chuck
