The basic "Server Building Block" for your virtual infrastructure
by Johan De Gelas on October 7, 2009 12:00 AM EST- Posted in
- Virtualization
If you read our last article, it is clear that when your applications are virtualized, you have a lot more options to choose from when building your server infrastructure. Let us know how you would build up your "dynamic datacenter" and why!
48 Comments
JohanAnandtech - Thursday, October 8, 2009 - link
So you use ESXi for production? Do you manage your servers through RDP/SSH sessions? I can imagine that is still practical for a limited number of servers. How far would you go? (Like how many servers before you would need management software?)
crafty79 - Friday, October 9, 2009 - link
We use ESXi in production too. Of course we have VirtualCenter managing it, though. Why wouldn't you use ESXi? 10x fewer patches, fully managed remotely through the API, more resources available for VMs.
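For a feel of what "fully managed remotely through the API" can look like in practice, here is a minimal sketch that connects to vCenter (or a standalone ESXi host) and lists every VM with its power state. It assumes the pyvmomi Python bindings for the vSphere API; the hostname and credentials are placeholders.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to vCenter or a standalone ESXi host (placeholder host/credentials).
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Walk the inventory and print every VM with its power state.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        print(vm.name, vm.runtime.powerState)

    view.DestroyView()
    Disconnect(si)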
Ninevah - Friday, October 16, 2009 - link
How do you do the licensing for those ESXi servers with VirtualCenter? I thought the centralized licensing system with vCenter required one license file containing all the ESX licenses in order for vCenter to manage all those servers.
xeopherith - Monday, October 12, 2009 - link
I have one Linux server so far, and I have always managed all my machines through some kind of remote access, whether that means SSH, terminal services, or whatever. I don't see why that would be a disadvantage. I'm working through the couple of problems I have run into so far, and I would say it has been pretty successful.
The only really difficult part was that, except for two, all my machines were migrated from physical to virtual. The couple of problems I have run into (network performance and disk performance) seem to stem from that, but changing some of the settings resolved them completely.
monkeyshambler - Thursday, October 8, 2009 - link
Personally I'm not the greatest fan of virtualisation as yet, but going with the theme, it would have to be dual-socket rack servers alternately kitted out with either SSDs (for database server virtualisation) or large-capacity disks (450-600GB 15k drives) for general data serving. I think you still get far more value from your servers by separating them out into roles, e.g. database, web server, office server.
Admittedly, the office servers are excellent candidates for virtualisation due to their often low usage.
The key to many systems these days is the database so making a custom server with solid state drives really pays off in transaction throughput.
I'll buy into virtualisation more thoroughly when we can partition VMs according to timeslice usage, so we can guarantee that they will always be able to deliver a certain level of performance, like mainframes used to provide.
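For what it's worth, per-VM reservations and limits in VMware already go part of the way toward that kind of guaranteed partitioning. A minimal sketch (pyvmomi again; the vm object and the MHz/MB figures are illustrative assumptions) of pinning a CPU and memory floor to one VM:

    from pyVmomi import vim

    # 'vm' is assumed to be a vim.VirtualMachine object obtained as in the
    # earlier listing sketch; the MHz/MB figures are arbitrary examples.
    spec = vim.vm.ConfigSpec()
    spec.cpuAllocation = vim.ResourceAllocationInfo(
        reservation=2000,   # guarantee 2000 MHz of CPU
        limit=4000,         # cap the VM at 4000 MHz
        shares=vim.SharesInfo(level="normal", shares=1000))
    spec.memoryAllocation = vim.ResourceAllocationInfo(
        reservation=4096,   # guarantee 4 GB of RAM
        limit=-1,           # no upper limit
        shares=vim.SharesInfo(level="normal", shares=1000))

    task = vm.ReconfigVM_Task(spec=spec)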
9nails - Wednesday, October 7, 2009 - link
Blade servers can get me more CPUs per rack unit, but really, CPUs aren't the bottleneck. It's still disk and network. With that in mind, I can get more cards into a bunch of 2U rack servers than I can into a blade server chassis. And with more I/O and network connectivity to my servers, backups aren't as bottlenecked on the wire as they are with blades. My hero is not the servers but the 10 Gb FCoE cards.
Casper42 - Wednesday, October 7, 2009 - link
Our upcoming design is based on the following from HP:
c7000 Chassis
BL490c G6 (Dual E5540 w/ 48GB initially)
Flex-10 Virtual Connect
Storage we are still debating between recycling an existing NetApp vs saving the money on the FC Virtual Connect & HBAs and spending it on some HP LeftHand iSCSI storage.
A little birdie told me you will see an iSCSI blade from LeftHand early next year. Imagine an SB40c that doesn't connect to the server next door, but instead has 2 NICs.
So when it comes to "Virtualization Building Blocks", you can mix and match BL490s and these new iSCSI Blades.
Need more CPU/RAM, pop in a few more 490s.
Need more Storage, pop in a few more LeftHand Storage Blades.
With expansion up to 4 chassis in 1 VC Domain, you can build a decent-sized VMware cluster by just mixing and matching these parts.
Outgrow the iSCSI Blades? You can do an online migration of your iSCSI LUNs from the Blade storage to full blown P4000 Storage nodes and then add more 490s to the chassis in place of the old Storage Blades.
This allows you to keep your iSCSI and vMotion traffic OFF the normal network (keeping your Network team happy) and still gives you anywhere from 10 to 80 Gbps of uplink connectivity to the rest of the network.
Now if you really want to get crazy with the design, add in the HP MCS G2 (self-cooling rack) and you can drop a good-sized, very flexible environment into any room with enough power and only need a handful of fibre cables coming out of the cabinets.
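The "pop in a few more 490s" growth model boils down to adding hosts to an existing cluster. A rough sketch of what that looks like through the vSphere API, assuming pyvmomi and placeholder names/credentials (the datacenter object comes from the inventory as in the earlier listing sketch):

    from pyVmomi import vim

    # 'datacenter' is assumed to be a vim.Datacenter object from the inventory.
    cluster_spec = vim.cluster.ConfigSpecEx(
        drsConfig=vim.cluster.DrsConfigInfo(enabled=True),
        dasConfig=vim.cluster.DasConfigInfo(enabled=True))
    cluster = datacenter.hostFolder.CreateClusterEx(
        name="blade-cluster", spec=cluster_spec)

    # Join a freshly racked blade to the cluster (placeholder host/credentials).
    host_spec = vim.host.ConnectSpec(
        hostName="esx-blade-01.example.com",
        userName="root",
        password="secret",
        force=False)
    task = cluster.AddHost_Task(spec=host_spec, asConnected=True)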
mlambert - Thursday, October 8, 2009 - link
Casper has the basic idea (c-Class with BL490s), but I'd go with FC VC along with the 10Gbit VC, scratch the LH iSCSI, and go with NTAP NFS for every datastore except your transient data/swap VMFS (FC makes sense for those). Use the FC for SAN boot of all ESX hosts and any possible RDMs you might need. Toss in SMVI for backup/restore + remote DR. You could stay cheap and go with normal 16/32-port 4Gb Brocades to offset the Nexus 7000s with 10Gbit blades.
FAS3170c's with the new 24-disk SAS shelves and all 450GB disks. Maybe a PAM card if you have a bunch of Oracle/SQL to virtualize with heavy read I/O requirements.
That's about it. Really simple, easy to maintain, basic array-side cloning, 100% thin provisioned + deduplicated (besides the transient data VMFS), with built-in remote site DR as soon as you need it.
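Mounting an NFS export as a datastore on each ESX host is a single API call; here is a minimal sketch with pyvmomi, where the filer address, export path and datastore name are placeholders and host is an already-retrieved vim.HostSystem object:

    from pyVmomi import vim

    # 'host' is assumed to be a vim.HostSystem object from the inventory.
    nas_spec = vim.host.NasVolume.Specification(
        remoteHost="filer01.example.com",   # NFS filer address (placeholder)
        remotePath="/vol/vmware_ds1",       # NFS export (placeholder)
        localPath="ntap_ds1",               # datastore name as seen by ESX
        accessMode="readWrite")

    datastore = host.configManager.datastoreSystem.CreateNasDatastore(spec=nas_spec)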
rbbot - Tuesday, October 13, 2009 - link
I've heard that you have to have local storage on the blade in order to have a multipath connection to the SAN - if you use SAN boot you are stuck with a single path. Does anyone know if this is still true?
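Whatever the answer turns out to be, the path count is easy to verify from the API side. A small sketch (pyvmomi; host is again an assumed vim.HostSystem object) that prints how many paths, and how many active paths, each LUN has:

    # 'host' is assumed to be a vim.HostSystem object from the inventory.
    storage = host.configManager.storageSystem.storageDeviceInfo

    # Map LUN keys to human-readable device names.
    names = {lun.key: lun.canonicalName for lun in storage.scsiLun}

    for mp_lun in storage.multipathInfo.lun:
        active = [p for p in mp_lun.path if p.pathState == "active"]
        print(names.get(mp_lun.lun, mp_lun.id),
              "paths:", len(mp_lun.path), "active:", len(active))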
"scratch the LH iSCSI and go with NTAP NFS"Why? Curious!