We originally rolled out our HP C-Class blade infrastructure with Cisco network & Brocade SAN switches, and I’ve blogged about some of the challenges of managing lots of blade switches. Today, I’m going to talk about how the HP Virtual Connect equipment solved a lot of those problems for us and gave us a more flexible, easier-to-configure blade system.
When blades first started rolling out, we had two ways of connecting them to networks and storage:
- Switches – just like regular switches, except they slide into the back of the blade chassis. They have internal wiring to each blade and a few external network ports to connect to the rest of the datacenter’s switches. Those ports can be used as uplinks, or other devices like iSCSI storage can be plugged into them. This solution is hard to maintain because it takes specialized Cisco knowledge (or knowledge of whichever vendor’s switches are used), and the switches aren’t typically integrated with the blade chassis management.
- Pass-throughs – a device in the back of the blade chassis with one external network port for every blade network connection. In a fully populated C-Class chassis, that’s 32 network ports just for the two onboard NICs in each server, let alone any additional mezzanine NICs. This solution is much cheaper in terms of blade equipment, but it’s a cabling nightmare, and every time an admin needs to change a network port, they have to walk into the datacenter and touch the cabling.
HP’s Virtual Connect offers a new hybrid solution that takes the best of both, and offers some new abilities that aren’t available with either of the previous architectures.
Virtual Connect is a Smarter Passthrough
In a nutshell, the Virtual Connect module dynamically passes any network port’s traffic through to another network port. It’s a smarter passthrough that can aggregate traffic from multiple servers into a single uplink or multiple uplinks.
VMware administrators will find Virtual Connect management extremely similar to the network management built into VMware. Think of the blade chassis as the VMware host: a couple of network cards can be configured with trunking to support multiple guests, each on its own VLAN. The Virtual Connect gear acts just like a VMware host would, adding the appropriate VLAN tags to traffic as it exits the Virtual Connect module and heads up to the core network switches.
Virtual Connect can also handle VMware servers inside the chassis, passing even VLAN-trunked traffic through to the servers. Be aware that this does require some deeper network knowledge, but shops already running VMware hosts can handle it. I’ve done VMware administration part-time at our shop for the last couple of years, and I was easily able to configure Virtual Connect for our VMware hosts.
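To make the analogy concrete, here’s a rough sketch in Python – not anything HP ships – of the idea: the module maps each untagged, server-facing network onto a VLAN tag on the shared uplink, much like a VMware vSwitch tags guest traffic. The network names and VLAN IDs are invented for illustration.

```python
# Conceptual sketch only (not HP's software): map server-facing networks
# to VLAN tags on the shared uplink, the way a vSwitch tags guest traffic.
SERVER_NETWORKS = {
    "Prod-Servers":  10,   # hypothetical network name -> VLAN ID
    "iSCSI-Storage": 20,
    "VMotion":       30,
}

def tag_for_uplink(network_name, frame):
    """Return (vlan_id, frame) as the traffic would leave the shared uplink."""
    vlan_id = SERVER_NETWORKS[network_name]
    return vlan_id, frame

# A blade NIC on "Prod-Servers" sends plain untagged traffic; the module
# adds VLAN 10 before the frame heads up to the core switch.
print(tag_for_uplink("Prod-Servers", b"example ethernet frame"))
```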
Here are a few links I found really helpful with Virtual Connect and VMware:
- VMware ESX Server networking with HP Virtual Connect by Scott Lowe
- HP VirtualConnect Clarification by Scott Lowe
- And yes, Scott Lowe’s blog in general
When two Virtual Connect modules are plugged side-by-side into the same blade chassis, they have a built-in, hard-wired 10Gb cross-connect link between them. This allows for some amazing failover configurations. We wired both modules up to the same datacenter core switch and set up a single virtual uplink port across both Virtual Connect modules. Virtual Connect automatically pushes all of the traffic through one side by default, but if that uplink fails, the traffic automatically switches over to the other module’s uplink – completely transparently to the blade server. That’s something we couldn’t even do with our Cisco gear.
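Here’s a minimal sketch of that active/standby behavior, purely to illustrate the idea – the real failover logic lives in the Virtual Connect firmware, and the port names below are hypothetical.

```python
# Illustrative only: two uplinks form one virtual uplink; traffic prefers
# the first healthy one and fails over if it goes down.
uplinks = [
    {"name": "VC module 1, port X1", "healthy": True},
    {"name": "VC module 2, port X1", "healthy": True},
]

def active_uplink():
    """Return the uplink currently carrying traffic for the virtual uplink."""
    for uplink in uplinks:
        if uplink["healthy"]:
            return uplink["name"]
    raise RuntimeError("no healthy uplinks left")

print(active_uplink())           # VC module 1, port X1
uplinks[0]["healthy"] = False    # simulate a pulled cable or dead core switch port
print(active_uplink())           # traffic now rides VC module 2, port X1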
Virtual Connect beats regular pass-throughs in another way: it dramatically reduces the amount of cabling required. Set up the Virtual Connect uplinks once with just one or two uplink cables per switch, and only add additional uplinks when performance requires it. Instead of two uplink cables per server with traditional pass-through solutions, Virtual Connect requires as few as two cables per sixteen servers! Of course, most shops will opt for a couple of additional cables for redundancy and performance, but it’s an option instead of a 32-cable requirement.
Virtual Connect is a Simpler Switch
Like the rest of our Wintel server management team, I don’t know anything about managing Cisco switches, and I’m not about to learn at this stage in my life. Therefore, I was exceedingly happy to open the Virtual Connect page and see a familiar HP management interface.
The Virtual Connect web user interface looks and feels exactly like the rest of HP’s management tools: Systems Insight Manager, the iLO2, and the C7000’s Onboard Administrator. Server managers will immediately feel comfortable with the wizard-based UI, which can be used without any training. If you’ve managed HP servers, you can manage HP Virtual Connect.
That’s not to say that users shouldn’t read the documentation carefully when designing the initial infrastructure: like switches, the Virtual Connect modules can do some powerful stuff, but it takes planning and forethought.
Faster Rollouts
Most of our blade network connections consist of a few simple profiles:
- Basic server – two network cards both on the server subnet, using failover between the two
- Clustered server – the basic server, plus two network cards on a heartbeat subnet
- iSCSI server – the basic server, plus two network cards on an iSCSI subnet
- VMware server – a specialized configuration with traffic from multiple VLANs
We roll out these same types of servers over and over, but with conventional switchgear, every server rollout was like reinventing the wheel. We had to double-check every network port, and human error sometimes delayed us by hours or days.
Virtual Connect brings a “Profile” concept to switchgear: we set up these basic profiles once, then duplicate them with a few mouse clicks. A junior sysadmin rolling out a new VMware blade doesn’t need to understand the complexities of trunked VLAN traffic, a dedicated VMotion NIC, and so on – they just use the custom VMware profile we set up ahead of time, and all of the network ports are configured according to our standards.
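As a mental model – and only a mental model, this isn’t HP’s data format – the profiles boil down to something like the templates below, and rolling out a new blade is just cloning one of them. The template contents and server name are made up.

```python
import copy

# Hypothetical templates matching the server types listed above.
PROFILE_TEMPLATES = {
    "basic":   {"nics": ["Prod-Servers", "Prod-Servers"]},   # failover pair
    "cluster": {"nics": ["Prod-Servers", "Prod-Servers", "Heartbeat", "Heartbeat"]},
    "iscsi":   {"nics": ["Prod-Servers", "Prod-Servers", "iSCSI", "iSCSI"]},
    "vmware":  {"nics": ["VLAN-Trunk", "VLAN-Trunk", "VMotion"]},
}

def new_server_profile(template_name, server_name):
    """Clone one of our standard templates for a newly racked blade."""
    profile = copy.deepcopy(PROFILE_TEMPLATES[template_name])
    profile["server"] = server_name
    return profile

# The junior sysadmin's "few mouse clicks", expressed as a function call.
print(new_server_profile("vmware", "ESX-Blade-05"))
```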
Since the Virtual Connect infrastructure is managed by the same staff who do blade implementations and rollouts, there are no delays waiting for the network team, no failures in communication, and no finger-pointing when configurations go wrong. Blade rollouts are handled entirely by one team, start to finish.
Easier Recovery from Server Hardware Failures
We haven’t implemented boot-from-SAN yet, but with the VC infrastructure, I can finally see a reason to boot VMware and Windows servers from the SAN. Virtual Connect manages the MAC address of each network card and the WWN of each HBA, remapping them to addresses from its own internal pool (or a list the company chooses).
In the event of a blade hardware failure, like a dead motherboard, the system administrator can simply remap that blade’s network profile to another blade and start it up. The new blade takes over the exact same MAC addresses and WWNs as the failed blade, and can therefore immediately boot from SAN using the failed blade’s storage and network connections! That gives administrators much more time to troubleshoot the failed blade’s hardware.
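A quick sketch of why this works: the MAC addresses and WWNs live in the profile, not in the blade’s hardware, so whichever bay the profile is assigned to presents the same identity to the network and the SAN. The addresses and bay numbers below are invented for illustration.

```python
# Illustrative profile: the identity the network and SAN fabric see.
profile = {
    "name": "SQL-Prod-01",
    "macs": ["00-17-0A-00-00-10", "00-17-0A-00-00-12"],   # made-up MAC addresses
    "wwns": ["50:06:0B:00:00:C2:62:00"],                   # made-up WWN
    "bay":  3,                                             # blade currently assigned
}

def move_profile(profile, spare_bay):
    """Reassign the profile to a standby blade; the addresses travel with it."""
    profile["bay"] = spare_bay
    return profile

# Bay 3's motherboard dies: move the profile to the standby blade in bay 16,
# power it on, and it boots from the same SAN LUN under the same WWN.
print(move_profile(profile, spare_bay=16))
```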
With this kind of flexibility, we can justify keeping a high-performance blade as a standby, ready to recover from any blade’s hardware failure. The cost of doing this is relatively low, since one blade acts as a standby for all of the blades in any chassis.
Easier Hardware Upgrades
Along the same lines as hardware failure recovery, VC also allows for easier hardware upgrades & swaps. If the company goes with a standard blade hardware platform (like all Intel 2-socket blades, or all AMD 2-socket blades), hardware upgrades can be done with a single reboot:
- Ahead of time, build a new blade with the desired new configuration (more or faster CPUs, more memory, etc). Burn it in and do load testing in a leisurely manner, making sure the hardware is good.
- Shut down the old blade.
- Using Virtual Connect, copy the old blade’s profile over to the new one. This takes a matter of seconds and can be done remotely.
- Boot up the new blade.
Taking this concept to an extreme, one could even use this approach for firmware upgrades. Upgrade a standby blade to the latest firmware, burn it in to make sure it works, and then do the hardware swap. (I wish I had done this recently – I had a firmware upgrade go wrong on a SQL blade, but thankfully it was in a cluster.)
Simple Packet Sniffing Built In
When we run into difficult-to-solve network issues, we sometimes rely on our network team to capture packets going to and from the machine in question. I was pleasantly surprised to find that the Virtual Connect Ethernet modules can mirror a network port’s traffic to any other network port. We can set up a packet sniffer on one blade, then use that blade as a diagnostic station when another blade is having network-related problems. For even more flexibility, we can take a laptop into the datacenter, plug it into one of the Virtual Connect’s external ports, and set that port up as the mirror.
Is this something the Cisco switchgear can do? Absolutely, but it’s not something a Wintel server administrator can do with Cisco switches. I would never dream of trying to set that up on a Cisco switch, but with the HP, it takes just a few mouse clicks in a web browser.
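Conceptually, port mirroring is nothing more than delivering a copy of every frame to a second port where the sniffer sits. Here’s a trivial sketch of that idea, with hypothetical port names – the real feature is just a setting in the VC web UI.

```python
def mirror(frames, monitored_port, mirror_port):
    """Yield (port, frame) pairs: normal delivery plus a copy for the sniffer."""
    for frame in frames:
        yield monitored_port, frame   # normal delivery to the troubled blade
        yield mirror_port, frame      # copy sent to the sniffer port

# Mirror a blade's traffic to an external port with a laptop plugged in.
for port, frame in mirror([b"frame 1", b"frame 2"], "Bay 7, NIC 1", "External port X2"):
    print(port, frame)
```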
The Drawbacks of Virtual Connect
HP’s question-and-answer page about Virtual Connect points to one political challenge: the network team may not like bringing a new network technology into the datacenter. Given the option between buying their standard switches and the new Virtual Connect switchgear, they’re probably going to prefer the former. For me, the winning argument was ease of configuration, and getting everyone to see Virtual Connect as an extension of the blade system’s capabilities rather than an extension of the core network switches. Virtual Connect is a big piece of what makes blades faster and easier to roll out than conventional servers. It’s part of a larger picture, part of a new way to implement server infrastructures.
The second challenge is that to see the real benefit, organizations need Virtual Connect modules in every blade chassis. That way, administrators can transplant profiles and servers from one chassis to another, which is where the flexibility really pays off. To do that, chassis buyers need to take the leap and buy Virtual Connect modules in their very first blade chassis; otherwise, they probably won’t go back and retrofit each pre-existing chassis with Virtual Connect modules, especially since they’re more expensive than the traditional Cisco switches.
Finally, yes, the Virtual Connect modules are somewhat more expensive than the Cisco blade switches. It’s odd for me to think of something more expensive than Cisco, but having worked with both the HP Virtual Connect modules and Cisco switches, I can completely understand why they’re worth the additional price.
For me, the return on investment is clearly there: blades are all about faster rollouts, a more flexible infrastructure, and higher uptime. The HP Virtual Connect system delivers on all three of those goals, and I would recommend it for any shop building an HP blade infrastructure.
Want to Read More About My HP Blade Experiences?
Here are a couple more related posts:
- HP C-Class Blade Chassis Review Part 1 – about the chassis and blades
- HP C-Class Blade Interconnects Review – about the Cisco and Brocade switches