
This is a walkthrough for HP Command View for EVA. Part of my daily routine is to take a jaunt through CV, to check things over and look for alerts that I may not already be aware of.

Command View is launched from the shortcut on the Windows host, or is accessible via a web browser at https://servername:2372
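If you just want to confirm the Command View service is up before opening a browser, a quick TCP check against port 2372 does the trick. This is a minimal sketch; "servername" is a placeholder for your actual Command View host, and the port is simply the one from the URL above.

```python
import socket

# Placeholder hostname -- substitute your actual Command View server.
SERVER = "servername"
PORT = 2372  # Command View's HTTPS port, per the URL above


def command_view_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the Command View port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

This only tells you the port is listening, not that the web app is healthy, but it is a handy first check when the login page won't load.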

You are presented with the login screen:

It is here that you will also see the version number (down below). We have not yet upgraded this server to version 9.2, as we are building a VM to run Command View from so we can retire this physical box.

After login, you are greeted by the overview page. Here you will see all the EVAs listed, as well as the stats for your overall environment. The first thing of note is that two of my EVAs have bang lights on them, indicating something is amiss. I’ll investigate both as part of my next posting.

On the top right, there are some hyperlinks, your login ID, and the IP of the Command View server. Most are self-explanatory. Server Options provides a place to enter license codes, set up RSM relationships, and a few other features. I seldom visit this page; maybe once a year.

Moving right along, let’s take a look at a healthy EVA, in this case DS-SAN-2:

As you can see from the above, a healthy EVA has a number of folders beneath it, which break out into many subfolders. On the right-hand side are the numbers. It is here that you can get the logs for HP support (I'll cover that in a separate blog post). You can also see the current capacity level and the Version level, which is the XCS code release running on the EVA (6220 is the latest for the 8100 series). The left column is broken down as follows:

Virtual Disks – this is where the LUNs (vdisks in EVA-speak) are listed. The folder structure is entirely man-made; that is to say, it's for human organizational purposes (and plays an important role in setting up RSM jobs, which will be covered in a separate posting).

Hosts – here is where hosts are set up in Command View. You will provide the host's OS and its WWNs for the Fibre Channel cards. Hosts MUST be set up on every single EVA that you want to present disks from. Annoying, I know.
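Since you'll be typing those WWNs into multiple EVAs, it helps to normalize them to one consistent format first. Here's a small sketch that reduces a 16-hex-digit port WWN to colon-separated byte pairs; the exact format Command View displays may differ, so treat the target format as an assumption.

```python
import re


def normalize_wwn(raw: str) -> str:
    """Normalize a 16-hex-digit Fibre Channel WWN to colon-separated
    byte pairs, e.g. '50014380-0123-4567' -> '50:01:43:80:01:23:45:67'.
    """
    # Strip any separators (colons, dashes, spaces) down to bare hex digits.
    hex_digits = re.sub(r"[^0-9a-fA-F]", "", raw)
    if len(hex_digits) != 16:
        raise ValueError(f"expected 16 hex digits, got {len(hex_digits)}: {raw!r}")
    h = hex_digits.lower()
    return ":".join(h[i:i + 2] for i in range(0, 16, 2))
```

Running every WWN through something like this before pasting it into each EVA cuts down on the fat-finger errors that an "enter it on every array" workflow invites.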

Disk Groups – these are comprised of physical disks. Best practice says to build these in multiples of 8, all of the same speed and size. You can choose not to follow this, but your performance will suffer majorly. I've worked on rebuilding two of the four EVAs that had improperly constructed Disk Groups. It is PAINFUL to correct, but I'm glad I did; I will also cover that in a separate blog post. FYI: the Ungrouped Disks folder is for disks that have failed or been ungrouped on purpose. Ungrouping takes time, as the EVA moves data off the drive to free it up to be removed or replaced.
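The disk group rules above (counts in multiples of 8, all disks the same size and speed) are easy to check before you commit to a layout. A quick sketch, assuming you've jotted down each disk as a (size_gb, rpm) pair:

```python
def disk_group_warnings(disks):
    """disks: list of (size_gb, rpm) tuples for the physical disks in a
    proposed group. Returns warnings against the best practices above:
    disk count a multiple of 8, all disks the same size and speed.
    """
    warnings = []
    if len(disks) % 8 != 0:
        warnings.append(f"disk count {len(disks)} is not a multiple of 8")
    sizes = {size for size, _ in disks}
    if len(sizes) > 1:
        warnings.append(f"mixed disk sizes: {sorted(sizes)}")
    speeds = {rpm for _, rpm in disks}
    if len(speeds) > 1:
        warnings.append(f"mixed disk speeds: {sorted(speeds)}")
    return warnings
```

An empty return means the proposed group is clean; anything else is a hint you may be signing yourself up for the painful rebuild I mentioned.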

Data Replication – it is here that you can create DR groups, which allow you to replicate (synchronously or asynchronously) between EVA arrays. A replication group is comprised of one or more vdisks. At the risk of sounding like a broken record, I will have a separate posting on replication.

Hardware – it is here that you can check the status of the hardware. Both controllers are listed, as are all the disks. If there are hardware issues (flagged by a bang light), you can come here to find out why. The status of a failed item is usually fairly straightforward about what happened and what should be done.

About Brian

Brian is a Technical Architect for a VMware partner and owner of this website. He is active in the VMware community and helps lead the Chicago VMUG group, specializing in VDI and Cloud project designs. He was awarded VMware vExpert status for 2011 & 2012, and holds VCP3, VCP5, VCA-DT, VCP5-DT, and Cisco UCS Design certifications.
