Saturday, February 28, 2009

MS Exchange P2V

I've been prepping for this one for a long time. Anyone can vouch for how important email systems are in today's world. Several months back, during planning for 2009, my Exchange 2K3 server came up as reaching the end of its hardware warranty. While looking into extending it, I wasn't excited about the cost compared to going virtual (on ESXi).

My Exchange environment is simple - 1 server, fewer than 200 users, OWA, & OMA. While this is 'simple', it means everything rides on just one box. Sometime down the road it'll move toward a multi-server, split-role configuration.

Given the high importance of email and the relatively 'young' status of the physical-to-virtual process, I had concerns about using P2V on an Exchange server. After much searching and reading, largely through the VMware communities, I gained confidence that if done properly it would work well.

The Keys to Exchange P2V
  1. Testing - build a test Exchange instance and utilize tools like JetStress
  2. Backups
  3. Consistent Exchange DBs (see the quick check after this list)
  4. Minimal system activity during the P2V
  5. A highly documented process/plan
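
To expand on point 3: before trusting any copy of the databases, I wanted proof they were in a clean, consistent state. A quick way to check (the paths below are the Exchange 2003 defaults plus a hypothetical D: data drive - adjust for your own layout) is eseutil's header dump, run after the Information Store has been stopped:

  rem Dump the database header; look for "State: Clean Shutdown" in the output
  "C:\Program Files\Exchsrvr\bin\eseutil.exe" /mh "D:\Exchsrvr\mdbdata\priv1.edb"

A "Dirty Shutdown" state means outstanding transaction logs haven't been committed, and that copy of the database is not one you want to carry through the migration.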

The summary of my procedure was:
1) Full backup
2) Cut off network access
3) Disable client-related services
4) Incremental backup
5) Disable ALL non-system-critical services
6) Robocopy the (local) data drive (Exchange databases) to the iSCSI drive & disconnect (a sketch follows this list)
7) P2V the system drive, then shut down the physical box
8) Boot up the VM, configure the VM network devices, and connect the iSCSI drives
9) Start services and check event logs after each service startup
10) Test "internal" clients
11) Open up external client access and test
12) Full production
13) Full backup
14) Monitor, monitor, monitor
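
For steps 5 and 6, here's roughly what that looked like (the service names are the Exchange 2003 defaults; the drive letters and log path are placeholders, not my actual layout):

  rem Step 5: stop Exchange so the databases dismount cleanly
  rem (/y answers the prompt to also stop dependent services)
  net stop "Microsoft Exchange Information Store" /y
  net stop "Microsoft Exchange System Attendant" /y

  rem Step 6: copy the data drive to the iSCSI-attached volume
  rem E: = local Exchange data drive, I: = iSCSI drive (placeholders)
  robocopy E:\ I:\ /E /COPYALL /R:2 /W:5 /LOG:C:\p2v-robocopy.log

Checking the robocopy log for errors before disconnecting the iSCSI drive is cheap insurance - the whole point is that the copy is exact.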

The entire process went smoothly and the new Exchange VM is functioning perfectly. Hopefully this is useful for those unsure about running Exchange Server in a virtual environment or considering an Exchange P2V.

Net positives of this operation include: less power draw, longer UPS standby capacity, less heat generation, freed-up rack space, better performance, simplified DR, reduced maintenance costs, no client setting changes, etc...

Drawbacks - (Yawn) a late night of lost sleep to catch up on.

Side note: OEM licenses are not transferable to different hardware - you must have a non-OEM license for the new virtual system if the physical box was OEM licensed.

Some helpful resources I'd found:
Article 1
Article 2
Article 3(PDF)

Tuesday, February 24, 2009

Citrix XenServer - Now Free

A Slashdot article yesterday afternoon points out that Citrix has made their XenServer product freely available. Big news for Xen fans or those still on the fence about which virtualization platform to adopt. The XenServer product includes many of the advanced options that VMware charges handsomely for in their VI3 product line (XenMotion, templates, centralized management, etc.).

As the Slashdot article also mentions, this announcement comes on the crest of VMware's flagship event - VMworld. VMware users should be biting their proverbial nails in anticipation of VMware's response (I know I am). Many have predicted the pending commoditization of hypervisors, followed by their advanced features, and I believe that time is here.

Saturday, February 21, 2009

Article: How VMware Writes I/O to Disk

Mike D has a blog entry pointing over to a KB article laying out how the various VMware products handle disk I/O (it depends on the product and the host OS!). If you've ever been curious or concerned (you should be) about how I/O travels from guest to hypervisor (to host OS) to hardware, and about consistency in the event of a physical crash, it's a great read. VMware product users on Linux/Mac hosts, take note!

Tuesday, February 17, 2009

New Poll: Virtualization Platform of Choice

It's been quite some time since I've put up a poll - so here's a new one. There is a plethora of solutions when it comes to selecting a virtualization platform. I (and others) are interested in hearing what your selection for datacenter use is (or was), why you chose that route, and whether you've changed solutions. I'm guessing VMware, one of the long-time incumbents in the x86 virtualization arena, is going to be favored (for various reasons).

I'll cast my vote and share my thoughts. I used VMware products of the workstation variety "many" years ago in order to run Windows and Linux concurrently on the same laptop. The early iterations of their products were OK, I thought, for desktop/workstation use, but I couldn't see using them in the datacenter. Through a couple of versions of the workstation product I observed significant improvements; then for a few years virtualization was off my radar for various reasons.

2007 planning begins, and as I look at replacing my aged equipment, virtualization comes back on the radar. The decision is made to 'crawl' into virtualization in the datacenter with the VMware Server 1.x product. Into 2008 and VMware Server 2, and virtualization is here to stay. 2009 - new storage, server capacity needs, and the availability of (free) ESXi have found me on the ESXi platform with an excellent commercial-hardware, OSS-software SAN. My environment is 'small', so free ESXi is an excellent fit that allows for adding centralized management and other advanced features when the need arises.

My long history using VMware, seeing its progress, and the maturity of its current solutions made it my top selection. Other solutions are still in their infancy, some too far away from the metal, some lacking ease of management or corporate stability, some too costly, etc.

Monday, February 16, 2009

NFS Performance Tips

Since I use NFS in my environment, I look around every now and then for tidbits on how to make it perform better. I ran across this article titled NFS for Clusters. I'm not running any "clusters" that utilize NFS storage, but the concept of a well-running NFS service applies everywhere. The article contains some key diagnostics to help determine where NFS may need some tweaking (and how to interpret those diagnostics).
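
For anyone who wants a quick starting point before digging into the article, the client-side numbers I glance at first come from plain old nfsstat on Linux (the interpretation - what counts as "too many" retransmissions - is where the article earns its keep):

  # RPC call vs. retransmission counts from the NFS client's perspective -
  # a climbing retrans count usually points at the network or an overloaded server
  nfsstat -rc

  # Per-mount details: protocol version and the rsize/wsize actually negotiated
  nfsstat -m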

Tuesday, February 3, 2009

Windows VMware Guest Time Sync KB

Accurate time is critical in computing environments - logging, compliance, auditability, etc. VMware has a nice, short KB article on configuring time sync for Windows guest systems, including the case of domain controllers. A good read considering the mischief an 'off' clock can cause.
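
The short version, as I read it: pick one time source and disable the other. In my case the guests are domain members, so w32time does the work and the VMware Tools sync gets switched off. Roughly (treat these as illustrative settings, not a substitute for the KB):

  tools.syncTime = "FALSE"

That line goes in the guest's .vmx file (with the VM powered off) to disable Tools-based sync. Then, inside the Windows guest, point w32time back at the domain hierarchy:

  rem Sync from the domain hierarchy; /update notifies the service of the change
  w32tm /config /syncfromflags:domhier /update
  rem Restart the service for good measure
  net stop w32time && net start w32time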

2009 SAN Project

I wrote earlier about investigations into a SAN solution for 2009 and then, a bit later, about the eventual decision. The summary of those two posts is that numerous solutions exist and that commercial "boxed" solutions were not the right fit for my employer. My previous SAN was a Linux-based Thinkmate system loaded up with SATA drives and two 3ware controllers. It was great from a capacity standpoint, but despite every configuration adjustment, it could never provide adequate I/O over iSCSI to support more than 5-6 VMs along with file serving. It is also at the end of its warranty, and Thinkmate does not offer warranty support past 3 years.

My new solution is as follows. The equipment is from Dell: a PE 2900 racked as the storage controller. Yes, it is a big box, but it offers 4 PCIe slots: (2) PERC 6/E 512MB controllers and (2) dual-port NICs. The box will run CentOS x64, NFS, and iSCSI. One MD1000 array shelf full of 15K SAS drives will host the VM images, and another shelf of 15K SAS will hold various data volumes. VMs will be shared out to the ESXi hosts over NFS. Use of NFS with VMware is mature and will allow for backup of the VMs via the storage controller and some minor scripting - no special backup licenses, etc. VMs will run software iSCSI initiators when they need access to data LUNs sliced from the MD1000 "data" array. LAN, NFS, and iSCSI are all separate networks, VLAN'd across 2 switches, with each server having 2 connections to each network (one on each switch).
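
For the curious, the NFS side is nothing exotic. A rough sketch of the export for the VM datastore volume (the path and subnet are made-up placeholders for my layout; ESXi's NFS client connects as root, hence no_root_squash):

  # /etc/exports on the CentOS storage controller
  # 192.168.20.0/24 = the dedicated NFS VLAN (placeholder subnet)
  /export/vmstore  192.168.20.0/24(rw,no_root_squash,sync)

After editing, an exportfs -ra picks up the change without restarting the NFS service, and the share gets added on each ESXi host as an NFS datastore.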

How does it work? Let's just say I'm pleased with the results and completely comfortable with the decision versus a boxed all-in-one SAN solution. Link failover is beautiful, IOPS are fantastic, and storage capacity is up.
