Tuesday, December 22, 2009

How I Became a Sysadmin

An interesting article was posted on the SysAdvent calendar titled "Becoming A Sysadmin". After reading it, I thought: why not put up my own story? As the article notes, the path to becoming a Sysadmin is not a 1-2-3 step process; it is most often an evolution of roles with a pinch of circumstance. That evolution of roles lends itself to the vague definition of a Sysadmin's responsibilities, and as the article mentions, putting "Sysadmin" in a job description's responsibility list is no indicator of actual responsibilities. Another part of the article raises the question of what characteristics make a person want to be, or enjoy being, a Sysadmin. I'm hoping this doesn't read too much like a resume.

I'll start with the foundation question - what characteristics drive a person to enjoy or want to be a Sysadmin? I think creativity is one trait. Creativity plays a part in many aspects of the job: writing scripts / code, problem solving, designing solutions for networks, storage, etc. I've always enjoyed building things - from Legos when younger, to metal fabrication, and even homes. Of course some desire for control needs to exist, but I think the better analogy is a coach or co-captain guiding a team towards a common goal. Sometimes you get to pick (aka design) your team, and sometimes you have to work with what you are given. This may just be my experience, but a good many of the 'tech' people I know have a strong liking for music - in fact, many have played or still play instruments. I stopped playing saxophone going into college because it was a major time commitment - I do miss playing though.

I've heard IT folk tend to stay in a job or position for maybe 5 years or so, which is usually explained by the IT person's need for new challenges. It is, after all, one of the draws of technology - ever changing; sometimes one of its drawbacks too. My 12-year history in IT work, the only professional arena I've worked in, follows the pattern fairly well - though mostly due to life circumstances and not solely my decision to change things up.

Onward with my path. I went through college on the Computer Science / Mathematics major/minor track. I started my first 'career'-oriented job going into my 3rd year of college, as a programmer in an engineering department using SGI Irix workstations. It was an incredibly exciting role at the time and provided me the opportunity to learn many technologies, as well as my likes and dislikes regarding IT roles. Beyond the skills I took away from the job, I found that while creating code was OK, I truly enjoyed networking and systems management much more. It was also during this time that I developed a strong skill set with Linux.

I moved on to my next position due to my wife's pursuit of her masters degree at a university in another city. My *nix background landed me a position managing an HP-UX based environment running SAP on Oracle with some nice EMC storage. I worked with some great people there, but it was definitely a very specific set of tasks and the challenge quickly faded. I learned a lot about SANs, a bit about SAP and Oracle, expanded my network skills, and benefited personally by observing great management. My role was made redundant after an acquisition, and I was laid off.

I spent the following few months getting my resume out, networking, and mowing lawns - which I actually enjoyed for the time I did it.

And that gets to my current and longest-held position. I became the "Systems Administrator" in mid 2004 at a smallish company (150 people). It is quite the opposite of my previous role in the realm of responsibilities - my saying now is: "If it plugs into the wall, rings, buzzes, beeps, or takes batteries, I'm responsible for it." It was a very simple environment that provided me lots of opportunity to advance my Windows-based management skills while significantly improving operational efficiency, security, and reliability. The business has been in a constant state of positive change and the challenges have been great. Today I work alongside a (SAP) Business Analyst and another person who specializes in application support and training. I frequently interface with the parent company's IT group in Japan as well as a few other US sister companies. I travel very little, which allows me lots of time with my young children at home and just enough away ;)

That's my story and I'm sticking to it - evolution of a Sysadmin.


Friday, October 30, 2009

New Location ToDo List

The 'new' site I've been working with over the past several weeks has several traits I would say are pretty typical. A small, very successful business is acquired / merged with another owned by a fairly large parent - the same parent that owns $WORK. I've had the pleasure of helping get their infrastructure, projects, strategy, and support into a more organized, best-practice, business-aligned state.

There has been a lot of learning from all sides up to this point. As I mentioned, most of the items needing attention are not surprising given the company was a grassroots effort that has grown incrementally over the past 10 years:
  • VOIP and computer data are traversing the same network
  • Some of the switching equipment is non-managed grade stuff
  • Switches are piggy-backed (did I mention they do CAD work?)
  • The primary IT person wears many hats and is severely overloaded
  • A new business software package is mid-implementation
  • The primary IT person is going on maternity leave within the next month
  • There is one server doing everything
  • Telco and Internet services are not cost effective / lined up with the business needs
  • No automation - Updates, software installs, PC imaging, etc
  • Remote access needs some help
  • Lots of processes need documentation / standardization
  • No tracking of support issues / history
The challenge for me has been uncovering all the details of the organization and infrastructure to ensure the decisions to set things right going forward are sensible. Thankfully I have a good team at $WORK, supportive management, and plenty of great people at both locations. This is not an open call for vendors - solutions to most of the above items have already been devised, and some implemented. The timetable has been the biggest challenge to meet.


Monday, October 12, 2009

Latest Happenings

The last several weeks have been an extremely busy time. I've taken on responsibility for a recently acquired sister company's technology and people. The site has a lot going on, to say the least - a "Perfect Storm" of needs, changes, growth, and projects would be a well-suited analogy. It's a very exciting opportunity for everyone involved that is going to take a lot of hard work - wish us luck!


Tuesday, September 15, 2009

Trial Run: Aprigo Ninja

Fellow blogger Matt Simmons had previously blogged about "Ninja" by Aprigo. After he brought it up in passing again, I thought it might warrant a try. Shortly after registering for the private beta, I was approved and given access to the less-than-5MB download.

I am not affiliated with Aprigo in any way - not a current customer, relative, or friend of any employee there. My motivation for trying the product was its visual representation and analysis of data. I was curious how the volumes of file server data, and the underlying equipment that I manage, were being utilized - beyond simple user quota figures.

Install was a breeze - the typical click, next, next, finish. At first run you have to set up a scan by pointing the application at a folder, share, or UNC path and providing a user-friendly name. The trial is limited to 500GB per scan and seems to only be able to run one scan at a time - interactively.

Lots of interesting results are available after a scan completes - most of them with 'drill down' capabilities to get at the finer-grained details. If a cost per GB/year is entered, a dollar figure is displayed next to each result. This is great for seeing how much it costs to keep the different categories of data lying around. The non-trial/beta version appears to offer trending reports, cloud storage analysis, and access management (very interesting!) as well. Aprigo Ninja is definitely a handy tool if you're getting started analyzing your storage use, Windows File Server Resource Manager isn't cutting it, or you want to analyze data on Samba shares. Give it a try - it's worth the small bit of time and effort.


Wednesday, August 19, 2009

Free vSphere Videos / Demos

I picked this up on twitter via @DellServerGeek: Mike Laverick has generously posted many vSphere how-to demos on his website. This is a significant body of work to make freely available. Mike goes through the entire process from "soup to nuts". For anyone curious about setting up a vSphere environment these are a must watch! If you like the series - consider procuring his accompanying book. Thanks Mike!


Tuesday, August 18, 2009

Free Storage for Your vSphere Lab!

VM Super Genius has a post with links to free storage solutions that can facilitate trying out some of the more advanced features in VMware ESX / vSphere. As noted, many of those features require shared storage (iSCSI, NFS, Fibre Channel SAN), which can be prohibitively expensive for many. The list offers some zero-cost appliances / applications that can provide iSCSI and/or NFS storage to virtual hosts. An excellent blog to follow if you are into virtualization.


Tuesday, July 21, 2009

Playing with DRBD (Replication)

Over the past 4+ years I've worked towards moving the infrastructure at work from isolated physical systems to a centralized storage (SAN) and virtualized paradigm. I'm happy to say we are definitely a "Virtualize First" and SAN shop now. A conservative comparison of separate physical systems with local storage to the current SAN/virtualized environment shows nearly 50% cost savings on equipment alone. The benefits of extra rack space, lower power and cooling costs, and added I/O and compute capacity are nice as well.

This evolution has a next logical progression. Now operating in a very centralized fashion, I can (more easily) begin examining solutions for replication. The base goal was to replicate the SAN to mitigate a 100% loss of the primary server room without requiring an enormous amount of time restoring from tape. Tapes are great snapshots of points in time, like photographs - but trying to rebuild your entire set of memories by looking through photographs would be very time consuming (and photographs degrade). Replication is not a replacement for good backups.

From a previous post, my SAN is Linux based, much akin to an Openfiler solution. It provides NFS storage to ESXi servers for VM images and iSCSI storage to the VM images that need data volumes. All physical drives (as presented by the RAID controllers) are sliced up using LVM. It has been humming along without issue since installation in February 2009.
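
For anyone unfamiliar with that layering, here is a minimal sketch of how a RAID volume might get sliced up - the device and volume names are invented for illustration, not my actual config:

    # Initialize a RAID volume (as presented by the controller) for LVM
    pvcreate /dev/sdb
    vgcreate vg_san /dev/sdb
    # One volume to export over NFS for VM images...
    lvcreate -L 400G -n vmstore vg_san
    mkfs.ext3 /dev/vg_san/vmstore
    # ...and one to present as a raw iSCSI LUN (no filesystem on the SAN side)
    lvcreate -L 100G -n data01 vg_san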

I have known of the existence of DRBD for several years but had not been in a position to utilize it. In short, DRBD is a block device you layer into your device chain, just like LVM. Its specialty is taking all the original block-level changes, keeping track of them, and sending them over to another system where they can be duplicated. The DRBD website is excellent - I highly suggest spending a few minutes there. DRBD has a few very nice traits that I'd like to highlight. First off, it is smart (and dumb?): it works at the block level and knows which bits may be out of sync, sending only those bits across the wire - it knows nothing of filesystems, files, etc. Secondly, it can be non-destructively added to existing data volumes; there's no need to backup/install/restore. DRBD is open source and freely available, but its creators and primary maintainers, Linbit, offer commercial support and have been around for a while. Linbit also offers a closed-source product, DRBD Proxy, that is designed for long-haul, high-latency (200ms) connections, or for replication situations with more than 2 nodes. If you want to replicate outside of a LAN using DRBD you'll need it. DRBD is also 'good friends' with Heartbeat for high availability / failover situations.
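
To make that layering concrete, here is roughly what a DRBD 8.x resource definition looks like - a hedged sketch where the hostnames, devices, and addresses are invented for illustration:

    cat >> /etc/drbd.conf <<'EOF'
    resource r0 {
      protocol C;                       # fully synchronous replication
      on san1 {
        device    /dev/drbd0;           # the new block device you use in place of the disk
        disk      /dev/vg_san/vmstore;  # the backing LVM volume
        address   192.168.10.1:7788;
        meta-disk internal;
      }
      on san2 {
        device    /dev/drbd0;
        disk      /dev/vg_san/vmstore;
        address   192.168.10.2:7788;
        meta-disk internal;
      }
    }
    EOF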

I set up a couple of CentOS x64-based VMs for testing. DRBD is available via the standard CentOS repositories, but it is naturally a bit behind the current version available directly from the DRBD website. The download is small, and if you have a basic compiler toolchain and the kernel-devel package, the build / install is quick and painless (make rpm). Did I already mention the DRBD website documentation is fantastic? Really, go read it. The configuration required to make DRBD work is quite minimal, although there are lots of options to fine-tune its operation. If your data's rate of change is very high, you will really want Gigabit connectivity between your nodes. What you'll find is your DRBD devices will only write about as fast as the data can get across the wire (assuming your drives can outrun wire speed). If you need more than wire speed and your drives are fast, take a look at the DRBD Proxy product. I spent a fair amount of time in different scenarios to see how DRBD would act and what to do as an admin in those situations. Like many things, with a little bit of time and reading, DRBD was easy to work with.
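
For reference, the whole build and first sync goes roughly like this - a sketch assuming a stock CentOS box, with the version number as a placeholder:

    # Basic toolchain plus kernel headers, then build installable RPMs
    yum install -y gcc make rpm-build kernel-devel
    tar xzf drbd-8.3.x.tar.gz && cd drbd-8.3.x
    make rpm
    rpm -ivh dist/RPMS/x86_64/drbd-*.rpm

    # On both nodes: write metadata and bring the resource up
    drbdadm create-md r0
    drbdadm up r0
    # On the chosen primary only: kick off the initial full sync
    drbdadm -- --overwrite-data-of-peer primary r0
    watch cat /proc/drbd    # monitor sync progress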

So what was I doing with all this again? The base goal was to replicate the SAN to mitigate a 100% loss of the server room. Since the SAN literally contains everything (VMs, SQL, Exchange databases, file shares), this was a fairly simple move that captures the entire datacenter onto another system. To backpedal a bit: my environment is modest in size by any modern measure, but still just as important. That size, the centralized storage, and a geographically large site made placing the replica system in a local but 'distant' (fiber-connected) building a perfect option.

The replica runs ESXi with a CentOS VM running DRBD to replicate the data. Why ESXi on the host? What this more or less creates is my datacenter-in-a-box, transportable if needed. The CentOS VM will provide NFS access back to the host ESXi for access to all the server VMs, which in turn will use iSCSI to access their data. ESXi virtual switches let me create matching, non-routed networks local to the replica host for the NFS and iSCSI traffic, meaning zero reconfiguration of the production server VMs. This isn't meant to be a powerhouse / failover solution. What it is, is a very cost-effective answer to a worst-case situation that hopefully never occurs. If the worst were to occur, some scripting magic transforms the replica to production status. When a new production environment is established, DRBD can be used to mirror the data back, enabling a transition with very little downtime.
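
The 'scripting magic' is nothing exotic. Here's a hedged sketch of the promotion steps - the resource, mount point, and service names are invented for illustration:

    #!/bin/bash
    # Promote the replica to production after losing the primary site
    drbdadm primary r0                   # take over as the DRBD primary
    mount /dev/drbd0 /exports/vmstore    # mount the replicated volume
    exportfs -ra                         # publish NFS exports to the local ESXi host
    service iscsi-target start           # bring up the iSCSI target for data LUNs
    # ...then power on the server VMs from the ESXi host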

Tired of reading yet?


Tuesday, May 5, 2009

Shared Hosting with Exchange 2003

This post is as much a placeholder for some information I found a while ago as it is something that may be useful to others. In late 2007 I was tasked with providing "hosted" email services for a sister company out of my employer's Exchange 2003 environment. OK, I thought, this should not be a big deal: the system has plenty of capacity and bandwidth, and Exchange 2003 has to have some mechanism to pull it off.

I started off by setting up a small environment in VMware: Windows domain, Exchange server, "internal" client, "Internet" firewall, and external client. All of these of course sat on private networks internal to the host server. I installed all the same applications to best mimic the real world - antivirus, anti-spam, archiving, etc.

So I had this environment all prepared with a few users at the "primary" email domain - now what? I turned to one of my favorite Exchange online resources, www.msexchange.org, and without too much searching found Part 1 and Part 2 of Shared Hosting with Exchange 2003. These are both well-written articles that helped the project come off without a hitch.

I know Exchange 2003 is on its way out (long gone?) for many people, but at the same time I'm sure there are just as many Exchange 2003 environments out there that will be around for a handful of years yet.

One key note to keep in mind: when planning to add valid 'foreign' user accounts to your Active Directory, consider ways to prevent those accounts from accessing other resources tied to the Active Directory domain!


Tuesday, April 21, 2009

VMware New Release Day

It's a big day in the virtualization world. VMware is publicly releasing vSphere 4 today. Jason Boche has posted some initial information this morning on his site and VMware's site is of course buzzing with all sorts of new information. VMware will be putting on a simulcast at 9:00AM PDT/Noon Eastern - registration required.

Some of the highlights are in the pricing document (thanks to Jason for the link), which is a good read. Of interest: pricing is based on per processor socket (with a defined limit of cores per socket); vCenter Server (centralized management) is still an add-on purchase (in most cases); and the SMB Essentials (& Plus) offerings appear to be nice packages for the small business looking to consolidate 5+ servers with centralized management and basic high availability.

I'll be curious to see more technical information as it is released - more hardware/feature support in ESXi 4, etc. Initially I don't see that XenServer being freely available has made much impact, but the day is young.


Wednesday, April 15, 2009

Exchange 2010 Beta Available

Michael over at sysadmin-network.com pointed out that the Exchange 2010 (Exchange 14) beta is now available. I haven't followed Exchange 14 development much beyond thinking it was going to be released around Q2 of 2010. The website lists general availability for Exchange 2010 as the second half of 2009. Being an Exchange shop (and the guy who gets to plan and do the upgrades), I decided to do some reading. You can find the Exchange 2010 Overview here, but the points that really stuck out to me, in my small Exchange environment, follow.

Up to a 50% reduction in Exchange database IOPS compared to Exchange 2007. If this is accurate and realistically attainable, it alone could justify an upgrade. Two significant things come from this:

1) Support more users per IOPS of storage (same storage $$, more users = good)

2) Use less expensive storage while maintaining service levels (lower storage $$ = good)

Up to 16 replicated copies of each mailbox database. If I've read this right, Exchange maintains multiple copies of each database, allowing for quick failover. So throw in a couple of JBOD storage arrays underneath instead of just one RAID 10/6/5 set, and you could have improved storage reliability for less cost if done properly.

Outlook Web Access improvements: a consistent cross-browser experience (IE7+, Firefox 3+, Safari 3+), improved search (THANK YOU), and more.

Archiving and retention add two features that previously were significant pains and/or required 3rd-party solutions to accomplish well. First off, the "Personal Archive". I think of this as a server-side PST (I'm not yet certain how it is pulled off). This appears to be a long overdue, integrated solution to user PSTs floating around on local / network drives. Searches can be done simultaneously against both Personal Archives and the "Inbox". So an organization can now potentially keep all email inside its Exchange environment and not need a 3rd-party application to effectively archive / vault / retain messages. Can Personal Archives be set as "check in but never check out"? The second item, which is a must if Personal Archives are to be a true archiving solution, is Multi-Mailbox Search. Authorized persons may perform searches across multiple mailboxes and personal archives. Note that "authorized persons" can be people outside of Exchange administrators!

Needless to say I’m quite interested to find time to kick the tires on Exchange 2010.

In addition to this being news specifically about Exchange Server, a few conjectures can be made if things happen as they have in the past. First, Windows Server 7/2010 will be released at or near the same time (hold off on those Server 2008 deployments!). And second, Office 2010 will show up right alongside.


Tuesday, April 7, 2009

OpenFire - GPL, Private 'Chat' server

There are lots of chat services out there, so when thinking about utilizing chat one has to start by asking: why want or need one's own? Chat has been around for a long time, and along with it the eternal questions of privacy, security, etc. Those questions are very motivating reasons to operate a chat / collaboration system internally. Large and/or geographically spread-out organizations can realize significant benefits by having chat / presence features available to users while maintaining a secure (and compliant) environment.

While looking around I ran across Openfire at igniterealtime.org. Openfire is an XMPP (Jabber) based, cross-platform solution. A few of the notable highlights: simple administration, support for secure communications, logging, cluster options, multiple authentication options (internal, LDAP, AD), and connectors to external chat systems.

Two of my favorite plugins are Fastpath and Fastpath Web. Fastpath allows for setting up chat queues - think of it as call centers for chat. A "Workgroup" is set up with a list of users that service incoming chats in round-robin fashion. Fastpath Web extends this outside of the chat client application: placing 2 lines of HTML code on a website allows for clientless chat - i.e. "Click Here for Live Support". Fastpath Web is designed to be distributed - it can be installed across multiple (Java) application servers to ensure redundancy and a balanced load if desired.

My hat's off to igniterealtime / Jive Software. Installation was simple (rpm, deb, msi, tgz, dmg) and configuration is handled through a well-designed web tool. We'll be investigating exactly how Openfire may be used at my employer. Any other Openfire users out there with stories to share?


Friday, March 27, 2009

COBOL Call for Help

Calling out to sysadmins for some COBOL / NCR Unix help. I inherited an older NCR WorldMark 4300 box at my current employer running a customized COBOL application that was replaced by SAP before I began there. The issue stems from the old regime that set up, programmed, and maintained the system: they left / retired right after the SAP cutover and before my time. Little / none of the legacy data was transferred into SAP (what a great idea!). So I'm left with a technology and hardware dinosaur that houses some valuable historic information (from both an intellectual and an audit standpoint).

I'm no COBOL guy. There, I said it. The system is documented well enough that I have a copy of the data files, sources, and file descriptors, but in the event the old hardware lets the smoke out, I'm still left in a very uncomfortable situation. I've been searching for COBOL database file converters with some limited success, and wanted to ping anyone out there for additional knowledge of the old environment / conversion tools (-> SQL Server) / migration paths.


Thursday, March 19, 2009

Online Backups

They've been around for several years and are one more means to an end in a sea of options - online backups. The term "backup" has many traits that must be defined by the business in order to architect and execute a backup successfully (RTO, RPO, copy, archive, CDP, onsite / offsite, etc.). I'm not looking to dig into those any more than necessary, but rather into the concept of online backups and why (or why not) anyone out there has decided to use one.

First, an assumption: few if any businesses have less than 50GB of data that needs to be backed up over a given week. One small file server with its operating system and data can quickly tally up 50GB of space. Space is cheap, right - let's use it! This data volume figure is important because it is going to determine how much an online backup service will cost. Bigger data = bigger cost (and possibly the need for additional Internet bandwidth; even more cost).
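To put hypothetical numbers on it: at an illustrative $2/GB/month, 50GB runs $100/month, or $1,200/year; grow to 500GB and that becomes $12,000/year before any bandwidth upgrades. The rates are invented, but the linear scaling is the point.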

My current employer used an online service when I began there. Over a short period it became quite clear that data was growing and wasn't going to stop, to the point where it was very uneconomical to continue with the service. Besides cost, all those terms by which a business defines backups were used to architect a new solution. I'm happy to say the current backup solution meets the business' backup needs and also had a very short ROI.

I was recently cold-called by an online backup provider and entertained the ensuing web presentation, and it piqued my curiosity. The service costs, considering today's volumes of data, are still somewhat... staggering. These services do generally offer supreme convenience: no hardware, near-zero human interaction during normal operation, and accessibility via the web.

I've come up with a few situations where an online service fits:
1) The data is so critical that even having a redundant (!= backup) DC is not sufficient
2) A local "IT" resource is not present / feasible
3) Onsite backup vs. online is not cost effective (I believe these cases are few)

What do you think??




Saturday, February 28, 2009

MS Exchange P2V

I've been prepping for this one for a long time. Anyone can vouch for the level of importance email systems hold in today's world. Several months back, during planning for 2009, my Exchange 2K3 server came up as hitting the end of its hardware warranty. While looking into extending it, I was not excited by the cost compared to going virtual (on ESXi).

My Exchange environment is simple - 1 server, fewer than 200 users, OWA, & OMA. While this is 'simple', it means everything rides on just one box. Sometime down the road it'll move toward a multi-server, split-role configuration.

Given the high importance of email and the relatively 'young' status of the physical-to-virtual process, I had concerns about using P2V on an Exchange server. After much searching and reading, largely through the VMware communities, I gained confidence that if done properly it would work well.

The Keys to Exchange P2V
  1. Testing - build a test Exchange Instance and utilize tools like JetStress
  2. Backups
  3. Consistent Exchange DB's
  4. Minimal amount of System activity during P2V
  5. Highly documented process/plan

The summary of my procedure was:
1) Full Backup
2) Cut off Net Access
3) Disable client related services
4) Incremental backup
5) Disable ALL non system critical services
6) Robocopy (local) data drive (Exchange databases) to iSCSI drive & disconnect
7) P2V the system drive then shutdown physical box
8) Bootup the VM, configure the VM network devices, and connect iSCSI drives
9) Startup services and check event logs after each service startup
10) Test "Internal" Clients
11) Open up external client access and Test
12) Full Production
13) Full Backup
14) Monitor, monitor, monitor

The entire process went smoothly and the new Exchange VM is functioning perfectly. Hopefully this is useful for those unsure about running Exchange Server in a virtual environment or considering an Exchange P2V.

Net positives of this operation include: less power draw, longer UPS standby capacity, less heat generation, freed-up rack space, better performance, simplified DR, reduced maintenance costs, no client setting changes, etc...

Drawbacks - (Yawn) a late night of lost sleep to catch up on.

Side note: OEM licenses are not transferable to different hardware - you must have a non-OEM license for the new virtual system if the physical box was OEM licensed.

Some helpful resources I'd found:
Article 1
Article 2
Article 3 (PDF)


Tuesday, February 24, 2009

Citrix XenServer - Now Free

A Slashdot article yesterday afternoon points out that Citrix has made their XenServer product freely available. Big news for Xen fans, or for those still on the bubble about which virtualization platform to adopt. The XenServer product includes many of the advanced options that VMware charges handsomely for in their VI3 product line (XenMotion, templates, centralized management, etc).

As the Slashdot article also mentions, this announcement comes on the cusp of VMware's flagship event - VMworld. VMware users should be biting their proverbial nails in anticipation of VMware's response (I know I am). Many have predicted the pending commoditization of hypervisors, followed by that of their advanced features, and I believe that time is here.


Saturday, February 21, 2009

Article: How VMware Writes I/O to Disk

Mike D has a blog entry pointing over to a KB article laying out how the various VMware products handle disk I/O (it depends on the product and host OS!). If you've ever been curious / concerned (you should be) about I/O from guest to hypervisor (to host OS) to hardware, and about consistency in the event of a physical crash, it's a great read. VMware product users on Linux / Mac hosts take note!


Tuesday, February 17, 2009

New Poll: Virtualization Platform of Choice

It's been quite some time since I've put up a poll - so here's a new one. There is a plethora of solutions when it comes to selecting a virtualization platform. I (and others) are interested in hearing what your selection for datacenter use is (or was), and why you chose that route or possibly changed solutions. I'm guessing VMware, one of the long-time incumbents in the x86 virtualization arena, is going to be favored (for various reasons).

I'll cast my vote and share my thoughts. I used VMware products of the workstation variety "many" years ago in order to run Windows and Linux concurrently on the same laptop. The early iterations I thought were OK for desktop/workstation use, but I couldn't see using them in the datacenter. Through a couple versions of the workstation product I observed significant improvements; then for a few years virtualization was off my radar for various reasons.

2007 planning begins, and as I look at replacing my aged equipment, virtualization comes back on the radar. The decision is made to 'crawl' into virtualization in the datacenter using the VMware Server 1.x product. Into 2008 and VMware Server 2 - virtualization is here to stay. In 2009, new storage, server capacity needs, and the availability of (free) ESXi have found me on the ESXi platform with an excellent commercial-hardware and OSS-software SAN. My environment is 'small', thus free ESXi is an excellent fit that allows for adding centralized management and other advanced features when the need arises.

My long history using VMware, seeing its progress, and the maturity of its current solutions made it my top selection. Other solutions are still in their infancy - some too far from the metal, some lacking in ease of management, corporate stability, or cost advantages.


Monday, February 16, 2009

NFS Performance Tips

Since I use NFS in my environment, I look around every now and then for tidbits on how to make it perform better. I ran across this article titled NFS for Clusters. I'm not running any "clusters" that utilize NFS storage, but the concept of a well-running NFS service applies everywhere. The article contains some key diagnostics to help determine where NFS may need tweaking (and how to interpret those diagnostics).
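
For a taste of the kind of diagnostics involved (these are the usual Linux starting points, not a verbatim excerpt from the article):

    nfsstat -c            # client-side RPC stats; high retrans = network or server trouble
    nfsstat -s            # server-side call breakdown
    grep th /proc/net/rpc/nfsd   # nfsd thread usage; all threads busy = raise the thread count
    # Mount options worth reviewing (values typical for the era, not prescriptive):
    mount -t nfs -o rsize=32768,wsize=32768,tcp,hard,intr server:/export /mnt/nfs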


Tuesday, February 3, 2009

Windows VMware Guest Time Sync KB

Accurate time is critical in computing environments - logging, compliance, auditability, etc. VMware has a nice, short KB article on configuring time sync for Windows guest systems, including the case of domain controllers. A good read considering the mischief an 'off' clock can cause.


2009 SAN Project

I wrote earlier about investigations into a SAN solution for 2009, and then a bit later about the eventual decision. The summary of those two posts is that numerous solutions exist and that commercial "boxed" solutions were not the right fit for my employer. My previous SAN was a Linux-based Thinkmate system loaded up with SATA drives and two 3ware controllers. It was great from a capacity standpoint, but despite any configuration adjustments, it could never provide adequate I/O to support more than 5-6 VMs along with file serving - over iSCSI. It is also at the end of its warranty, and Thinkmate does not offer warranty support past 3 years.

My new solution is as follows. Equipment is from Dell: a PE 2900 racked as the storage controller. Yes, it is a big box, but it offers 4 PCIe slots: (2) PERC 6/E 512MB controllers and (2) dual-port NICs. The box runs CentOS x64, NFS, and iSCSI. One MD1000 array shelf full of 15K SAS drives hosts VM images, and another shelf of 15K SAS serves various data volumes. VMs are shared out to the ESXi hosts using NFS. Use of NFS with VMware is mature and allows for backup of the VMs via the storage controller and some minor scripting - no special backup licenses, etc. VMs run software iSCSI initiators when they need access to data LUNs sliced from the MD1000 "data" array. LAN, NFS, and iSCSI are all separate networks, VLAN'd on 2 switches, with each server having 2 connections to each network (one on each switch).
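
The NFS side of that is refreshingly plain. A hedged sketch - the paths, network, and options below are illustrative, not my actual config:

    # Export the VM image volume to the ESXi hosts' NFS VLAN
    # (no_root_squash is needed because ESXi mounts NFS datastores as root)
    echo '/exports/vmstore 192.168.20.0/24(rw,no_root_squash,sync)' >> /etc/exports
    exportfs -ra             # re-read the export table
    showmount -e localhost   # verify what the ESXi hosts will see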

How does it work? Let's just say I'm pleased with the results and completely comfortable with the decision versus a boxed, all-in-one SAN solution. Link failover is beautiful, IOPS are fantastic, and storage capacity is up.


Tuesday, January 27, 2009

Crossroads - Linux TCP Load Balancer

I ran across Crossroads while reading an article on VDI (Virtual Desktop Infrastructure) and thought it'd be a good one to share. The quick summary: Crossroads is a software-based TCP load balancer. It works by directing clients to the Crossroads host system, which takes each request and passes it out to one of many backend systems in a manner that keeps the number of connections to each backend equal (balanced). Crossroads also keeps track of any backend that may be down, so requests are not doled out to the down system.

At first glance the balancing appears to be strictly based on the number of TCP connections to each backend system (not response-time based or other advanced feedback), which may seem simplistic compared to what some hardware-based balancers offer. Simple can be good though (so is free - GPL)!
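
Getting a balancer running looks to be about a one-liner. This is from my memory of the Crossroads 2.x docs, so treat the exact flags as an assumption:

    # Listen on port 8080, round-robin TCP connections across two backends
    xr --verbose --server tcp:0:8080 --backend web1:80 --backend web2:80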
