Creating/converting a MNS 2008 Cluster with EMC RecoverPoint (part 2)

In my previous post I covered the considerations involved in adding a 3rd node to your existing shared quorum cluster at a new site. Now that you’ve made the decision and are using EMC RecoverPoint with Cluster Enabler (RP/CE) to handle the data replication and disk management, and are converting your cluster to MNS, I’ve written up the steps to actually do this.

The EMC documentation is clear as mud on this. Literally you’ll go to the index where it says “Cluster Enabler install” and it’ll have step 1, then say “go to page 127”. You’ll go there and it’ll have step 2 and then will say “go back to page 76”… On and on. It’s actually so confusing that the consultant we had come from EMC to help answer our questions later called me and asked for my documentation so that he could use it at an installation at another client.

Please note that the steps below worked in my environment specifically, but may need some changes to conform to the specifics of yours. Where noted there are different steps for 2003 and 2008 clusters. This assumes that your SAN group has already replicated all the appropriate LUNs with RecoverPoint and that you’ve base-installed any new nodes.

1) Install Windows Installer 4.5 (if not already installed)

2) Install CE on all host nodes in the cluster (including the 3rd node that you’ve already base-installed and have not yet added to the cluster).

  • Copy both the *base.msi and *plugin.msi to the same directory on your target machine (e.g. C:\temp)
  • Run *base.msi and accept all the defaults, then reboot
  • Repeat for the existing nodes in the cluster, moving resources around as necessary. Note that at this point you’re only installing the files, you’re not actually enabling the cluster yet.
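If you have more than a couple of nodes, you can script the installs instead of clicking through them. Here’s a minimal Python sketch that pushes both packages through msiexec unattended. The file names are placeholders (the actual *base.msi and *plugin.msi names vary by CE version), and I’m assuming the plugin package goes on after the base one, since both get copied down:

```python
import subprocess

# Placeholder file names -- the actual *base.msi and *plugin.msi names
# vary by Cluster Enabler version, so substitute your own.
MSI_PACKAGES = [
    r"C:\temp\EMC_CE_base.msi",    # base package goes on first
    r"C:\temp\EMC_CE_plugin.msi",  # then the RecoverPoint plugin
]

for msi in MSI_PACKAGES:
    # /i = install, /qb = unattended with a basic progress bar,
    # /norestart = hold the reboot until both packages are in.
    rc = subprocess.run(["msiexec", "/i", msi, "/qb", "/norestart"]).returncode
    # 0 = success, 3010 = success but a reboot is required
    if rc not in (0, 3010):
        raise SystemExit(f"{msi} failed with exit code {rc}")

print("CE packages installed -- reboot this node before continuing.")
```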

3) If your SAN group was nice enough to name the Consistency Group (the set of replicated LUNs on the SAN; all the disks in a given Windows Cluster Group must be in the same Consistency Group on the SAN side) the same as your Cluster Group, then you’re fine. Otherwise you need to rename the Windows Cluster Group to match the name of the RP CG, as sketched below. The disks in the CG need to match the disks in the Windows Cluster Group exactly. Renaming a Cluster Group doesn’t affect anything.
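You can do the rename in Failover Cluster Management, or from the command line via cluster.exe. A minimal sketch, with made-up names for illustration (confirm the switch with “cluster group /?” on your build):

```python
import subprocess

# Hypothetical names for illustration -- use your real group and CG names.
old_name = "SQL Group"  # current Windows Cluster Group name
new_name = "SQL_CG"     # the RecoverPoint Consistency Group name

# cluster.exe group <name> /rename:<new name> renames a cluster group.
subprocess.run(
    ["cluster.exe", "group", old_name, f"/rename:{new_name}"],
    check=True,
)
```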

4) Have your SAN group ensure that your disks are replicating successfully and in sync.

5) Convert your cluster to MNS

  • Windows 2008: Right-click the cluster and go to More Actions —> Configure Cluster Quorum Settings. Select “Node Majority” and click through to Finish.
  • Windows 2003: Right-click the “Cluster Group”, select New —> Resource, and name it “MNS Resource”. Change the resource type to “Majority Node Set”. When done, bring the resource online. Then right-click the root name of the cluster, open Properties, and select the Quorum tab. In the “Quorum Resource” drop-down, change it to the “MNS Resource” you created. (If you prefer the command line, there’s a sketch after this list.)
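For the 2003 case, cluster.exe can do the same thing without the GUI. A rough sketch, assuming the default “Cluster Group” name; I’d verify each switch against “cluster /?” on your build before trusting it:

```python
import subprocess

def run(args):
    # run one cluster.exe call and fail loudly if it errors
    subprocess.run(args, check=True)

# Create the MNS resource in the default "Cluster Group" and bring it online.
run(["cluster.exe", "resource", "MNS Resource", "/create",
     "/group:Cluster Group", "/type:Majority Node Set"])
run(["cluster.exe", "resource", "MNS Resource", "/online"])

# Point the cluster quorum at the new MNS resource.
run(["cluster.exe", "/quorumresource:MNS Resource"])
```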

6) Delete the old Quorum disk (Q:) from the cluster groups.

7) Assuming you have one, delete any Private network from the cluster. A hard-wired crossover network only connects the original two nodes, so it can’t carry cluster communications anymore unless you stretch it down to the new site as well.

8) Have your SAN resource go into RP and enable image access on the replica copy at the remote site so the 3rd node can see its disks.

9) Right-click the cluster and select Add Node. Add the server name and run through the validation wizard. You now have a 3 node MNS cluster.
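A quick sanity check after the add: “cluster node /status” lists every node and its state, and all three should report Up. A crude sketch of that check (the string match is rough, but it catches the obvious failure):

```python
import subprocess

# List every node and its state; after the add you want all three Up.
out = subprocess.run(
    ["cluster.exe", "node", "/status"],
    capture_output=True, text=True, check=True,
).stdout
print(out)

if out.count(" Up") < 3:
    raise SystemExit("Not all 3 nodes are Up -- check the new node before continuing.")
```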

10) Have your SAN resource go into RP and disable image access on the 3rd node. They also need to go into the RecoverPoint Management Application, select the Consistency Group, and in the Components pane select the Policy tab. In the Stretch Cluster Support area, check Use RecoverPoint/CE and ensure that “Group is managed by CE, RecoverPoint can only monitor” is selected.

  • This step is very important! If you have trouble later it’s likely that your SAN resource did not do something in this step correctly.

11) On each node of the cluster go to All Programs —> EMC —> Cluster Enabler —> RecoverPoint Access Settings

  • Type in the IP of the RPA (you’ll get this from your SAN resource). There should be an RPA on each side of the WAN; on each node, use the local one.
  • The default userid/password is plugin/plugin. I suggest having the SAN guys change the default and tell you what the new credentials are.

12) In the same Start Menu group, go to EMC Cluster Enabler Manager

  • Click Configure CE Cluster
  • You should be able to accept the defaults on the rest of the wizard. If you get an error it’s likely because of step 10 or 11.

13) At this point you’re technically done. You’ve got a 3 node MNS cluster with RP/CE. You should be able to fail your cluster groups between the 3 nodes without any issues. If you can’t bring the disks up on one of the other nodes, check step 10. You HAVE to have CE manage the cluster. CE is what’s installed on your cluster nodes, and you now have a new resource in the cluster that all your disks are dependent on (you can verify the dependency as sketched below).
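To confirm the dependency chain, list each disk resource’s dependencies and look for the CE resource. A small sketch with a hypothetical resource name:

```python
import subprocess

# Hypothetical resource name -- substitute one of your actual disk resources.
disk_resource = "Disk R:"

# Each physical disk resource should now list the CE resource among its
# dependencies; if it doesn't, revisit step 10.
subprocess.run(
    ["cluster.exe", "resource", disk_resource, "/listdependencies"],
    check=True,
)
```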

But of course before you can truly fail over to the 3rd node you need to install your application onto the new node. I can’t tell you those steps since I don’t know your app, but they should be the same as when you did the 2nd node. Note that how you do the 3rd node install for SQL varies by version. Sometimes you have to slipstream Service Packs into your base SQL binaries and then just run setup; older versions may require a command line install with certain switches. Make sure you read the documentation!
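For SQL 2008 specifically, the slipstream mechanism I’m aware of is the /PCUSource switch that SP1 introduced: you extract the service pack to a folder and point RTM setup at it. A rough sketch, with hypothetical paths; verify against the service pack’s readme for your exact version:

```python
import subprocess

# Hypothetical paths -- substitute your actual RTM media and the folder
# you extracted the service pack into (servicepack.exe /x:C:\SQL2008SP1).
sql_setup = r"D:\setup.exe"    # SQL 2008 RTM media
pcu_path  = r"C:\SQL2008SP1"   # extracted service pack

# SQL Server 2008 SP1 added the /PCUSource switch: setup installs RTM
# merged with the service pack in a single pass on the new node.
subprocess.run([sql_setup, f"/PCUSource={pcu_path}"], check=True)
```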

Creating/converting a MNS 2008 Cluster with EMC RecoverPoint (part 1)

I was supporting a handful of Windows 2008 (non-R2) 2 node clusters with shared quorum disks. Some had SQL 2008 installed and some were just a vendor application that we supported. For the purposes of this article it doesn’t really matter which, so we’ll assume we’re talking about SQL 2008.

So the existing configuration was a 2 node Active/Passive SQL 2008 Cluster on Windows 2008 using shared EMC storage and a quorum (Q:) to hold the vote. They also had a private NIC (hard-wired crossover cable) and a public NIC on the 192.168.100.0/24 subnet. This is a high-availability (HA) environment.

The company purchased a new datacenter and for disaster recovery (DR) purposes wanted to extend the cluster down to the new datacenter. This would allow us to have a cluster with both HA and DR (i.e. able to recover almost immediately and also to come up in case the primary datacenter disappeared).

There are several decision points when it comes to how you would extend your cluster to the new site:

1. Will you need to “stretch” your public VLAN down to the new site (i.e. have the same VLAN on both sides of the WAN) or will you be able to put the new cluster node on a new subnet?

  • 2008 supports having cluster nodes on different subnets, 2003 doesn’t. That’s your first answer. The second answer is that some applications (including SQL 2008) do NOT support cluster nodes on different subnets, so for them the subnet has to be stretched.
  • The next question is whether your network person is willing/able to stretch the subnet. Everywhere I’ve worked the first answer from the network team is a resounding NO, but eventually you can wear them down!

2. How will you replicate your data to the new site? Microsoft does not inherently replicate the data for you; the cluster just expects it to be there.

  • There are several solutions for this, but in my case we were an EMC shop so we ended up using EMC RecoverPoint, which does block level copies on the SAN over the WAN. Note that whatever you use has to be able to copy the data either asynchronously or synchronously; which you choose depends on how much data loss you can tolerate and how quickly you want your cluster up.
  • Also note that your cluster nodes at Site 1 (nodes 1 and 2) can STILL share their storage between them. The cluster nodes at the other site will have their own copy of the storage (and can even share it between multiple nodes there). That’s where your storage software (PowerPath, etc.) comes in handy.

3. How many nodes will you put in your new cluster and what quorum model will you choose?

  • This is a very contentious issue and everyone has their own opinion. As always, it depends.
  • If your data center (DC) model is that 1 DC is primary and the other is only for DR, then you want your primary DC to win the “vote” if the link between the 2 DCs goes down. You don’t want there to be a voting storm or the secondary site to ever think it can win the vote.
  • As far as the vote is concerned, the primary site needs to be able to win a majority. In a 2 node shared quorum model there are 3 votes (each node has a vote and the quorum disk has a vote), so it takes 2 votes to win. So whoever owns the quorum disk wins the vote.
  • In a 3 node majority node set (MNS) model, there is no shared quorum anymore, but with 3 votes it still takes 2 to win. If you have 2 votes at DC1 and 1 vote at DC2, DC2 will never take primary on its own (although you can certainly force it). If you lose the link your primary site should still be okay, which is what you’d want.
  • So if you went with a 4 node MNS cluster, 2 nodes at each site, you can see that if you lose the link you’d need 3 votes for a majority… and you’d NEVER get it. In that case the cluster resources would all go offline, since no one can get a majority.
  • If you went with a 5 node MNS, you’d still need 3 votes, but then you have the quandary of where to put the 5th node. You can put it at your primary site and be fine, but then you have to ask what adding the last 2 nodes really buys you (ignore the question of Active/Active clusters).
  • In the best case scenario you conceivably have a THIRD DC and you put the 4th or 5th node (or 12th for that matter) at the 3rd site with independent connections to both of the other data centers. Then its vote always counts. But you still have to decide what should happen when any (or all) of the datacenters become isolated.
  • Your other option, rather than stand up a whole 5th node just to cast a vote, is what’s called a file share witness (FSW) on a file server: simply a file share that has the ability to cast a vote. It can’t own resources, but for quorum purposes it counts like any other node. There’s a quick sketch of the voting math after this list.
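The voting arithmetic above is easy to sanity check. Here’s a tiny Python sketch; nothing cluster specific, just the majority math for the layouts discussed (the scenario labels are mine):

```python
# A site keeps the cluster alive after losing the WAN link only if its
# surviving votes form a strict majority of ALL votes: floor(n/2) + 1.

def survives(site_votes: int, total_votes: int) -> bool:
    return site_votes > total_votes // 2

scenarios = [
    # (description, votes at DC1, votes at DC2)
    ("3 node MNS (2 + 1)",              2, 2),
][:0] + [
    ("3 node MNS (2 + 1)",              2, 1),
    ("4 node MNS (2 + 2)",              2, 2),
    ("5 node MNS (3 + 2)",              3, 2),
    ("4 node MNS + FSW at DC1 (3 + 2)", 3, 2),
]

for name, dc1, dc2 in scenarios:
    total = dc1 + dc2
    print(f"{name}: link down -> "
          f"DC1 {'stays up' if survives(dc1, total) else 'goes OFFLINE'}, "
          f"DC2 {'stays up' if survives(dc2, total) else 'goes OFFLINE'}")
```

Running it shows exactly the behavior described: the 2+2 layout takes everything offline on a link failure, while 2+1 and 3+2 keep the primary site up and the secondary down.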

4. Your next question is how you want Windows to manage who owns the disks in the cluster and who gets to make them active.

  • This is usually dictated by your replication software. You always have the option to do it manually (i.e. bring up the disks by hand in a failover scenario). In our case we were using EMC RecoverPoint, so we used EMC Cluster Enabler to manage the disks from the OS side.

As you can see there are lots of decisions to make when you want DR, and in how you create/convert clusters when you add nodes for full HA and DR. In my next post I’ll talk specifics on how to convert a 2 node shared quorum cluster to a 3 node MNS cluster with EMC RecoverPoint and Cluster Enabler for management.