SQL 2012 Launch

First off, can I say how stoked I was about some of the new features of SQL 2012? I don’t usually geek out this much over software stuff, but I have to say there were several WOW factors at the launch yesterday.

The bad that nearly outweighed the good was that the launch event itself was completely terrible. I, and a thousand others on Twitter, spent a good 30 minutes just waiting for the login screen to come up. When it finally did, it still wouldn’t let you log in. Finally, about an hour after the scheduled start time, everything was up and running. Epic fail on Microsoft’s part.

The other letdown was that there was nothing live; it was all pre-recorded videos on various SQL topics. They were good videos, but nothing beats a real launch event where you see it all on the big screen and get a big grab-bag full of goodies. I attended the Windows 7 launch a few years back at the Denver Convention Center and it was pretty awesome.

So here’s a link to the launch event. The videos themselves are supposed to be available for 90 days and it’s free to register, so give them a whirl:

http://www.sqlserverlaunch.com/WW/Login

What new features did I like?

Resource Governor for multi-tenancy

  • Have multiple customer databases on your SQL server and want to meter them? Resource pools let you set minimum and maximum CPU and memory limits per pool. You can allow bursts when the server is otherwise idle, and if you bill customers by CPU you can use the maximum as a hard cap so they don’t get charged extra. (A rough sketch of the T-SQL is below.)
  • This also gives you much better billing options. The stats you pull from the server now line up with what you’re billing your customers, because they reflect what each customer actually used. How many times have you gone to pull CPU/memory stats and found them impossible to reconcile with actual customer usage?
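Here’s a minimal sketch of what that setup looks like in T-SQL. The pool, workload group, login name, and percentages are all made-up examples (not anything from the launch content), and the classifier function has to be created in master:

-- One pool per tenant: MAX_CPU_PERCENT allows bursting when the box is idle,
-- while CAP_CPU_PERCENT (new in SQL 2012) is a hard ceiling you can bill against
CREATE RESOURCE POOL TenantAPool
WITH (
    MIN_CPU_PERCENT = 10,
    MAX_CPU_PERCENT = 40,
    CAP_CPU_PERCENT = 50,
    MIN_MEMORY_PERCENT = 5,
    MAX_MEMORY_PERCENT = 40)
GO

CREATE WORKLOAD GROUP TenantAGroup
USING TenantAPool
GO

-- Classifier function routes each login to its tenant's workload group
CREATE FUNCTION dbo.fnTenantClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = 'default'
    IF SUSER_SNAME() = 'TenantALogin'   -- hypothetical login name
        SET @grp = 'TenantAGroup'
    RETURN @grp
END
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnTenantClassifier)
GO
ALTER RESOURCE GOVERNOR RECONFIGURE
GO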

AlwaysOn Availability Groups

  • I’m a Windows Clustering guy, but I’m also a fan of SQL Replication/Mirroring and the options those give you. AOA Groups take this to a whole new level.
  • The demo I saw still had the base Windows servers in a cluster (a couple local, 1 as a standby, and another at a different site). With AlwaysOn you can specify, for each database, which server is primary, which is an automatic failover target, which is a manual failover target, and whether the data is copied synchronously or asynchronously (see the T-SQL sketch after this list). I immediately saw HUGE benefits for a job at a prior company where we had geo-clusters spread across different locations.
  • The big caveat with AOA Groups is that your end-user application still has to be aware of the nodes, and its DSN has to be cognizant of all the options.
  • One other option they mentioned but didn’t go into: you can add a Listener to the configuration. It’s basically the same as the old Virtual IP/Network Name, so the end-user application doesn’t necessarily need to be aware of everything that’s going on.
  • You can also mark any of your secondaries as a read-only copy of the data and use it as a backup source, for reports, or anything like that. This is a brand-new capability, as previously you couldn’t touch the copy of a Mirrored database.
  • You also get what they call a “flexible failover policy,” where you specify which conditions can initiate an automatic failover. Again, I see huge benefits for builds I’ve done at prior clients where the network would sometimes hiccup, the DB would fail over, and no one knew until things got slow.
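For reference, here’s a rough T-SQL sketch of what defining a group like the one in the demo might look like. The server names, database name, endpoints, and listener IP are all made-up placeholders, and the secondaries would still need to be prepared and joined separately, so treat it as an illustration rather than a copy/paste script:

-- Hypothetical 3-replica group: two local synchronous/automatic replicas
-- (one readable for reports), plus an asynchronous/manual replica at a remote site
CREATE AVAILABILITY GROUP AGDemo
FOR DATABASE DemoDB
REPLICA ON
    N'SQLNODE1' WITH (
        ENDPOINT_URL = N'TCP://SQLNODE1.domain.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC),
    N'SQLNODE2' WITH (
        ENDPOINT_URL = N'TCP://SQLNODE2.domain.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = AUTOMATIC,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)),
    N'SQLNODE3' WITH (
        ENDPOINT_URL = N'TCP://SQLNODE3.domain.local:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL)
GO

-- Optional listener so applications connect to one name instead of chasing nodes
ALTER AVAILABILITY GROUP AGDemo
ADD LISTENER N'AGDemoList' (WITH IP ((N'192.168.100.50', N'255.255.255.0')), PORT = 1433)
GO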

Those are really only 2 of the major changes in SQL 2012, but they’re enough to make me download it and start playing!

SQL Mirroring with SQL 2008, part 2

So in my previous post I went through the reasons why you might want to use SQL mirroring and how you could sell the added expense to your management. In this post I’m going to cover the actual SQL commands for setting it up. There is a wizard you can walk through to do this (right-click your database, go to Properties, click Mirroring on the left side, and run the “Configure Security” wizard), but what’s the fun in that? Plus I’ve had issues before with the wizard not quite setting things up right.

Prerequisites: for the purposes of this article, let’s assume that you have 3 servers (physical or VM, doesn’t matter) with Windows 2008 R2 installed and joined to your AD domain. You also have SQL 2008 R2 installed with the latest service pack. All SQL instances are the default instance and run under the same Windows AD service account.

Let’s assume your servers are called SQLRepl01, SQLRepl02, and SQLRepl03. We’ll use SQLRepl01 as the Principal instance, SQLRepl02 as the Mirror, and SQLRepl03 as the Witness. You can actually use any of the 3 for any of these roles and for your next database you could change it completely.

For my test I created a database called ReplicationTest and populated it with data using this wonderful article: http://www.mitchelsellers.com/blogs/articletype/articleview/articleid/249/creating-random-sql-server-test-data.aspx

Okay, so now that all that’s done, let’s actually get started.

    1. First we need to create an endpoint on each SQL server so it can listen for the mirroring traffic. This needs to be run on all 3 SQL instances. 5022 is the default port, but you can specify any unused port you want; just make sure the servers can talk to each other on that port (i.e. it’s open in the Windows Firewall and any other firewalls in between). A quick sanity-check query follows the code below:
CREATE ENDPOINT Mirroring
    STATE=STARTED
    AS TCP (LISTENER_PORT=5022)
    FOR DATABASE_MIRRORING (ROLE=ALL)
GO
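If you want to double-check that the endpoint took on each instance, this quick query against sys.tcp_endpoints (my own sanity check, not a required step) shows its state and port:

-- Confirm the mirroring endpoint exists, is STARTED, and is on the expected port
SELECT name, state_desc, port
FROM sys.tcp_endpoints
WHERE type_desc = 'DATABASE_MIRRORING'
GO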
    2. Okay, so now that your endpoints are all created, we need to take a backup of the existing database and then restore it on your mirror instance. This creates, in essence, a “seed” database. Run the following backup commands on SQLRepl01:
USE MASTER
GO
BACKUP DATABASE ReplicationTest TO DISK='C:\TEST.BAK'
GO
BACKUP LOG ReplicationTest TO DISK='C:\TEST.LOG'
GO
    3. Browse to C:\ on SQLRepl01 and copy TEST.BAK and TEST.LOG to \\SQLRepl02\c$. Then log in to SQLRepl02 and run the following commands. Note that you have to restore WITH NORECOVERY to leave the database in a state that allows us to mirror it; if you don’t, you’ll need to restore the database again. This ensures the 2 databases are an exact mirror of each other.
RESTORE DATABASE ReplicationTest FROM DISK='C:\TEST.BAK' WITH NORECOVERY
GO
RESTORE LOG ReplicationTest FROM DISK='C:\TEST.LOG' WITH NORECOVERY
GO
    4. The restored database will show up in Object Explorer in a “Restoring…” state, which is exactly what we want.

    5. Okay, now that the databases are in place we can actually set up the mirror. IMPORTANT: run this command on the Mirror instance (i.e. SQLRepl02). If you changed the port from the default of 5022, change it in the code below as well.
ALTER DATABASE ReplicationTest
SET PARTNER= 'tcp://SQLRepl01:5022'
GO
    6. Run the following commands on the Principal server (i.e. SQLRepl01). The first one tells it that the second server is the Mirror, and the second one tells it that the third server is the Witness.
ALTER DATABASE ReplicationTest
SET PARTNER='tcp://SQLRepl02:5022'
GO

ALTER DATABASE ReplicationTest
SET WITNESS='tcp://SQLRepl03:5022'
GO

At this point your mirroring setup is complete. In Management Studio the database will show its role and state next to its name on both partners (something like “Principal, Synchronized” on SQLRepl01 and “Mirror, Synchronized / Restoring…” on SQLRepl02).
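You can also confirm the session from T-SQL. This is just my own quick check against the sys.database_mirroring catalog view, not a required step; run it on either partner:

-- Show the role, state, and safety level of every mirrored database on this instance
SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_safety_level_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL
GO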

You can test it by shutting down the Principal server and making sure the Mirror automatically takes over as the new Principal. Or you can add data on the Principal and take a snapshot of the Mirror to make sure the data is getting there (because you can’t access the mirror directly, a snapshot is the only way to query it). Note that even though mirroring works with SQL Standard, snapshots don’t, so if you’re using Standard the only way to view the data is to fail over:

CREATE DATABASE Test_Snapshot
ON (NAME = 'ReplicationTest', FILENAME = 'C:\Backups\Test_Snapshot.SNP')
   AS SNAPSHOT OF ReplicationTest
GO
SELECT * FROM Test_Snapshot
GO
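If you’re on Standard Edition (or just don’t want to power off a server), another way to test is a manual failover issued from the Principal. This is the standard mirroring failover command, shown here against our test database; after verifying, you can run it again from the new Principal to fail back:

-- Run on the current Principal; the Mirror becomes the new Principal
ALTER DATABASE ReplicationTest SET PARTNER FAILOVER
GO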

One of the real values of this is that you can set up 3 databases (or as many as you want) and have each one’s Principal on any of the 3 servers, with its Mirror and Witness on either of the other 2. Note that if you use the wizard to create all of this, it won’t set up the endpoint on the Witness instance correctly. Since we created our endpoints with ROLE=ALL, we can use each of the servers in any of the roles we want.

And that’s it! Let me know if you have any questions.

SQL Mirroring with SQL 2008, part 1

So you’ve made the decision that you want to use SQL Mirroring for your DR solution. You have several options for DR for SQL databases: Windows clustering, log shipping, or mirroring. My preference in most scenarios is to just use Windows Clustering, but there are reasons why you might go with another solution. Clustering is nice because it sits outside of SQL at the OS level, so it doesn’t require you to be an expert in (or even have a fair bit of knowledge of) SQL. But by definition you’re going to have some hardware sitting there idle, even if you go Active/Active (assuming you planned for 1 server going down and allocated your CPU/RAM appropriately). Mirroring, on the other hand, lets you be fully HA across sites: you can have the two servers sitting in different sites on different subnets and not have to worry about asking your network guys to stretch a subnet geographically, which they never want to do.

Note that SQL 2008 (and earlier versions, for that matter) cannot be used on a Windows 2008 geo-cluster whose nodes are on different subnets.

And in case you weren’t aware: HA = Highly Available, DR = Disaster Recovery. Separate, but not necessarily mutually exclusive technologies.

All that being said, let me give you a quick rundown on SQL Mirroring. I can’t say it better than Microsoft, with their gaggle of technical writers, so here’s an article from them on the benefits of Database Mirroring:

http://msdn.microsoft.com/en-us/library/ms189852.aspx

A few items of note that I always like to tell my clients:

  • You need a minimum of 3 instances of SQL for full redundancy on Mirroring (Principal, Mirror, Witness). You can technically do it in 2, but you lose the capability for automatic failover without the witness. Think of it as the 3rd vote if this were a cluster.
  • All SQL servers need to be members of an AD domain and run under the same service account.
  • There is a copy of the database on both the Principal and the Mirror.
  • The Witness allows for automatic failover in the event of a server failure.
  • Mirroring is configured per database, not per server. A single SQL server can host a mix of standalone databases and mirrored databases.
  • The database can be set to synchronous or asynchronous mode (a quick example of switching between the two follows this list)
    • Under asynchronous operation, the transactions commit without waiting for the mirror server to write the log to disk, which maximizes performance
    • Under synchronous operation, a transaction is committed on both partners, but at the cost of increased transaction latency
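As a quick illustration (using a made-up database name, and assuming the mirroring session from part 2 already exists), switching between the two modes is a single command run on the Principal:

-- Synchronous (high-safety) mode: commits wait for the mirror to harden the log
ALTER DATABASE YourDatabase SET PARTNER SAFETY FULL
GO

-- Asynchronous (high-performance) mode: commits don't wait (Enterprise Edition only)
ALTER DATABASE YourDatabase SET PARTNER SAFETY OFF
GO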

There are also a few selling points for Mirroring, if you’re having trouble justifying all of these servers to your management or your clients.

  • Since mirroring requires 3 servers to be active at all times anyway, you can run databases on all 3, some mirrored and some not. You can also have any combination of Principal/Mirror/Witness roles spread across the 3 servers (i.e. Active/Active/Active)
  • Mirroring can be done across multiple sites and subnets, thus allowing for full HA/DR

In my next article I’ll get into the nitty-gritty of how to actually set this up and show you all the SQL commands for this.

Creating/converting a MNS 2008 Cluster with EMC RecoverPoint (part 2)

In my previous post I covered the considerations you’d want to make when adding a 3rd node to your existing shared quorum cluster at a new site. Now that you’ve made the decision, are using EMC RecoverPoint with Cluster Enabler (RP/CE) to handle the data replication and manage the disks, and are converting your cluster to MNS, here are the steps to actually do it.

The EMC documentation is clear as mud on this. Literally you’ll go to the index where it says “Cluster Enabler install” and it’ll have step 1, then say “go to page 127”. You’ll go there and it’ll have step 2 and then will say “go back to page 76”… On and on. It’s actually so confusing that the consultant we had come from EMC to help answer our questions later called me and asked for my documentation so that he could use it at an installation at another client.

Please note that the steps below worked explicitly in my environment, but may need some changes to conform to the specifics of yours. Where noted there are different steps for 2003 and 2008 clusters. This assumes that your SAN group has already replicated all the appropriate LUNs with RecoverPoint and that you’ve base-installed any new nodes.

1) Install Windows Installer 4.5 (if not already installed)

2) Install CE on all host nodes in the cluster (including the 3rd node that you’ve already base installed and have not yet added to the cluster).

  • Copy both the *base.msi and *plugin.msi to the same directory on your target machine (i.e. C:\temp)
  • Run *base.msi, accept all the defaults. Reboot
  • Repeat for the existing nodes in the cluster, moving resources around as necessary. Note that at this point you’re only installing the files; you’re not actually enabling the cluster yet.

3) If your SAN group was nice enough to name the Consistency Group (the set of replicated LUNs on the SAN; all the disks in a given Windows Cluster Group must be in the same Consistency Group on the SAN side) the same as your Cluster Group, then you’re fine. Otherwise you need to rename the Windows Cluster Group to match the name of the RecoverPoint CG, and all of the disks in the CG need to match the disks in the Windows Cluster Group. Renaming a Cluster Group doesn’t affect anything.

4) Have your SAN group ensure that your disks are replicating successfully and in sync.

5) Convert your cluster to MNS

  • Windows 2008: Right click on the cluster and go to More Actions —> Configure Cluster Quorum Settings. Check the box for “Node Majority”. Click Finish thru the wizard
  • Windows 2003: Right click on the “Cluster Group”, select New —> Resource, and name it “MNS Resource”. Change the resource type to “Majority Node Set”. When done, bring the resource online. Then right click on the root name of the cluster, go to Properties, and select the Quorum tab. In the “Quorum Resource” drop-down, change it to the “MNS Resource” you just created.

6) Delete the old Quorum disk (Q:) from the cluster groups.

7) Assuming you have it, delete any Private networks from the cluster. You can’t use them anymore for cluster communications unless you’re extending 2 different subnets.

8) Have your SAN resource go into RP and enable image access on the 3rd node at the remote site.

9) Right-click the cluster and select Add Node. Add the server name and run through the validation wizard. You now have a 3 node MNS cluster.

10) Have your SAN resource go into RP and disable image access on the 3rd node. They also need to go into the RecoverPoint Management Application, select the Consistency Group, and in the Components pane select the Policy tab. In the Stretch Cluster Support area, check “Use RecoverPoint/CE” and ensure that “Group is managed by CE, RecoverPoint can only monitor” is selected.

  • This step is very important! If you have trouble later it’s likely that your SAN resource did not do something in this step correctly.

11) On each node of the cluster go to All Programs —> EMC —> Cluster Enabler —> RecoverPoint Access Settings

  • Type in the IP of the RPA (you’ll get this from your SAN resource). There should be one on both sides of the WAN. Use your local one on each side.
  • The default userid/password is plugin/plugin. I suggest having the SAN guys change the default and tell you what the new account is.

12) In the same Start Menu group, go to EMC Cluster Enabler Manager

  • Click Configure CE Cluster
  • You should be able to accept the defaults on the rest of the wizard. If you get an error it’s likely because of step 10 or 11.

13) At this point you’re technically done. You’ve got a 3 node MNS cluster with RP/CE. You should be able to fail your cluster groups between the 3 nodes without any issues. If you can’t bring the disks up on any of the other nodes, check step 10. You HAVE to have CE manage the cluster. CE is what’s installed on your cluster nodes and you now have a new resource in the cluster that all your disks are dependent on.

But of course before you can truly fail over to the 3rd node you need to install your application onto the new node. I can’t tell you those steps since I don’t know your app, but it should be the same process as when you did the 2nd node. Note that SQL installs vary by version in how you do the 3rd-node install: sometimes you have to slipstream Service Packs into your base SQL binaries and then just run setup, while older versions may require a command-line install with certain switches. Make sure you read the documentation!

Creating/converting a MNS 2008 Cluster with EMC RecoverPoint (part 1)

I was supporting a handful of Windows 2008 (non-R2) 2 node clusters with shared quorum disks. Some had SQL 2008 installed and some were just a vendor application that we supported. For the purposes of this article it doesn’t really matter which so we’ll assume we’re talking about SQL 2008.

So the existing configuration was a 2 node Active/Passive SQL 2008 Cluster on Windows 2008 using shared EMC storage and a quorum (Q:) to hold the vote. They also had a private NIC (hard-wired crossover cable) and a public NIC on the 192.168.100.0/24 subnet. This is a high-availability (HA) environment.

The company purchased a new datacenter and for disaster recovery (DR) purposes wanted to extend the cluster down to the new datacenter. This would give us a cluster with both HA and DR (i.e. able to recover almost immediately, and also able to come back up if the primary datacenter disappeared).

There are several decision points when it comes to how you would extend your cluster to the new site:

1. Will you need to “stretch” your public VLAN down to the new site (i.e. have the same VLAN on both sides of the WAN), or will you be able to put the new cluster node on a new subnet?

  • Windows 2008 supports cluster nodes on different subnets; 2003 doesn’t. That’s your first answer. The second is that some applications (including SQL 2008) only support clusters on a stretched subnet; they do NOT support nodes on different subnets.
  • The next question is whether your network person is willing/able to stretch the subnet. Everywhere I’ve worked, the first answer from the network team has been a resounding NO, but eventually you can wear them down!

2. How will you replicate your data to the new site? Microsoft does not inherently replicate the data for you; the cluster just expects it to be there.

  • There are several solutions for this, but in my case we were an EMC shop so we ended up using EMC RecoverPoint, which does block-level copies on the SAN over the WAN. Note that whatever you use, it has to be something that can copy the data either synchronously or asynchronously; it just depends on how quickly you need your cluster back up.
  • Also note that your cluster nodes at Site 1 (nodes 1 and 2) can STILL share their storage between them. The cluster nodes at the other sites will have their own copy of the storage (and can even share between multiple nodes there). That’s where your storage software (PowerPath, etc.) comes in handy.

3. How many nodes will you put in your new cluster and what quorum model will you choose?

  • This is a very contentious issue and everyone has their own opinion. As always, it depends.
  • If your data center (DC) model is that 1 DC is primary and the other is only for DR, then you want your primary DC to win the “vote” if the link between the 2 DCs goes down. You don’t want a voting storm, and you never want the secondary site to think it can win the vote.
  • As far as the vote is concerned, the primary site needs to be able to win a majority. In a 2 node shared quorum model it takes 2 votes to win: each node has a vote and the quorum disk has a vote, so whoever owns the quorum disk wins.
  • In a 3 node majority node set (MNS) model there is no shared quorum anymore, so it still takes 2 votes to win. If you have 2 votes at DC1 and 1 vote at DC2, DC2 will never take over as primary on its own (although you can certainly force it). If you lose the link, your primary site will still be okay, which is what you want.
  • So if you went with a 4 node MNS cluster, 2 nodes at each site, you can see that if you lose the link you’d need 3 votes for a majority… and neither side would EVER get them. In that case the cluster resources would all go offline, since no one can reach majority.
  • If you went with a 5 node MNS, you’d still need 3 votes, but then you have the quandary of where to put the 5th node. You can put it at your primary site and be fine, but then you have to ask what adding the last 2 nodes really buys you (ignoring the question of Active/Active clusters).
  • In the best-case scenario you conceivably have a THIRD DC and put the 4th or 5th node (or 12th, for that matter) at that 3rd site, with independent connections to both of the other data centers. Then its vote always counts. But you still have the problem of what happens if any or all of the datacenters become isolated, and what you want to happen when they do.
  • Your other option, rather than standing up a whole 5th node just to cast a vote, is to use what’s called a file share witness (FSW) on a file server: simply a file share that has the ability to cast a vote. Other than hosting no resources of its own, for quorum purposes it counts just like another node.

4. Your next question is how you want Windows to manage who owns the disks in the cluster and who gets to make them active.

  • This is usually dictated by your replication software. You always have the option to do it manually (i.e. bring the disks up by hand in a failover scenario). In our case we were using EMC RecoverPoint, so we used EMC Cluster Enabler to manage the disks from the OS side.

As you can see, there are lots of decisions to make when you want DR and need to create or convert clusters to add nodes and get full HA and DR. In my next post I’ll get into the specifics of how to convert a 2 node shared quorum cluster to a 3 node MNS cluster with EMC RecoverPoint and Cluster Enabler for management.