File Share Woes

As sys admins, we all eventually hit the problem of inheriting file shares that were set up by years and years of SAs, each of whom felt their way was the best way to do it. I have a firm belief that any way you do it is fine, as long as you do it that way consistently. Eventually you will find a file share where someone gave users Full Control on the share or through NTFS, and those users modified the ownership of the files and took the admins away. Then those people left, and now neither you nor anyone else can access those files.

Or, even worse, you got hit by a virus that stripped all the permissions away.

As with everything, there are a lot of ways to fix this. You’ll first have to take ownership of the files and then reset the permissions back to default inheritance. You NEVER want to manage permissions on subfolders if you don’t have to.

The easy way is to use takeown and icacls:

takeown /f * /a /r <== take ownership of everything, recursively, and give it to the Administrators group
icacls * /inheritance:e /t <== re-enable permission inheritance on everything, recursively

That works great, until it doesn’t. Takeown has some limitations which you’ll eventually run into; you’ll likely start getting random memory or caching errors. To fix that, use SubInACL.

Be very careful with SubInACL. It’s a very powerful tool and you can cause yourself a world of hurt with it.

subinacl /file "PATH" /setowner=Administrators <== to claim ownership of the root
subinacl /subdirectories "PATH\*.*" /setowner=Administrators <== to claim ownership of everything else

icacls * /inheritance:e /t

That’s it!
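
If you want to wrap the whole cleanup in a script, here’s a rough PowerShell sketch of the same takeown/icacls approach, looping over each top-level folder so one bad directory doesn’t stop the entire run. The share path is just a placeholder; adjust it for your environment and run it from an elevated prompt.

# Rough sketch only: adjust $share for your environment and test somewhere harmless first.
$share = "D:\Shares\Data"

# Claim ownership of the root folder itself.
takeown /f $share /a

# Then claim ownership of each top-level folder, recursively, so a failure in one
# branch doesn't kill the whole run. /a gives ownership to the Administrators group,
# /r recurses, and /d y answers the "list folder" prompt automatically.
foreach ($folder in Get-ChildItem -Path $share -Directory) {
    takeown /f $folder.FullName /a /r /d y
}

# Once ownership is back, reset inheritance across the whole tree.
# /t recurses and /c keeps going past errors instead of stopping.
icacls "$share\*" /inheritance:e /t /c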


Get a list of workstations in Active Directory

There’s really not a lot to this script, but with what I was trying to do I thought this was actually kind of a cool trick.

What I was trying to do was get a list of all computers in AD with desktop operating systems. Yes, this is part of the migration series we’ve been doing, because as it turns out random people have random other desktops sitting under their desks. 🙂

In addition to just pulling the computer objects I also wanted a list of the OS installed and the IP address they had registered in DNS. As you know, AD does not store the IP by itself, so since I wanted this all saved to just one array it required a little bit of trickery.

The first thing we want to do is the actual get-qadcomputer from AD, which is fairly straightforward. Then we want to pipe that to a where clause and filter out the operating systems we don’t want (servers, NetApp ONTAP devices, and anything with no OS at all), and save the result to an array.

$b=get-qadcomputer -includedproperties operatingsystem,lastlogontimestamp|where-object {$_.operatingsystem -notlike '*Server*' -and $_.operatingsystem -notlike $Null -and $_.operatingsystem -notlike '*ontap*'}

This next line is where the actual “trick” comes in. I was tickled pink by how easy this was after all the issues I had adding the IP to the results above. The key is to set a new array (or reuse the same one) and just select the attributes I want: in this case name, operatingsystem, and lastlogontimestamp. I wanted the timestamp so that I could see the last time the machine had been on the network. The real key is that I also told it to select the attribute ipaddress. This attribute doesn’t actually exist on the objects in the array above, but because I’m selecting it here, it creates that property (empty for now) on each object in the new array.

$b=$b|select name,operatingsystem,lastlogontimestamp,ipaddress

Then in the next section I’m executing a flushdns at the OS level and creating a new array. The new array just keeps track of the machines we can’t resolve; for the purposes of this script it’s not really used. Then we go through each item in $b, reset a couple of variables, and make a .NET DNS call ([System.Net.Dns]) to look up the computer name. If it gets a result, it adds that IP address back into the original array. Then we just output the array.

& ipconfig /flushdns
$NoPing=@()

foreach ($item in $b){
	$c=$item.name
	$a=$null
	$a=[System.Net.Dns]::GetHostAddresses($c)
	if (!$a){$NoPing+=$c}
	else {
		$item.ipaddress=$a
	}
}

And here it is all together. You can do whatever you want with the results.

$erroractionpreference="silentlycontinue"
$b=get-qadcomputer -includedproperties operatingsystem,lastlogontimestamp|where-object {$_.operatingsystem -notlike '*Server*' -and $_.operatingsystem -notlike $Null -and $_.operatingsystem -notlike '*ontap*'}

$b=$b|select name,operatingsystem,lastlogontimestamp,ipaddress

& ipconfig /flushdns
$NoPing=@()

foreach ($item in $b){
	$c=$item.name
	$a=$null
	$a=[System.Net.Dns]::GetHostAddresses($c)
	if (!$a){$NoPing+=$c}
	else {
		$item.ipaddress=$a
	}
}

$b|ft name, operatingsystem,lastlogontimestamp,ipaddress
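
If you’d rather keep the results than just look at them on the screen, it’s easy enough to dump the same array to a CSV instead of Format-Table. The path here is just an example, and since the ipaddress property can hold more than one address, I join them into a single string first:

$b | Select-Object name, operatingsystem, lastlogontimestamp,
        @{ Name = 'ipaddress'; Expression = { $_.ipaddress -join ';' } } |
    Export-Csv -Path C:\temp\workstations.csv -NoTypeInformation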

Powershell: using datetime

So let’s assume that in previous code I’ve pulled a list of all users and groups and now want to run some code daily to get new users and groups. You don’t want to pull the whole list every day, as this would get a little unwieldy. But if you just want to know what in AD has changed in the past day or past week or whatever, this is a handy snippet of code that can get you there.

Datetime is always a little tricky to manipulate, but it’s very powerful if you want to do comparisons.


$dayEnd = [datetime]::Today
$dayStart = $dayEnd.AddDays(-1)
$results=@()
foreach ($item in $(get-distributiongroup -resultsize unlimited|where-object {$_.whencreated -ge $daystart -and $_.whencreated -lt $dayend})){$results+=$item.name}

  • $dayEnd = [datetime]::Today <== Here I’m just creating a new variable and setting it to a datetime. By specifying [datetime]::Today I’m telling it to just give me the month, day, and year, so it would set the variable to 02/26/2013 00:00:00, for example. Handy, since I don’t want it to include the time of NOW.
  • $dayStart = $dayEnd.AddDays(-1) <== Take the previous variable and subtract 1 day from it (i.e. if $dayEnd was 02/26/2013 00:00:00, $dayStart would be 02/25/2013 00:00:00). If you wanted to add 3 days to the starting variable you’d just do .AddDays(3); a minus sign subtracts. So if you wanted the last week you’d do .AddDays(-7). If you didn’t care about comparing the 2 dates you could also set the initial variable to $day=[datetime]::Today.AddDays(-7) to set $day to a week ago. There’s a quick sketch of the one-week version after this list.
  • $results=@() <== Create an empty array and assign it to a variable.
  • get-distributiongroup -resultsize unlimited|where-object {$_.whencreated -ge $daystart -and $_.whencreated -lt $dayend} <== I tried doing a -filter, but that didn’t work correctly, so instead I pipe into a where clause and key off whencreated being greater than or equal to $daystart and less than $dayend.
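
Putting those pieces together, here’s a quick sketch of the one-week version, pulling both new groups and new users. The get-qaduser call is the Quest counterpart to the get-qadcomputer cmdlet used earlier; swap in whatever AD cmdlets you actually have available.

# Anything created in the last 7 days. [datetime]::Today keeps the time at midnight,
# so the comparison isn't thrown off by the current time of day.
$dayStart = [datetime]::Today.AddDays(-7)

# New distribution groups, same where-object pattern as above.
$newGroups = get-distributiongroup -resultsize unlimited |
    where-object { $_.whencreated -ge $dayStart } |
    select -expandproperty name

# New user objects, assuming the Quest cmdlets are loaded.
$newUsers = get-qaduser -sizelimit 0 -includedproperties whencreated |
    where-object { $_.whencreated -ge $dayStart } |
    select -expandproperty name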

Quicktip: Powershell script to enable all computers on a domain

This is a very specific script I had to write, so it probably won’t ever apply to you in any situation, but it had some cool stuff in it so I thought I’d post it.

For a DR exercise I had to write a script that would go through the entire AD domain and enable any computer object that was in the input CSV file. It’s a long story for why this was necessary, but suffice it to say that part of our deployment exercise would randomly disable some computer objects (mainly Windows 2003 servers).

Most of the script is error-checking: handling the case where it can’t find the computer object at all, and skipping objects that are already enabled so it doesn’t touch them. The real meat is in the line:

set-adcomputer $compobj.name -enabled $true

Essentially what this line does is take the computer name read in from the CSV and set that computer object to enabled. It’s actually a super easy command that took me a while to find. But here it is for you!

The rest of the code:

import-module activedirectory
$erroractionpreference = "SilentlyContinue"
$computerobjects = import-csv c:\file.csv

foreach ($compobj in $computerobjects){
	$adcompobj = $null
	$adcompobj = get-adcomputer $compobj.name
	if ($adcompobj) {
		if (!$adcompobj.enabled) {
			set-adcomputer $compobj.name -enabled $true
			write-host $compobj.name " was disabled. Set to enabled." -foregroundcolor yellow
		}
		else {
			write-host $compobj.name " was already enabled. Ignoring." -foregroundcolor green
		}
	}
	else {
		write-host "There was an error connecting to " $compobj.name " or it doesn't exist on the domain." -foregroundcolor red
	}
}
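
One note: the script assumes c:\file.csv has a header row with a name column, since that’s what $compobj.name is reading. The computer names below are purely hypothetical; a file like this would work as input:

# Hypothetical example of what c:\file.csv should look like: a "name" header
# followed by one computer account per line.
@"
name
SERVER01
SERVER02
SERVER03
"@ | Set-Content -Path c:\file.csv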

Moving KMS servers

I was recently working with a client who was building a new AD forest and going through the slow and painful process of migrating all the objects and collapsing the old ones. That’s a pain for a different time.

When the new forest was first built out, they weren’t sure how quickly they’d be migrating to the new domain or how quickly they’d be building out servers. The first part of the process was to build out a few DCs and then go from there. Since KMS requires a minimum of 5 servers before it will activate any of them, it seemed prudent to point them at the old KMS server in the old forest so that the Windows activations wouldn’t expire.

This was easily done by creating a SRV record in DNS in the new domain pointing to the old server. That’s not the point of this article, but as a down-and-dirty summary, all you need to do is create a _VLMCS SRV record under _tcp in the domain’s zone, pointing to port 1688, i.e. _VLMCS._tcp.domain.com. The data in the record is just the FQDN or IP of your KMS server.
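
If you’d rather do it from the command line than the DNS console, something like the following should work on a 2012-or-newer DNS server with the DnsServer PowerShell module installed. The zone, DNS server, and KMS host names are all placeholders for your environment:

# Publish a KMS SRV record in the new domain's zone, pointing at the old KMS host.
# All of the names here are placeholders; adjust for your environment.
Add-DnsServerResourceRecord -Srv `
    -ComputerName "dns01.newdomain.com" `
    -ZoneName "newdomain.com" `
    -Name "_VLMCS._tcp" `
    -DomainName "kms01.olddomain.com" `
    -Priority 0 -Weight 0 -Port 1688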

All that being said, fast forward to the day that more than 5 servers existed in the new domain and they wanted to create a new KMS in the new domain and point the existing AD servers at it.

Creating the new KMS is easy. Just go to an elevated command prompt and type:

slmgr.vbs /ipk KEY 

where KEY is your purchased KMS key from Microsoft.

Then you’ll want to activate the KMS against Microsoft:

slmgr.vbs /ato 

You should get a nifty popup box that says Activation is successful.

And finally you’ll want to go into DNS and delete the manually created SRV record pointing back to your old KMS host. You should also see one in there for your new KMS.
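
Again, this can be scripted if you prefer. A rough sketch with the DnsServer module, assuming the same placeholder names as the earlier example, would look something like this:

# Find the manually created _VLMCS record that points at the OLD KMS host and remove it,
# leaving the record the new KMS host registered on its own. Names are placeholders;
# double-check what Get-DnsServerResourceRecord returns before piping it into the remove.
Get-DnsServerResourceRecord -ComputerName "dns01.newdomain.com" -ZoneName "newdomain.com" -Name "_VLMCS._tcp" -RRType Srv |
    Where-Object { $_.RecordData.DomainName -like "kms01.olddomain.com*" } |
    Remove-DnsServerResourceRecord -ComputerName "dns01.newdomain.com" -ZoneName "newdomain.com" -Force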

Now that you have a brand new KMS host, you need to go back and re-point your new AD servers at it. Should be as simple as doing a /ato again, right? Wrong!

Since the old KMS server is still up and accessible over the network, your AD servers will automatically use it to renew their licenses, and will continue to do so every 180 days when they renew. You need to teach them to forget that it exists.

Type the following commands in sequence. Wait for a popup after each one telling you that it was successful:

slmgr.vbs /ckhc <== disables caching of the KMS host
slmgr.vbs /ckms <== clears the KMS host name cache
ipconfig /flushdns <== flushes the DNS resolver cache
net stop sppsvc && net start sppsvc <== restarts the Software Protection Service, which is necessary for any of these changes to take effect
slmgr.vbs /ato <== attempt to activate the KMS client

Note that until you've hit 5 servers you should get an activation error to that effect. Once you hit 5 it'll go away.

slmgr.vbs /skhc <== re-enable KMS host caching and take us back to default settings
slmgr.vbs /dlv <== give us a verbose list of all the KMS licensing settings. Should show that we're activated and who we're activating against

Annoyingly, you won’t get a successful activation until you’ve hit your 6th server, but all the activations you’ve attempted will automatically work once you get there. If you want to force it you can go back and do an slmgr.vbs /ato again.

All these commands are possibly overkill, but it was the only way I was consistently able to get all the servers to activate against the new KMS.

I’m not a Project Manager

BUT!

I went to school at Purdue, and we did have to take a series of classes on the Systems Development Life Cycle. Back then it was just called SDLC and was an ongoing cycle of Analysis, Design, Implementation, Testing, and Evaluation, where Evaluation could cycle back into Analysis. It’s a logical thought process for how to develop and deploy new systems or projects.

Purdue was good in that they taught us how to build systems, how to write the code, how to support the hardware, how to do the networking, etc. It was a very well-rounded education, and in your last year or so you picked your direction and went with it. I picked Systems Administration, and this was before learning how to use Windows was a matter of course. (I was in the last year that had to learn COBOL as the programming language.) But they did give us a pretty good rundown of how to run projects and how to build things. Being logic-minded, this came very naturally to me and I’ve used it pretty extensively over the 14 years since.

I’ve run major projects at large companies doing implementations and migrations from existing technology, mainly through tools like MS Project and Visio. Everyone has their own way they use SDLC and one of the ways I’ve always used it (and seen it used) is to add a last step to go back and talk to your stakeholders periodically and go “Hey, you told us you wanted full redundancy for your mail servers. We’re doing X, Y, and Z and here’s how it does it. Does that meet the needs you were asking for?”.

NOT going back to the person who requested the project is a great way to ensure failure.

That being said, sometimes you have hard and fast requirements and your stakeholders truly don’t understand what they’re asking for. For example, one of my prior companies went through a rebranding process where we renamed the company and they wanted “everything” with the old name to be renamed. Of course, our internal AD domain was xyz.prv. You CAN technically rename a Windows domain, but who have you ever run into who thought that was a good idea?

We took the opportunity to go through a major re-architecture of the entire environment (globally): built out a whole new AD environment on what was then new technology (2008), put everyone into a new global Exchange environment (2010), and moved all servers and workstations into the new domain. In other words, we completely changed the scope and turned what the marketing people thought would be a 30-second change into a multi-year project. But it needed to be done, and after presenting our findings we were able to get management to agree.

That’s where MS Project comes in and you start tasking out everything that needs to be done: ensure no duplicate names exist, come up with new AD design, new Exchange design, migration plan, concurrent running plan, etc. etc. Or “scope-creep” as we like to say 🙂

I couldn’t have run this project without knowing the SDLC and how to manage a project. I later learned that the process of Project Management I was using is what people these days call Waterfall, and it’s what most people are doing.

 

I guess the point of this article is to introduce you to SDLC/Waterfall and Project Management, and that I think Sys Admins need to wear many hats. Sure, you can go on forever without knowing how to run a project, but if you expect to move up the tree of life, get promoted, and run things, you should probably know how to do things that are not traditionally part of your job. Another great example of this is scripting. You can say you’re not a programmer, but if you don’t know how to script or write Powershell, you’re probably hurting your career.

Next week, I’ll start talking about Agile and its use in Project Management. I’m not a fan, so it won’t be pretty.

Windows 8 & Multi-Tenancy AD

So apparently all the new hyped-up multi-tenant features of Windows 8 that Microsoft and all the blogs have been salivating over are all related to Hyper-V and how you can use it to do multi-tenancy (MT). Their version of MT is literally only talking about how you can have multiple VMs running in Hyper-V and how they can be completely separated on the same box (good article here). Apparently it does this with VLAN tagging, which I didn’t realize wasn’t already an option in Hyper-V. We’ve been doing this in ESX for years the exact same way.

What this means is that AD is still normal Windows Active Directory. Sure, there are some nice new bells and whistles, and a whole new, clunky interface, but underneath it’s still the same AD it always was. This makes me sad in this day of “Cloud,” when everyone has to have their own public and private cloud and the buzzwords like IaaS, SaaS, etc. are everywhere. Still no multi-tenant AD. And for most companies and customers out there who don’t want to build a whole new AD to support a half dozen servers, we’re still back to the old ways of setting custom AD permissions on OUs and objects. Mark me down as sad.

To sum up: multi-tenancy in Win 8 is just Hyper-V with vlan-tagging. Stuff we’ve been able to do for years, anyway.

Back to the drawing board for me!