The WinRM client received an HTTP server error status 500

Background

When attempting to add the Windows Backup server role to a new Windows 2012 R2 domain controller, it failed with a WinRM error.

Trying to run winrm quickconfig from the command line resulted in the error: "The WinRM client received an HTTP server error status 500".

Looking in the event log shows nothing special. Trying to restart the WinRM service (Windows Remote Management) in the Services console achieves nothing.
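For the record, the obvious first checks are quick to run from an elevated PowerShell prompt – a sketch, not a fix (restarting got me nowhere, and listing the listeners is simply worth a look):

Restart-Service WinRM
winrm enumerate winrm/config/listener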

If you read a bunch of forum posts about it, the error often seems to relate to Exchange 2010, and the fix is to (re)install the WinRM IIS Extension.

However, my DCs do not run Exchange or IIS, so this was a bit different.

Fix

In this instance, the only fix was to unconfigure and reconfigure the WinRM instance using sconfig.

In a command prompt, or PowerShell, run sconfig. The following console opens.

Note that option 4, Configure Remote Management, is showing Unknown.

In the console, type 4 and hit Enter.

==================================================================
                         Server Configuration
==================================================================
1) Domain/Workgroup:                    Domain:  example.com
2) Computer Name:                       DC1
3) Add Local Administrator
4) Configure Remote Management          Unknown
5) Windows Update Settings:             Manual
6) Download and Install Updates
7) Remote Desktop:                      Enabled (more secure clients only)

8) Network Settings
9) Date and Time
10) Help improve the product with CEIP  Not participating
11) Windows Activation

12) Log Off User
13) Restart Server
14) Shut Down Server
15) Exit to Command Line

Enter number to select an option: 4

Note that the next screen shows that Remote Management is Enabled. Enter 2 to disable Remote Management.

--------------------------------
  Configure Remote Management
--------------------------------

Current Status: Remote Management is enabled

1) Enable Remote Management
2) Disable Remote Management
3) Configure Server Response to Ping

4) Return to main menu

Enter selection: 2

Disabling Remote Management...

The next screen shows that Remote Management is disabled. Enter 1 to re-enable it.

--------------------------------
  Configure Remote Management
--------------------------------

Current Status: Remote Management is disabled

1) Enable Remote Management
2) Disable Remote Management
3) Configure Server Response to Ping

4) Return to main menu

Enter selection: 1

Enabling Remote Management...

The next screen shows Remote Management is enabled again. Enter 4 to return to the main SConfig menu.

--------------------------------
  Configure Remote Management
--------------------------------

Current Status: Remote Management is enabled

1) Enable Remote Management
2) Disable Remote Management
3) Configure Server Response to Ping

4) Return to main menu

Enter selection: 4

From there, you can either close the command window, or enter 15 to exit SConfig.

In order to double-check the config, you can run winrm quickconfig.

C:\>winrm quickconfig
WinRM service is already running on this machine.
WinRM is already set up for remote management on this computer.
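
If you want one more sanity check, Test-WSMan queries the WinRM listener directly and should return the service's protocol details without complaint:

PS C:\> Test-WSMan -ComputerName localhost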

Fixing NTFS Folder ACLs with Powershell

Background

I encountered a problem on a file server when trying to add a new permission to a group that needed to be applied to all our user home directory folders. The parent folder held the “default” permissions that should apply to all the child home directories, but for some reason, the majority of child directories were set to “not inherit” permissions. Simply applying the new ACL to the parent folder would not propagate down to each child.

At some point, the permissions had been changed to explicit ones on each home directory. Sure, I could script an ACL to apply to each directory, again, but I actually wanted to fix it so that inheritance worked for the default permissions we wanted to apply to each home dir, while ensuring the ACL that allowed the user to access their directory (the Modify ACL shown at the top in the image below) remained untouched.

As you can see in the image, all permissions are "not inherited", even though the bottom three ACLs are the ones defined on the parent directory. The Include inheritable permissions from this object's parent setting is not ticked.
faulty perms

However, simply ticking the "Include inheritable permissions…" box does not entirely fix things. See below! Sure, the parent ACLs (including the new one I just added) are now inherited from the parent directory. But three of them are duplicates of the explicit permissions that had been previously set! (You can't see the user Modify permission, since it's scrolled off the top of the window.)
fixed inheritance

This is actually what we want to achieve, below. Other than the user’s Modify ACL (not inherited), all the other perms should be inherited from the parent, with no duplicates.
fully fixed perms

Syntax

This script (see final section for the full script) does the following things:

  1. Enumerates all the folders beneath the parent directory [lines 1-4]. These have the same names as the actual user accounts. (This is important!)
  2. Goes through each folder and checks to see if the folder's permissions are already set to inherit – about 10% or so on my file server were correct, mostly the more recent folders. If the inherit flag is already set, process the next folder.
  3. If the folder is not set to inherit ACLs, set the inherit flag at the folder level only (all subfolders and files were correctly set to inherit from their parent already – I checked).
  4. Once the flag is set, clean up the duplicate explicit ACLs, leaving only the Modify permission for the actual user account that the folder belongs to in place. (A single-folder preview of this step is sketched below.)
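
Before turning the script loose on everything, it's worth previewing what the cleanup would remove on a single folder. A minimal sketch, assuming the same folder-name-equals-username convention (SMITH is just an example):

$folder = "F:\Users\SMITH"
$user = Split-Path $folder -Leaf
$acl = Get-Acl $folder
#list the explicit (non-inherited) ACEs that the cleanup pass would remove
$acl.Access |
    Where-Object { -not $_.IsInherited -and $_.IdentityReference.Value -notmatch $user } |
    Select-Object IdentityReference, FileSystemRights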

Lines 1-4 do the folder enumeration – I’m simply selecting the folders without any recursive behaviour.

In line 5, I simply set the user name to the same as the folder name. If you need some other logic to find the user name, that's where it should go.
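
For example, if the folder names carried a suffix that isn't part of the account name, a hypothetical one-liner in its place might be:

#hypothetical: folders named like SMITH.archive map to account SMITH
$user = ($F -split '\.')[0]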

Line 7 gets the folder ACLs and stores them in $acl. The "bad" ACLs look like this: AreAccessRulesProtected for the folder is set to True (no inheritance), and all of the user ACLs have IsInherited set to False.

PS> $acl = get-acl "F:\Users\SMITH"

PS> ($acl.AreAccessRulesProtected)
True

PS> $acl.access

FileSystemRights  : FullControl
AccessControlType : Allow
IdentityReference : BUILTIN\Administrators
IsInherited       : False
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

FileSystemRights  : Modify, Synchronize
AccessControlType : Allow
IdentityReference : DOMAIN\usershare_rw
IsInherited       : False
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

FileSystemRights  : Modify, Synchronize
AccessControlType : Allow
IdentityReference : DOMAIN\SMITH
IsInherited       : False
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

FileSystemRights  : ReadAndExecute, Synchronize
AccessControlType : Allow
IdentityReference : DOMAIN\special_reads
IsInherited       : False
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

So in line 9, we check to see if the AreAccessRulesProtected property is set to True.

If the property is True, then we construct a replacement ACL with protection turned off (you need both $isProtected set to False and $preserveInheritance set to True for it to work). Then we actually apply the replacement ACL in line 18. This has the effect of ticking the "Include inheritable permissions" box shown in the second picture above.

To clean up the pesky duplicate ACLs, I’ve nested that process inside another loop. As you can see above, each ACL has its own set of properties, so we need to loop through the whole set to make sure we capture each one to remove it if it’s not wanted, or keep the one that belongs to the actual user account DOMAIN\SMITH. Here I’m doing a simple match of the bare user account name (remember, same as the folder name) because I can’t be bothered constructing “Domain+username” – this kind of match is good enough here.

If the ACL doesn’t match the username, then I remove it from the $acl container.

Foreach($value in $access.identityReference.Value)  {
    #keep the user's own permission; remove the other non-inherited ACLs
    if ($value -notmatch $user) {
        $acl.RemoveAccessRule($access) |out-null
    }
}

In Line 37, we set the actual NTFS ACL to the “cleaned” version of $acl.

At that point, the folder ACLs are set correctly and will show up as below. All the ACLs except the user’s have “IsInherited” set to True. Also, the new ACL – the one that I had set on the parent folder that hadn’t propagated – is now visible on the folder.

PS> $acl = get-acl "F:\Users\JONES"
 
PS> $acl.access
 
FileSystemRights  : Modify, Synchronize
AccessControlType : Allow
IdentityReference : DOMAIN\JONES
IsInherited       : False
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None
 
FileSystemRights  : FullControl
AccessControlType : Allow
IdentityReference : DOMAIN\usershare_full
IsInherited       : True
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None
 
FileSystemRights  : ReadAndExecute, Synchronize
AccessControlType : Allow
IdentityReference : DOMAIN\special_reads
IsInherited       : True
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None
 
FileSystemRights  : Modify, Synchronize
AccessControlType : Allow
IdentityReference : DOMAIN\usershare_rw
IsInherited       : True
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None
 
FileSystemRights  : FullControl
AccessControlType : Allow
IdentityReference : BUILTIN\Administrators
IsInherited       : True
InheritanceFlags  : ContainerInherit, ObjectInherit
PropagationFlags  : None

Results

Because this was a quick and dirty job, I didn’t make any functions, nor did I output to a log file or anything “nice”.

The Write-Host lines simply output whether the "not inherited" property is True or False. When False, the script writes that and goes to the next directory. When True, it writes that, sets the property to False (with the result output), and then the ACLs are pruned (with that result also output). These changes are shown in green.

I can’t show any error statuses, because I had none over several thousand folders, and I couldn’t be bothered simulating one.

F:\Users\JONES 'not inherited' is False
F:\Users\BLOGGS 'not inherited' is False
F:\Users\SMITH 'not inherited' is True
F:\Users\SMITH 'not inherited' set to False
F:\Users\SMITH ACL cleanup successful

Script

$DirRoot = "F:\Users"
$Folders = Get-ChildItem $DirRoot | ?{ $_.PSIsContainer } | select -expandproperty Name
 
ForEach ($F in $Folders) {
    $user = $F
    $folder = "$DirRoot\$F"
    $acl = Get-Acl $folder
    write-host $folder "'not inherited' is" ($acl.areaccessrulesprotected)
    if ($acl.areaccessrulesprotected) {
        # folder is not set to inherit perms - set it to inherit
        $isProtected = $false
        $preserveInheritance = $true
        #construct the new ACL
        $acl.SetAccessRuleProtection($isProtected, $preserveInheritance) 
        try {
            #set the ACL
            Set-Acl -Path $folder -AclObject $acl -ea Stop
            write-host $folder "'not inherited' set to" ($acl.areaccessrulesprotected) -foregroundcolor Green
            #now clean up the extra "not inherited" permissions
            Foreach($access in $acl.access)   {
                if ($access.IsInherited) {
                    #skip inherited ACLs
                    Continue
                }
                else {
                    Foreach($value in $access.identityReference.Value)  {
                        #skip the user permission and remove non-inherited ACLs
                        if ($value -notmatch $user) {
                            #remove the unwanted explicit ACL
                            $acl.RemoveAccessRule($access) |out-null
                        }
                    } #end foreach value
                }
            } # end foreach access
            try {
                #set the "inheritance clean-up" ACL
                Set-Acl -path $folder -aclObject $acl -ea Stop
                write-host $folder "ACL cleanup successful" -foregroundcolor Green
            }
            catch {
                write-warning "$folder $_"
            }
        }
        catch {
            write-warning "$folder $_"
        }
    }
    else {
        #folder is set to inherit perms - hopefully they're correct!
        Continue
    }
}

The poor person’s way to finding the last patch date on multiple systems

Background

If you don't have SCCM reporting in your Windows environment (extremely long story) and you need to figure out, for audit purposes (and your own peace of mind), when a server last had its OS patched, here is a quick-and-dirty script to find the date the last patch was installed on each server (and, by extension, when it was last patched).

Naturally, this isn’t going to help you much if you’re in the habit of installing hotfixes arbitrarily. The assumption is that you regularly install the month’s patches across your fleet, and so the last patch date will reflect when this group of patches was applied.

This script is written for Powershell 4 due to the way I create a custom PS-Object to hold the output. If you’re happy creating custom PS objects with multiple fields yourself, then this could be reworked for PS 3 (which includes Get-Hotfix) or even PS 2.0 if you’re happy to bake your own WMI for the hotfix query. I have Windows 2012 R2 available to me, so I took the easy route.
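
For reference, the WMI class that Get-Hotfix wraps is Win32_QuickFixEngineering, so a PS 2.0 version of the query might start like this – a sketch, with SERVER01 as a placeholder name:

#hedged sketch of the raw WMI equivalent of Get-Hotfix
Get-WmiObject -Class Win32_QuickFixEngineering -ComputerName SERVER01 |
    Select-Object HotFixID, Description, InstalledOn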

Syntax

I’m going to describe the major components of the script.

Firstly, gathering the list of computers. We only want servers – if the OperatingSystem attribute of the computer account contains the word "server", that's what it is. There are instances where a computer account belongs to a cluster or a non-Windows system; I recommend putting something specific in the Description field of those non-server accounts that you can use as an exclusion filter (an example follows the next code block).

Here we're using Get-ADComputer to gather up all computer objects in the domain, filtering by the operating system and whether the computer account is enabled. If you want to target a specific OU, you can use -SearchBase to limit it. We're gathering OS information as well to help us target our patches, if needed.

$serverlist = get-adcomputer -Filter { Enabled -eq $true -and OperatingSystem -like "*Server*" } -Properties OperatingSystem | select Name,OperatingSystem
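
If you've tagged your non-server accounts in the Description field as suggested, a hedged variant of that gathering line could exclude them like this (the "cluster" tag is just an example):

$serverlist = get-adcomputer -Filter { Enabled -eq $true -and OperatingSystem -like "*Server*" } -Properties OperatingSystem,Description |
    where { $_.Description -notmatch "cluster" } |
    select Name,OperatingSystem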

Once we have that, we wrap nearly everything else into a ForEach loop. For each server in the list, we’re going to contact it to get the hotfix information, and then store the data. Also, as our loop kicks through each server, it does a quick Test-Connection to see if it responds to a ping. If not, onto the next!

foreach ($server in $serverlist) {
    $connex = Test-Connection $server.Name -Count 1 -TTL 100 -Quiet
    if ($connex) {
        [do lots of stuff here...]
    }
    else {
        #server can't be contacted
        $patch =  "offline"
    }
    [and a little bit here...]
}

The guts of the process is using Get-Hotfix to gather up the list of patches on each server. Sure, we could use WMI, but if you have Get-Hotfix available, why go through the angst? This will do a remote call to Windows 2003 servers as well, no problem. It may even do 2000 servers, but who knows? Enjoy the Windows 2003 functionality for a few more months..

You'll note this looks a bit convoluted. Basically, it's Get-Hotfix -computername [servername]. We're not doing PS-Remoting or anything like that (although if you're trying to gather info from non-domain servers, for example, you probably have to). However, at the same time, we want to sort by the InstalledOn date to readily find the last patch date.

UNFORTUNATELY, if you're not using a system with a US date format, you can't simply sort the hotfixes by date and expect it to work (InstalledOn comes back as a text date, not actually a date-time). So here we use a calculated property that converts that value into a proper [DateTime], which we can then sort by InstalledOn, using [-1] to select the last hotfix listed in the array (which will be the most recent one). From there, we can expand the actual install date of the last-installed hotfix.

I'm breaking up the lines a bit below to show the elements in the Get-Hotfix call:

(Get-HotFix -computername $using:server.Name | 
Select @{l="InstalledOn";e={[DateTime]$_.psbase.properties["installedon"].value}} | 
Sort InstalledOn)[-1] | 
Select -ExpandProperty InstalledOn

Now, the next tricky bit is the fact that if your remote server decides it's not going to respond to a WMI query, it takes 5 minutes (or longer – I gave up) to time out. Obviously, with hundreds of servers, your script will take a few days to execute if you leave every dead query to run to the timeout.

So what we've done is jam that Get-Hotfix command above into a variable called $code, and then we're going to run that as a background job with a ten-second timeout. Once the job returns with the output, the data (the date of the last patch) is stored in the $patch variable; if the job exceeds the timeout, we mark the server as "timeout" instead, so the previous server's date doesn't linger in $patch. Either way, the background job is then destroyed. ($timeoutSeconds is set earlier in the script.) Yay to the crew at StackOverflow for solving this little quandary.

$code = {
    (Get-HotFix -computername $using:server.Name | Select-Object @{l="InstalledOn";e={[DateTime]$_.psbase.properties["installedon"].value}} | Sort InstalledOn)[-1] | Select -ExpandProperty InstalledOn
}
$j = Start-Job -ScriptBlock $code
if (Wait-Job $j -Timeout $timeoutSeconds) {
    $patch = Receive-Job $j
}
else {
    #job timed out - don't carry over the previous server's date
    $patch = "timeout"
}
Remove-Job -force $j

The last major part to the script is creating a custom PS object to store all the data as it passes by in the loop. Fortunately, I found this really nice constructor for Powershell 4 and up. I am still having problems wrapping my head around this area (for some reason, I had no problem with Perl hashes, which are the same thing, but there were lots of nifty modules that helped shortcut the thing). So I stole this unashamedly to create the custom PS object with three fields to store the server name, last patch date, and operating system version.

What we’re also doing is printing out the data from each server in the console, and then adding the results for each server to the $Results array that’s gathering up everything from $hash as it passes through the ForEach loop. $hash naturally gets new data with each loop.

$hash=[ordered]@{
        Computername=$server.Name
        PatchDate=$patch
        OperatingSystem=$Server.OperatingSystem
    }
    [pscustomobject]$hash
    $results += [pscustomobject]$hash

At the very end, and outside the ForEach loop, we can output our results any way we see fit. Here, I’m outputting just the computer names to one file, and then I’m exporting the full results to a CSV file. With both, I’m sorting by computer name, but you can obviously sort by OS version or patch date.

#output just the computer names, sorted alphabetically
$results | sort computername | select computername | Out-file E:\Server_names.txt

#output the computer names, patch date and operating system, sorted by computer name
$results | sort computername | Export-Csv E:\Server_patches.csv -NoTypeInformation

Full script is shown below.

Result

This shows the on-screen output, but the file output all works as expected, of course. Note that the patch datetime always shows as 12 am. As it’s a datetime object, you can format it how you like.

patches

Script

$timeoutSeconds = 10
$results = @()

$serverlist = get-adcomputer -Filter { Enabled -eq $true -and OperatingSystem -like "*Server*" } -Properties OperatingSystem | select Name,OperatingSystem

foreach ($server in $serverlist) {
    #re-initialise $hash
    $hash = @{}
    $connex = Test-Connection $server.Name -Count 1 -TTL 100 -Quiet
    if ($connex) {
        $code = {
            (Get-HotFix -computername $using:server.Name | Select-Object @{l="InstalledOn";e={[DateTime]$_.psbase.properties["installedon"].value}}| Sort InstalledOn)[-1] | Select -ExpandProperty InstalledOn
         }
         $j = Start-Job -ScriptBlock $code
         if (Wait-Job $j -Timeout $timeoutSeconds) {
            $patch = Receive-Job $j
         }
         else {
            #job timed out - don't carry over the previous server's date
            $patch = "timeout"
         }
         Remove-Job -force $j
    }
    else {
        #server can't be contacted
        $patch =  "offline"
    }
    $hash=[ordered]@{
        Computername=$server.Name
        PatchDate=$patch
        OperatingSystem=$Server.OperatingSystem
    }
    [pscustomobject]$hash
    $results += [pscustomobject]$hash
}

#output just the computer names, sorted alphabetically
$results | sort computername | select computername | Out-file E:\Server_names.txt

#output the computer names, patch date and operating system, sorted by computer name
$results | sort computername | Export-Csv E:\Server_patches.csv -NoTypeInformation

LaTeX + Zotero + APA citations on 64-bit Windows

Background

I’ve always liked the idea of getting to grips with LaTeX as a document creation tool. Once the learning curve is over, I think it’ll be much better than something like Word to produce consistently-styled documentation, especially for the grad dip I’m currently doing. Although, since my grad dip is in Information Design, I find myself using Scribus most often – maybe LaTeX can do very flexible “designery” layouts, but I don’t think that’s its strength.

To get LaTeX to a state where it'll actually be useful to me, I needed it to work with Zotero, which I use as my citation manager, and to be able to correctly format citations in APA style. This turned out to be surprisingly tricky.

[U|Li]nix aficionados will tell you that the strength of the system is that you can chain (lots of) little utilities together to do the job. Big, monolithic pieces of software are not the Unixy way. While this sounds great in principle – I also like little utilities that work well and that can talk to each other – it can be like herding cats when you’re not a *nix guru. The LaTeX distributions, including on Windows, are in seeming accordance with this philosophy, with its attendant advantages and drawbacks.

The rig

The environment I have is a Windows 7 x64 laptop. Plenty of memory and the hard drive is an SSD, but this doesn’t really make much difference in the greater scheme of things.

The LaTeX distribution I have is MiKTeX, version 2.9 – I chose it because it's well-known on Windows, and because it has a great package manager. It installs the LaTeX core by default and is usable right away. But if you find you want to add a new function that's not in the core (these are called "packages"), MiKTeX will install it for you (if you're connected to the Internet) the first time you try to use it. I have the 64-bit version of MiKTeX (this becomes important later).

I’m currently using Texmaker as my LaTeX editor. I’m tempted to ditch that and start using Sublime Text instead (I use Sublime for other coding work, and I like its interface – and it has a nice LaTeX plugin). Texmaker might be better if you’re a “power user” of LaTeX, but I’m far from that. However, it does have a lot of good built-in controls for doing the document-generation steps.

Finally, of course, I’m using Zotero as my citation manager, inside Firefox.

The joy of LaTeX

I’m not going to describe the installation procedure or how the software works on the basic level. Like Linux, there are a lot of distributions out there, and there are a lot of packages out there that do the same thing in very different ways. LaTeX is currently evolving and new packages are being developed and enhanced all the time.

This results in the major drawback that basic tutorials may reference some really ancient version of LaTeX that does things in the most convoluted-possible way. Then you find out some package does it much more easily… oh, no, there are three packages that purport to do it much more easily. And then there are lots of discussions about which one is best, which for a beginner, aren’t that helpful.

Some might argue that it's best to follow the tutorial based on the old system, but if no-one is doing it that way any more in practice, then I think it's counter-productive. I certainly wouldn't advise anyone beginning Windows systems administration now to learn DOS in depth. Knowing how the command line works is admittedly an advantage (I have one multi-line DOS script on these pages – the first I've written in about 5 years – but it should pretty much be one-liners and Powershell these days).

Getting to the point, citations and bibliographies have sprouted a lot of variations over the years in the LaTeX world. What I’ve done is select the components that will most easily do what I need.

The result I’m aiming for

This is the document I want to produce. A bunch of citations that are done in different ways in the body of the document, which then are formatted in a nice APA-style bibliography at the end.

Please don’t use this as a valid example of correct APA. I may well have errors (I think my second in-line citation is a bit dodgy).

A LaTeX-formatted document with generated citations and bibliography

Example document

The code

The code below was used to produce the above document. I’ve added some extra line breaks to make it easier to read. Remember, LaTeX doesn’t care about line breaks anyway. I’ll explain the salient parts of the code in the next sections.

% Creating bibliography from Zotero
\documentclass{article}

\usepackage[utf8]{inputenc}
\usepackage[english, american]{babel}
\usepackage{csquotes}
\usepackage[style=apa,sortcites=true,sorting=nyt,backend=biber]{biblatex}
\DeclareLanguageMapping{english}{american-apa}
\addbibresource[location=remote]
{http://localhost:23119/better-bibtex/collection?/0/ABC123X.biblatex&exportCharset=utf8}

\begin{document}
\author{The Author}
\title{Bibliography with \LaTeX{} and Zotero}
\date{\today}
\maketitle

\section{Introduction}
\LaTeX{} is a document creation system developed by Leslie Lamport.

\section{Some citations}
Here are some citations that are listed in different ways.
\begin{itemize}
    \item An EDRMS is challenging to deploy, say \textcite{DiBiagio}. 
Almost as challenging as constructing a document in \LaTeX{}.
    \item Look at \citeauthor{Bijker1997}'s seminal article 
(\citeyear{Bijker1997}) on bicycles, Bakelite and bulbs.
    \item Technology may be deterministic. \autocite{Chandler}
    \item What on earth is "re-modernization"? \autocite{Latour2003}
\end{itemize}

\printbibliography
\end{document}

Getting the parts together

To get this working, we need to set up Zotero, then LaTeX. LaTeX needed some new packages, and the code had to be created in the right way.

Zotero

Setup

The core of the solution is the Better Bibtex (BB) add-on to Firefox. What it does is create an export of your Zotero citations in Bibtex format (used by LaTeX), which you can access dynamically. That is, you don’t need to manually export a Bibtex file and then copy it laboriously into your LaTeX document.

I’m not going to get into all BB’s features – also since I feel that the documentation is a bit incomplete for the beginner – but this is what I did to get it working for my purposes. The main things to know are:

  1. I’m just letting Better Bibtex auto-generate the citation codes that you’ll use in your LaTeX document. It’s easy to specify your own. Just remember that if they are auto-specified, they don’t currently sync online (if you’re using the online Zotero synch, and why wouldn’t you?).
  2. I’m doing a “pull export” of the citations via the http engine that Zotero provides. There’s some comment about using curl to do the export, but since it’s not explained, I’m not doing it.
  3. You need to install the add-on from the web page – it’s not on the Firefox addons site.

Once you’ve installed BB, open up Zotero and ensure you have the new BB add-on tab showing up in the Zotero preferences.
Screenshot of Better Bibtex options

Check the following preferences:

  • The Enable export by HTTP option is Enabled. Very important!
  • Set Export as Unicode to Always. This may not be required, because we’re going to specify Biblatex, but best to be sure.
  • I also set the option to export DOI only when a reference has both a DOI and a URL. This is because all my references that have both were gathered from an online subscription service (so I think the DOI only is sufficient). It’s up to you what you prefer, but I was editing all my citations before I noticed this setting.

Info for the next steps

To build the bibliography in the document, you need:

  • The URL for the Zotero collection you’re using
  • The cite codes you’ll use in-line within the document

What I did was arrange the citations I was going to use for the document in one folder, and then get that URL. You can naturally use the URL for your top-level folder, but if you have your citations arranged in sub-folders, you'll need to enable the recursive option in the Better Bibtex preferences.

  1. Right-click on your citations folder and select BibLatexURL.
    screenshot of right-click dialogue

  2. A dialogue box pops up with the URL. It looks like http://localhost:23119/better-bibtex/collection?/0/ABC123X.biblatex. Simply select it, and copy and paste it into your TeX document (anywhere will do for now). Note that the http://localhost:23119/better-bibtex/collection?/0 is the default URL for your full collection.
    URL pop-up box

  3. Paste the URL into your browser, and you should see your citations showing up as a page in Bibtex format. If not, maybe Zotero isn't running with http enabled.
    screenshot of bibtex citation in browser
  4. To find the citation keys that you'll need to use in-line in your document, open each citation and look for the Extra field. You'll see bibtex: followed by some text. This text after the colon is all you need for your citation key to that cite. In my example, it's simply Chandler.
    Screenshot showing location of cite field

That’s it for what you need to glean from Zotero!

Changing biblio generation

What we do for this document is change the bibliography-styling system from the default bibtex to the much newer biblatex.

biblatex and biber

For biblatex, this link summarises a bunch of reasons why it should be used, but the reasons I'm using it are that it has a lot of very good pre-baked citation formatting styles (such as APA!), AND it allows us to use our URL from our Zotero database (as a "remote" source).

As well, we're changing the back-end bibliography-generating engine from bibtex to biber. This supports UTF8 natively, and works well with biblatex and its "remote" feature. The reason the UTF8 is important is that one of my example citations has an em dash ("long dash") in the title, and I'd have to put it in manually using the old bibtex format. If you don't write 100% in ASCII, you want this!

biber doesn’t exist in 64-bit MiKTeX!

This won't apply if you're using 32-bit MiKTeX or maybe another LaTeX distribution. However, I was pulling my hair out when biber didn't work for me at first. And it was because it wasn't installed – biber is only a default package in the 32-bit version of MiKTeX. So you need to download the file and extract biber.exe to the $texfm$\miktex\bin\x64 folder ($texfm$ is the folder where MiKTeX is installed – mine's "C:\Program Files\MiKTeX 2.9"). This link's comments discuss why it's a bad idea to do that, and how to create another folder to stash "custom" software, but I could not be bothered. It works fine.

Once I installed the biber.exe file, I needed to open the MiKTeX Admin options and click the Refresh FNDB button.
MiKTeX database rebuild

Finally, since I’m using Texmaker to edit my TeX documents, I needed to update the command to build the bibliography – bibtex by default. In the Configure Texmaker options, I changed the bibtex command to biber % instead.
biber config in Texmaker

TeX document elements

Now that all the prep has been done, there are a few elements that need to go into the document to make it work. This is not going to describe the packages you use to create the base document, or text formatting, like \documentclass{article}, \begin{document} etc. I’m concentrating on the elements required to do the citations.

Including the packages

These all go into the document preamble.

The whole document is using Unicode/UTF8 for its character format to ensure special characters come across nicely, and that gets instantiated by the inputenc (INput ENCoding) package.

\usepackage[utf8]{inputenc}

As discussed at length, we're using the http interface into Zotero, and in the document preamble I've included the URL for the collection we copied from Zotero via \addbibresource. Note that \addbibresource is provided by biblatex, so it must come after the \usepackage line that loads biblatex. I've split the URL onto two lines to make it easier to read.

\addbibresource[location=remote]
{http://localhost:23119/better-bibtex/collection?/0/ABC123X.biblatex&exportCharset=utf8}

\addbibresource can be used to point to a file [location] on disk or via a network. The latter is what we’re doing here, even though the network in this instance is located on the same computer as the document. The Zotero collection URL must be in curly brackets {}.

Next, we say we're going to generate the bibliography using biblatex (and not the default Bibtex). All that's really needed is \usepackage{biblatex} – it will generate citations and a bibliography fine – but I want to use the APA style specifically. I also specify backend=biber in the options inside the square brackets []. Biblatex defaults to the biber backend to build the biblio, but you get annoying warnings in the output if you don't state it (and it may be that you change the default for some weird reason and forget).

\usepackage[style=apa,sortcites=true,sorting=nyt,backend=biber]{biblatex}

Finally, the biblatex-apa package (that provides the APA style) requires US English language settings to generate the citations properly. I don't write in US English, so all this stuff below is simply to get the language settings right to generate APA. babel is a nice package that does a whole bunch of language localisation stuff. csquotes helps with reading/creating nice quotation marks depending on the language setting and works well with babel. I can say that omitting these meant the bibliography came out riddled with errors; also, when you RTFM, the biblatex-apa documentation states that US English is required.

\usepackage[english, american]{babel}
\usepackage{csquotes}
\DeclareLanguageMapping{english}{american-apa}

Adding citations

In the body of my Tex document, you can see a bunch of different ways of creating citations from line 25 onwards. The default citation style for APA is the (Author, Year) format. This format is automatically generated by the \autocite command from biblatex-apa.

To insert one of your citations, you need the citation key from the appropriate Zotero reference (in the Extra field). We found the key for Chandler earlier, so we insert it as the key for the \autocite command, between the curly brackets {}. You place it exactly where you want your inline citation to appear in the document.

Technology may be deterministic. \autocite{Chandler}

This results in the text (n.d. is because the citation is not dated):

Technology may be deterministic. (Chandler, n.d.)

If I used \autocite{Latour2003} instead, I'd get the following text (although Latour didn't write it):

Technology may be deterministic. (Latour, 2003)

Remember that the "2003" has got nothing to do with the {Latour2003} citation key I used. I have more than one citation from Latour in my Zotero database, and so Better Bibtex added the year to make the key unique. If I had only one Latour citation, then {Latour} would most likely have been the key, and it would have generated exactly the same citation. biblatex reads the year part of the citation from the proper field in the bibtex URL file.

While I've shown a few ways of inserting citations, you should consult the examples in the APA style guide document at the biblatex-apa site for many more.

Generating the bibliography

Once you’ve inserted all the citations in your document, there is a terribly complicated process to insert the bibliography.

Go to the point in the document you want to insert the biblio and add the following code:

\printbibliography

That’s it. The APA-formatted bibliography will be generated in its entirety based on the citations that you inserted into your document, formatted correctly, and inserted at the specified location, complete with the heading Bibliography.

This really demonstrates the power of LaTeX – once you’ve got all the preamble sorted out, creating the document is very simple. Once you have a nice template that does what you need, you can recycle it over and over and over.

Compiling the document

Once all this has been done, set up, coded, and inserted, you can compile the document. The usual routine for compiling a document is to run the latex command to build the base document. In MiKTeX, I'm using pdflatex for this step.

If your LaTeX distribution doesn't auto-download packages for you, you should download and install the required packages listed above.

Multiple compilation steps

First and foremost: make sure Zotero is running and you can connect to the citation collection URL!

For creating a document with citations, you need to run pdflatex several times, plus the biber command to build the bibliography file. The sequence is as follows if you're running it from a command line and not a GUI tool (a concrete run is sketched after the list):

  1. Ensure Zotero is running!
  2. pdflatex document.tex – creates the base document, no citations (just question marks as placeholders)
  3. biber – to grab and format the bibliography
  4. pdflatex document.tex – format the document with citations correctly inserted
  5. pdflatex document.tex – format the document again to take care of any page numbering problems
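
From a plain command prompt, and assuming your file is called document.tex, the whole run looks like this (note that biber takes the base file name, without the extension):

pdflatex document.tex
biber document
pdflatex document.tex
pdflatex document.tex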

This is the same routine I used, although Texmaker conveniently lets you bundle the steps into its "Quick Build" command. Naturally, if you're not mucking around with citations or indexes, you just need to run pdflatex, and your document is done.

The first pdflatex command loads all the packages that have been specified in the document. With MiKTeX, if they’re not found, it will go ahead and download and install them from the Internet. To save some time, it’s easier to use the MiKTeX admin console to install them in advance. Also, I had a little trouble with the biblatex-apa package. It didn’t want to autoinstall when I ran the pdflatex command from Texmaker, but it was fine from the MiKTeX admin console.

Completed!

Once you’ve compiled the document a bunch of times with pdflatex and biber, you should have a really shiny PDF document with your citations inserted just so. Scroll back up to the shot of the PDF for evidence.

Complete checklist

Here are all the steps you need to get this working:

  1. Install Better Bibtex into Firefox
  2. Ensure the BB settings show Export as Unicode and Enable export by HTTP.
  3. Get the citations URL for your Zotero collection
  4. Ensure you can open the citations URL in your browser
  5. Install 32-bit biber if you have 64-bit MiKTeX.
  6. Make any required changes for biber instead of bibtex in your Tex editor if you compile documents there.
  7. In the Tex file, insert the \usepackage commands for the required components in the preamble
    • \usepackage[utf8]{inputenc}
    • \addbibresource[location=remote]{BetterBibtexURL}
    • \usepackage[style=apa,sortcites=true,sorting=nyt,backend=biber]{biblatex}
    • \usepackage[english, american]{babel} – required for APA style
    • \usepackage{csquotes} – required for APA style
    • \DeclareLanguageMapping{english}{american-apa} – required for APA style
  8. If your Tex compiler doesn’t auto-install the required packages above, install the packages.
  9. Insert citation codes where required into the document, using the citation key from the Extra field from each Zotero reference – \autocite{key}
  10. Insert \printbibliography where you want your bibliography to be printed
  11. Make sure Zotero is running and the references URL is available
  12. Compile the document using the sequence pdflatex -> biber -> pdflatex -> pdflatex. If you're using plain latex rather than pdflatex, you'll have an extra step to compile a PDF, if that's the desired format
  13. Completed!

Final tip: if you get very stuck, go to http://tex.stackexchange.com and search there for help. If there's no info that seems relevant, ask the gurus there for advice on solving your problems. Remember to ask sensible questions and show your working.


Gotcha for promoting new virtual DCs

Background

We were in the process of upgrading our AD 2003 domain to AD 2012 R2, and as part of that, we were promoting some new virtual Windows 2012 R2 DCs into the 2003 domain. All these virtual DCs were running on VMWare on hosts with plenty of capacity. They were running for at least a week prior to promotion with no problems.

The problem

Within a week of promoting our virtual DCs, one server in one site, and then another in a second site, spiked to 100% CPU. The thing flatlined, and you couldn't even log on to the server (via RDP or the VMWare console). The only fix was to reboot it.

We noticed that the problem exe was svchost.exe, and the actual problem service was WinRM. The spikes happened in the middle of the night, around the time the backups were kicked off.

Process monitor image showing 100% CPU due to WinRM

Not what you want to see on a dedicated domain controller!

We don't get monitoring alerts sent to us for these events (yes, we know it's not good), and so we only became aware of the problem when getting into the office in the morning and finding a DC had been effectively dead for 8-10 hours. (Or more – the first one died over a weekend.)

The troubleshooting

Naturally we checked each VM instance thoroughly – the affected virtuals were on different hosts, in different cities, with different network trunking and different storage. The VM hosts themselves had tons of memory, storage and CPU capacity. No ballooning or similar things were going on.

There was nothing much in the Windows event logs, other than a few mystery warnings about WinRM, the CPU and so on. Nothing obvious (to us) about what had caused WinRM to spike – no backup failures, no permissions problems, nothing.

We did log a call with Microsoft Premier Support, who could not apparently help us with diagnostics after the event, but who wanted us to gather logs while the event was going on. Frankly, we felt that gathering logs when we couldn’t even log onto the box while it was dying was a bit tricky, and I had even more doubts about doing remote logging when WinRM was fully occupied!

But as it happened, we never got a recurrence of the problem, due to the aid of Dr Google.

Identifying the culprit

Just before logging the PSS call, I’d found this TechNet blog on WinRM causing “timeouts or delays”. Nothing about complete system freezes, but it seemed to best reflect our issue. We couldn’t find anything else more specific to our issue. And to add more fun, we were on a short time-frame to remove our legacy DCs to do our domain and forest functional upgrades.

When you install a Windows 2012 R2 (or Windows 2008 SP2 (?) and up) server, it creates a local group, WinRMRemoteWMIUsers__. As the group name implies, if you want to connect to WinRM remotely on the server, you must be a member of this group (Administrators are by default).

Now, what happens to this local group when you promote a domain controller into the (legacy 2003) domain? It disappears – the new DC takes on the domain's BUILTIN groups, and its own local groups are blown away. In a 2003 domain, there's no BUILTIN group for WinRMRemoteWMIUsers__, so the new DC no longer has that group. So – possibly – when the system state backup was taking place (including the registry and AD database), WinRM was being called, the remote WMI group didn't exist, and the whole thing failed.

Unfortunately, we couldn't confirm any of this via the logs – there were no permissions failures that we could see, for example. Just the 100% spike within 5 minutes of certain backups commencing. One thing I didn't check was whether it was only the full backups. It was not every backup (thus making it more tricky to troubleshoot).

The fix

I simply created a new BUILTIN group in the domain called WinRMRemoteWMIUsers__ – do not forget the two underscores at the end – and the problem entirely went away. Never to be seen again.
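
If you'd rather script the group creation than click through AD Users and Computers, here's a minimal sketch with the AD module – the distinguished name assumes an example.com domain, so adjust for yours:

#create the missing group in the domain's Builtin container
New-ADGroup -Name "WinRMRemoteWMIUsers__" -GroupScope DomainLocal -GroupCategory Security -Path "CN=Builtin,DC=example,DC=com"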

We didn't bother confirming with Microsoft before putting in the group, or wait for another server to fail before trying the logging in the TechNet article (or the logs PSS subsequently suggested). Fortunately, it worked the first time.

I meanly left the PSS case open with Microsoft for another 2 weeks in case we had a recurrence – and exchanged some correspondence to see if we could confirm that as the root cause (no), and whether we'd taken the right action for the symptoms (yes).

According to the article, LSASS might also cause these issues, which would make sense from a DC point of view. But what we saw was WinRM.


What is a computer?

A computer is a shopping list?

Someone I know was wondering how to explain to a young person what the major parts of a computer do. How does a computer work, and how do the main parts – RAM, CPU, hard drive (or storage) – interact with each other? And it’s not just young people who don’t know!

The example I immediately thought of is when you use a shopping list to budget for your next trip to the store. The shopping list, your brain, and somewhere to write your “working out” of the sums all add up to the major parts of a computer.

What are all the parts?

Computer storage – a hard drive, USB stick, SD card – is like a shopping list. It stores the information – data¹ – that you want to keep. In this instance, the shopping list will probably have the following information:

  • the items you want
  • how many of each item
  • how much each item costs
  • the total cost of everything, once you’ve figured it out.

The computer CPU (central processing unit) is like your brain. It does all the sums (computations) required to multiply the number of items by how much they cost, and then add all those numbers together to figure out the total. Any sum like 1 + 1 = 2 is a computation. Everything that happens inside a computer happens by adding and subtracting numbers, even though what you are seeing is words or images. Before electronic computers were invented, people who did all the calculations for science and so on were often called "computers" – because their brains did all the computing.

RAM (random access memory) is like your working of the sums. It is a very short-term place to store calculations so that the CPU can get to them quickly. You might use a whiteboard or scrap of paper to write down the numbers you need to multiply (you just need the quantity and cost of each item here, not all their names as well). Once you've done the multiplication with your brain and written the results temporarily on the board, you can easily total up the cost of all your list items. Once you've got the final total, you'd write that on your shopping list (store the new data) for when you visit the ATM to take some money out, and then clean your workings off the board.

RAM in a computer constantly wipes temporary information off and writes it over with new data that it’s working on. It’s completely wiped when you turn the computer off. That’s why when you are writing something in a document, you must remember to SAVE it so that it gets stored properly on the hard drive before you wipe the temporary data by shutting down (what you type is mostly just in RAM until it is saved).

There are other parts to a computer like a motherboard, which the other parts slot into (like a Lego board), DVD player, graphics card, keyboard, the case and so on. But the main work of a computer is done with the CPU, RAM and storage. It doesn’t do anything more than what your brain does, combined with a way to store the information that you are working on, temporarily, or forever.

Why do we use computers, then?

Now that we know how simple computing is, and that our brains and a piece of paper can do anything a computer does in terms of calculation, why do we use them?

Firstly, let's look at how much information a computer can store. The usual measurement of how much information can be stored on a drive these days is in "gigabytes". A byte is about the amount of storage needed to store one letter of our alphabet – it takes about 5 million letters, or bytes, to store everything that Shakespeare wrote – 5 megabytes. That's about 1500 pages in a book (which would normally be three books). A gigabyte is 1024 megabytes – that's as many books as you could stack in the back of a pickup truck/ute. An average smartphone today has 64GB of storage, which means that it could store 39,000 big books. This is probably more than the number of books in a suburban library.

Next, there's how much a computer can work on or compute at once. My computer isn't very new, and it has two processors that each have a speed of 2.67 gigahertz. Gigahertz is basically a measure of how many billions of "bits" of data per second the processor can work on. A bit is 1/8th of a byte. Because my computer runs a "64-bit" operating system (basically, how many data "pipes" the processor can handle), it can work on over 340 billion bits of information per second.

This is in theory, but in the real world, it's not that simple. When we do multiplication, we learn that 5 x 5 = 25. We do it in one calculation. A computer has to basically add all the 5s together 5 times to get the same result. (Well, not quite like that – we add things in base-10, but computers use base-2 – it really only needs three addition operations to calculate 5 x 5.) What we do in one calculation, a computer tends to do more "expensively" – it needs to do more calculations to get the result. On the other hand, it takes me maybe half a second to remember 5 x 5 = 25. The fastest mental arithmetician can add up 15 3-digit numbers in 1.7 seconds – that's nearly 27 separate additions per second (although humans actually use shortcuts). My computer could theoretically calculate 5 x 5 = 25 (using 6 number bits and three additions) 18.9 billion times in a second. Still 700 million times faster than the mathematics genius.

Imagine how long it would take to read 39,000 big books. Some people might be able to do it in 106 years (reading one a day). But spending that many years on storing that information in your head – assuming you live that long and remember all of it – would be pretty tough. Once you start working on it, you're just one person. If you wanted to work as fast as my computer, you'd somehow need to share that information with 700 million maths geniuses, or else you would all have needed to spend that 106 years memorising everything first.²

So, to sum up, the main advantage of a computer is that it can store a lot of information, and it can work on everything extremely fast. And it doesn’t get bored. Also, depending on the software, a computer can show a lot of information in many different ways – numbers, words, pictures, sound. The drawback is that it can’t think, or decide for itself how valid the information it’s received is – it can only calculate with what it’s given. The only way it knows that 5 x 5 = 25 is because of the rules that have been programmed into it. If you give a computer the wrong information, or give it in the wrong way, it doesn’t know that, and you can end up getting the wrong results (or no result at all).

Because a computer works so fast with so much data, though, and we don't normally have 700 million maths geniuses nearby, we've found it worth the effort of programming lots of rules, developing software, and checking the accuracy of the information a computer is working with, in order to get the benefit of that speed in processing so much of the data that's out there in the world.

Notes

¹ A piece of data is just some kind of fact or measurement. For example, you learn that the price of a bar of chocolate has gone up by 50 cents. That's data about the chocolate price. Information is about using that data in context, or combining other data points together. With the higher price, the chocolate bar might now be the most expensive chocolate bar in the store. This might mean the store will sell fewer of them in future. Or, since the previous price of the chocolate bar was $2.50, you might decide that 50c more is too much extra to pay and that the makers are pretty cheeky to ask for that much extra.

² People might notice I’m using an example combining the storage capacity on a smartphone with the processing power of my computer. We could pretend that my computer only stores 64GB of data. But these examples are not supposed to be real – we’re just trying to get a very imprecise impression of the scale of the differences between how a computer works vs what one person can do.


Listing driver names

Background

This is in relation to an article describing how to "downgrade" your Windows 8 install to Windows 7. There are no shortcuts, people – it's a full reinstallation of Win 7 – but you do get to use your OEM Windows 8 licence key. Whoopee.

I'm just thankful my Win 7 laptop is holding up OK (thank you, Dell XPS 15) and I hope it'll continue to do so until Windows 9. Assuming Windows 9 is to Windows 8 what Windows 7 was to Vista.

Anyway, the article covers off obtaining the Windows 7 drivers for your hardware, but asserts there's no convenient way of listing the drivers from Device Manager without simply writing them down. Well, there is, in fact: Powershell.

Quick n easy script

  1. Open Powershell
  2. Run (copy and paste and hit Enter – this is all one line):
    Get-WmiObject Win32_PNPEntity | where-object {$_.manufacturer -ne $null -and $_.manufacturer -ne "Microsoft" -and $_.manufacturer -notlike "(Standard*" } | select-object caption, manufacturer | ft -auto
  3. Look at:
    All the lovely lines on your screen saying the following:

    Razer Gaming Device                                          Razer
    AMD High Definition Audio Device                             Advanced Micro Devices
    Bluetooth AV Source                                          Broadcom Corp.
    Ricoh SD/MMC Host Controller                                 Ricoh Company
    Intel(R) ICH9 Family USB Universal Host Controller - 2934    Intel

OK, there's still probably going to be a bit of faffing about trying to locate the drivers and figuring out which ones you must have, but by far the easiest method is generally to go to the manufacturer's website, search for the hardware model you own, and then find the Windows 7 drivers for that model.

Chipset (mine's Intel), storage (HDD or SSD) and networking are the most important ones. If those are all working and the system boots and gets network connectivity, the others can all be obtained individually.
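
And if you want to hang onto the list for the reinstall, the same pipeline can write to a file instead of the screen (the output path is just an example):

Get-WmiObject Win32_PNPEntity | where-object {$_.manufacturer -ne $null -and $_.manufacturer -ne "Microsoft" -and $_.manufacturer -notlike "(Standard*" } | select-object caption, manufacturer | ft -auto | Out-File C:\Temp\drivers.txt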


Fixing up my music library with Powershell

Background

I prefer to encode my music using the OGG format. The majority of my music (still, just) actually comes from my CDs, and I rip them into FLAC. To compress them a bit more for my media player, I convert the FLAC files to OGG, quality-level 7. One day I should really sit down with MP3 again and do a listen test – when MP3s all used to be encoded with a constant bitrate, they weren’t very efficient with size vs quality.

Anyway, one of the drawbacks of encoding an OGG file is that it doesn’t have the ability to embed the album art into the music tags. Or, it can theoretically, but there is no actual implementation for doing so (c’mon, people). While Media Monkey (which I use to manage my music library, and do the ripping and tagging) can cram it in somehow for its own purposes (probably using a custom field – I haven’t bothered looking), it doesn’t work on my portable media player, a Cowon J3. I should actually experiment with the FLAC files – which my media player also handles – since you can tag each track with art too. Another day. In any case, I would like to look at the album art while tracks are playing, and at the moment I can’t (unless it’s an MP3 with embedded art).

What my media player does do is display an image if there is a file called “cover.jpg” in the album folder. Following the usual convention, I rip my albums so there is a separate folder for each one. There’s also a way to “embed” images for single music files that might be in a “miscellaneous” folder of random non-album music – rename the JPG to the same name as the music file. For example, for “Skatalites-Addis Ababa.ogg”, ensure the associated album art image is called “Skatalites-Addis Ababa.jpg”.

The problem

Media Monkey will helpfully download album art for you (from sources like Google Images, etc) and plonk it into the associated album folder. The trouble is, these files are named “Awesome Album Name.jpg” rather than “cover.jpg”. My media player won’t associate files called “OK Computer.jpg” with the currently-playing track, alas.

All my music is in a folder structure like Music > Artist > Album1[,Album2][,Album3]. Naturally, there are hundreds of album subfolders, each containing music and its own randomly-named JPG. How to convert all those random names into one consistent name…

Solution

If anyone still needs an excuse to upgrade from XP to Windows 7 (or up), Powershell is it. (Shh, I know you can download it for XP – but that OS really is too crusty around the edges now.) Changing all those random jpg names to “cover.jpg” is a one-liner. Yay!

Assuming you’re running the script from the top of your music folder tree:

PS C:\Music> Get-ChildItem -Filter "*.jpg" -Recurse | Rename-Item -NewName {$_.name -replace $_.name,'cover.jpg'}

I was really unsure that $_.name (the variable holding the file name of each *.jpg file Get-ChildItem had found) would work as a pattern to be substituted, but there you go. I took the precaution of doing a -whatif first, and there was lots of lovely output like the following:

What if: Performing operation "Rename File" on Target "Item: 
C:\music\The Bombay Royale\Phone Baje Na Remix 12\PhoneBaje.jpg Destination: 
C:\music\The Bombay Royale\Phone Baje Na Remix 12\cover.jpg".
What if: Performing operation "Rename File" on Target "Item: 
C:\music\The Upbeats\Big Skeleton\Big Skeleton.jpg Destination: 
C:\music\The Upbeats\Big Skeleton\cover.jpg"

Executing the actual command completed silently, and all the files were renamed within about 5 seconds.
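As an aside, it turns out the -replace gymnastics aren’t strictly necessary – Rename-Item will happily take a plain string for -NewName, which also sidesteps album names containing regex special characters (brackets and so on):

PS C:\Music> Get-ChildItem -Filter "*.jpg" -Recurse | Rename-Item -NewName 'cover.jpg' -WhatIf

Either way, note that a folder holding more than one jpg will throw a rename collision on the second file, so it’s worth eyeballing the -whatif output first.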

The Scripting Guys, where I swiped the guts of this from, also explain how to do substring substitution in file names using the “-match” operator.

I like it when figuring out a quick command takes almost as little time as actually executing it. Doesn’t happen often!

Posted in Non-work | Leave a comment

Unable to ping localhost… and when unique SIDs (or Sysprep) matter

The problem

With much urgency, a project I was involved with required an install of Oracle XE and Apex on a server – any server – that we had supplied as part of a small environment built at a network-isolated site. We had a Windows 2008 R2 domain controller and two member servers, all sitting on a VMware ESX host.

I’m not fond of Oracle installations on Windows at the best of times, but Oracle XE is a small database engine, limited to 1 GB of memory usage. The installation went ok, with one minor stupidity, and we were all ready to start configuring the database. Great, open a SQL*Plus connection, enter connect sys as sysdba… and wah-wow, access denied. After a bit more digging, it seemed we couldn’t connect to the default database instance at all. Oracle services were up and running; reboot, same thing.

Yay troubleshooting

Hm. Run up netstat: port 1521 (the Oracle default) is listening, blah blah blah. Ping localhost… Unable to contact IP driver. General failure. Oh dear, haven’t seen that one before. Maybe something strange in the hosts file? Nope. Can’t ping 127.0.0.1 either.
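For the record, the listener check is a one-liner:

C:\> netstat -ano | findstr :1521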

uh-oh

Since the loopback address is a pretty fundamental part of IP networking, I start wondering if something is wrong with the server builds – I didn’t do them myself, but I had installed all the DC, file and print server roles with no problems. Pinging the server’s own IP is just fine, and network operations between the domain servers seem fine. To check, though, I jump onto the other member server – yup, localhost fails there too. I reset the TCP/IP stack on one server using netsh and reconfigure the IPv4 settings. This does not help. Jump onto the domain controller for another check. Oh-ho, ping localhost is just fine there.
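(The stack reset mentioned above is the standard incantation, run from an elevated prompt and followed by a reboot:

C:\> netsh int ip reset c:\resetlog.txt

No joy in this case, though.)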

So, what is the difference between logging on as a domain admin on a DC and logging on with the same account on a domain member server? Basically, on a DC the domain Administrator account is the local Administrator account as well. With this in mind, I log onto the member servers as the local Administrator, and the localhost pings work 100%.

SIDs and Sysprep

In the recesses of my mind, something stirs about machine SIDs and weird shit happening on servers when moving between domain and local accounts, particularly in the NT days. According to St Mark Russinovich (whose advice I really do rate), there is no requirement to change a machine SID for a modern Windows Server OS. However, duplicated SIDs do indicate that Sysprep wasn’t run on an imaged OS, which Michael Murgolo emphasised can cause weird problems. These VMs were cloned from a base image.

So I grab PsGetSid on one of the machines and run psgetsid \\* – sure enough, all the servers in the domain have the same SID. Oh dear.

Mark Russinovich has this to say about SIDs on domain controllers:

Every Domain has a unique Domain SID that’s the machine SID of the system that became the Domain’s first DC, and all machine SIDs for the Domain’s DCs match the Domain SID. So in some sense, that’s a case where machine SIDs do get referenced by other computers. That means that Domain member computers cannot have the same machine SID as that of the DCs and therefore Domain. However, like member computers, each DC also has a computer account in the Domain, and that’s the identity they have when they authenticate to remote systems.

The member servers do in fact have the same SID as the DC, but I don’t think we’ve run into a problem in this area. It’s more that the duplicate SIDs are a symptom of the no-Sysprep problem.

Fixing up

It would have been interesting to run a SID-changer over the OS to see what the effect was, but NewSID has been deprecated for years (as mentioned by Russinovich). So, I run Sysprep with the “generalise” option, reconfigure the base OS settings, and rejoin the machine to the domain.
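For the record, the generalise pass amounts to something like this (the standard Sysprep switches):

C:\Windows\System32\Sysprep> sysprep.exe /generalize /oobe /reboot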

Yippee, localhost ping now works.

How Sysprep makes the difference, I don’t know – the Microsoft KB article only alludes to the fact that it makes network changes, and I couldn’t find anything on Technet explaining exactly what.

With localhost now available, Oracle XE installation and logon to the default instance using the system account work just fine.

relief

Posted in NetServices, WindowsServer | Leave a comment

Sometimes it’s Old Skool

Background

In our organisation, we have two DHCP servers running on Windows 2003. It isn’t even an 80-20 configuration: one server is active, and the other has all its scopes disabled. Naturally, they are supposed to be kept in sync whenever changes are made on the primary.

The usefulness of a redundant server might be questioned, given that the secondary is sitting in the exact same room in the exact same rack. Anyway, while we are waiting for Godot – sorry, a new DHCP management tool – to be commissioned by another team, we maintain the legacy DHCP.

Our team needs to update DHCP reservations on pretty much a daily basis. I got a bit tired of connecting to both servers to update the entries, and decided to whip up a quick-and-dirty script to create entries on both simultaneously. Since the chances of getting Powershell installed on the servers are slim to none, I did it with a batch file calling netsh dhcp commands.

My batch file scripting is rather rustier than my Powershell, but it makes for a nice blast from the past occasionally – once you remember that when you set a variable, there must be no spaces around the = sign. Too much Powershelling, obviously.
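To illustrate the trap (the variable name here is invented):

set srv=dhcpsrv1
REM The line above is correct. The one below instead creates a variable literally named "srv " (trailing space) with a leading space in its value:
set srv = dhcpsrv1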

Syntax

One thing to note is that in our environment one DHCP scope often covers a number of Class C networks. Since I couldn’t figure out a way of “guessing” the scope from an IP – and compiling lists of literally scores of networks to parse through would take days – the script requires you to open the DHCP console and grab the base IP of the scope you’re adding the record to.

With that in mind, the syntax for the script is:

add-res.cmd [clientIP] [mac] [client] [scopeIP]

The MAC can be formatted as a plain 12-digit hex string, as blocks of two separated by dashes (as it would be if you copied it from the output of ipconfig /all), or (using the creative syntax of a certain network engineer) as blocks of four separated by dots. (I got sick of deleting the stupid dots as well.)

(Insert facepalm here. This is what happened the first time I saw d1c9.ef19.f1e9 in a job ticket. I am not exaggerating.)

[Client] is simply the computer name (label) you’re associating with the reservation.

When you run the script, it shows the data it will attempt to pass to netsh and pauses to allow time for a last sanity check. When you hit a key, the script connects to both servers and attempts to insert the record. The output of the netsh dhcp commands comes through to the console, so it’s clear whether it succeeded or failed on either server. It will fail if there is an existing entry for the MAC or IP address in the scope (it won’t blow away an existing entry).
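A hypothetical run looks like this (all values invented):

C:\scripts> add-res.cmd 10.1.2.53 d1c9.ef19.f1e9 PC1234 10.1.0.0

Adding IP 10.1.2.53 to scope 10.1.0.0 for client PC1234 with MAC d1c9ef19f1e9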

Code

While the MS DHCP console will accept a MAC address formatted as groups of two with dashes in between (and convert it to the simple hex string), netsh dhcp is a bit fussier – it accepts nothing other than hex characters. And of course nothing supports groups of four with dots in between. So most of the script is actually parsing the various possible inputs into something usable as the MAC address.

I didn’t bother getting all pedantic about checking that the MAC boiled down to 12 hex characters or that the IP address was in dotted octets. If someone can’t copy and paste either of those correctly from a job ticket, they deserve to have the script fail.

For the annoying dots, it was surprisingly difficult to find a nice way to search a variable (from the command arguments) for a particular character or string – and of course that particular character is normally reserved for other purposes in a batch file. There also ain’t no nice m// or -like operators in batch.
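Just for contrast, in Powershell the entire clean-up would be a one-liner (purely illustrative, since it’s not an option here):

PS C:\> 'd1c9.ef19.f1e9' -replace '[.\-]',''
d1c9ef19f1e9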

Early in the script, a variable is set to hold the “.” (the set y=. line), and the next line echoes our %2 variable – which holds the MAC address – through to the findstr command. If findstr finds the pesky dots, the MAC gets the hack-and-slash treatment (as do MACs entered with dashes).

The two for /f statements show a traditional way of splitting a string: treat the text as delimited and extract the tokens between the delimiters – here, the dots or the dashes. If there are no dashes, the string isn’t actually split, and only the %%a token has anything in it; the rest are null (and make no difference when it’s reassembled into the new %mac% variable).

:: Add DHCP reservations to both servers
:: Syntax: add-res.cmd [clientIP] [mac] [client] [scopeIP]

@echo off

if %1.==. GOTO Syntax
if %2.==. GOTO Syntax
if %3.==. GOTO Syntax
if %4.==. GOTO Syntax

REM Strip out any dots if MAC is divided into groups of 4 with dots e.g. d1c9.ef19.f1e9
set y=.
echo.%2 | findstr /c:%y% 1>nul

if not errorlevel 1 (
    for /f "tokens=1-3 delims=." %%a in ("%2") do set mac=%%a%%b%%c
) else (
    REM Strip out any dashes - ok if already no dashes in mac
    for /f "tokens=1-6 delims=-" %%a in ("%2") do set mac=%%a%%b%%c%%d%%e%%f
)
echo.
echo Adding IP %1 to scope %4 for client %3 with MAC %mac%
echo.
pause

for %%A in (1 2) do (
    echo.
    echo Connecting to \\dhcpsrv%%A
    netsh dhcp server \\dhcpsrv%%A scope %4 add reservedip %1 %mac% %3
)
goto :EOF

:Syntax
echo.
echo Syntax: add-res.cmd [clientIP] [mac] [client] [scopeIP]
echo.

As an addendum, if your DHCP scopes are nicely laid out with one scope per Class-C subnet, you can construct the scope from the client IP instead of entering it explicitly.

Insert the following line immediately after the closing bracket of the if/else block (just before the first echo.):

for /f "tokens=1-3 delims=." %%a in ("%4") do set scope=%%a.%%b.%%c.0

Then delete the if %4.==. GOTO Syntax check and substitute %scope% for %4 in the echo Adding IP line and the netsh dhcp server line – the syntax becomes add-res.cmd [clientIP] [mac] [client].
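Those two lines then end up reading:

echo Adding IP %1 to scope %scope% for client %3 with MAC %mac%

netsh dhcp server \\dhcpsrv%%A scope %scope% add reservedip %1 %mac% %3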

Posted in NetServices | Leave a comment