Convert Windows Server 2012 Datacenter Eval to Datacenter

If you were lucky enough to have used the Windows Server 2012 Datacenter evaluation edition, you might need to convert it to a regular release at some point. This is easily done with DISM from an elevated command prompt.

Step 1 is to figure out what edition you are currently running.

dism /online /Get-CurrentEdition

Step 2 is to find the editions you can convert to.

dism /online /Get-TargetEditions

Step 3 is to enter the license key for the full edition (the key below is just a placeholder). You'll be asked to reboot the server; say yes.

dism /online /Set-Edition:ServerDatacenter /ProductKey:ABCD-EFGH-IJKL-MNOP-QRST /AcceptEula
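
Once the server has rebooted, you can run the check from step 1 again to confirm the conversion:

dism /online /Get-CurrentEdition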

That’s all!

Why am I not a Fanboy?

Let me be very clear from the beginning: I support open source wholeheartedly. I support the GPL and BSD camps (the two main open source “points of view”, as I see them) equally. I believe in the best tool for the job, which is why I am willing to use closed source as well.

The computing environment I deal with includes Windows desktops and servers, Solaris servers, Linux servers, pfSense firewalls, Cisco PIX firewalls, and Cisco switches and routers. My primary workstation is a Windows desktop, which runs all kinds of open source applications, from PuTTY to OpenOffice, from Pidgin to Thunderbird, from Nmap to VirtualBox. At the same time, I use the Cisco VPN client, SQL Server Management Studio, TOAD, pcAnywhere, and other closed source, proprietary applications. I do this because I am looking for the right tool for the job.

The Linux/Unix servers I deal with serve various purposes: web server (Ubuntu), MySQL server (Ubuntu), load balancing (CentOS), firewall (pfSense), Wireshark (CentOS), and proprietary applications (Solaris/CentOS). Any development I need to do, I do in Python (in rare cases, C++).

To make things even more interesting, my notebook is a MacBook running Tiger. Again, I have a whole bunch of open source as well as closed source applications running on it. I surf the web with Firefox, listen to music in iTunes, watch movies in DVD Player, run virtual machines (Windows XP and CentOS 5.2) in VMware Fusion, try out Linux distributions by downloading them with Transmission, and so on.

I used Ubuntu exclusively as my desktop for about a year some time ago. I did not miss Windows at all. But now my requirements are changing, and so is my computing environment. I need all these applications to get things done, and if I feel more comfortable using them in a certain environment, why shouldn’t I?

Macs make excellent workstations. With the power of virtualization in hand, I can use Windows and Linux all at once. The same can be said of Windows and Linux themselves (except that Mac OS can't be run in a virtual machine, for now). I wanted to be a Windows fanboy before I tried Linux. Then I wanted to be a Linux fanboy before I tried the Mac. I wanted to be a Mac fanboy before I saw how good both Windows and Linux are becoming every day. There is so much interesting technology out there that this is the most interesting time in tech yet. If only Mac OS could be run in VirtualBox, VMware, or Parallels, we could have the best of all worlds: choose your favorite OS as your primary and run all the others virtually. Then you wouldn't have to be a fanboy either.

MS SQL Server 2000: Create an Off-site Standby Server

This post is an extension of the “Poor Man’s Log Shipping” post written earlier on this blog. To summarize: the main server uses log shipping to maintain a standby server on-site. It also creates a nightly full backup and periodic backups of the transaction logs. I wrote a batch script to FTP these log backups to an off-site location once a day.

The reason for this was to create an off-site standby server. With daily log backups already arriving at this site, I just needed to bring over one of the nightly full backups. First things first: how do I get a multi-gigabyte backup file across the Internet? Even over a high-speed connection it would take a long time. I considered several options: FTP, BitTorrent, HTTP, and others. What I liked about FTP was its simplicity. All I had to do was compress the backup file and send it. However, even after compression the file was several gigabytes, so I used 7-Zip to compress it and split the result into 100 MB chunks. Using the command-line FTP client built into Windows and its mput command, I transferred the data over a period of time. At the receiving end, I used 7-Zip again to reassemble and uncompress it.
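
For reference, here is roughly how that goes with 7-Zip's command-line client; the file names match the ones used in this post, and 7z.exe is assumed to be on the PATH:

:: compress the full backup and split it into 100 MB volumes
:: (this produces fullbackup.zip.001, fullbackup.zip.002, and so on)
7z a -tzip -v100m e:\fullbackup.zip e:\fulldbbackup.bak

:: at the receiving end, point 7-Zip at the first volume to reassemble and extract
7z x e:\fullbackup.zip.001

The numbered volumes are what the FTP client's mput command then transfers (mput fullbackup.zip.*).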

The next step was to restore the backups, starting with the full backup. I first tried the GUI through Enterprise Manager. All the required options are there, but I wanted to understand and control the process, so I abandoned the GUI and took the T-SQL approach instead. It was exactly what I was looking for. The best help came from Microsoft’s Transact-SQL Reference for RESTORE.

The first thing I needed was the names of the logical files in the full backup. This is necessary for reasons explained well in the Copying Databases article; in my case, the directory structure on this server differed from that of the server where the backup was created. But how to get those names? RESTORE FILELISTONLY does it. The actual command I used was this:

RESTORE FILELISTONLY FROM DISK = 'e:\fulldbbackup.bak';

It showed the logical file names along with the full paths where the physical files would be placed by default. Since the paths on this server differed from those recorded in the backup, I had to tell the restore exactly where to put the files. The restore script I used was:

RESTORE DATABASE mydbname
FROM DISK = 'e:\fulldbbackup.bak'
WITH
-- 'datafile' and 'logfile' are the logical names reported by RESTORE FILELISTONLY
MOVE 'datafile' TO 'e:\dbdata.mdf' ,
MOVE 'logfile' TO 'e:\dblogs.ldf' ,
-- STANDBY keeps the database read-only and able to accept further log restores
STANDBY = 'e:\undofile.dat' ,
-- report progress every 10 percent
STATS = 10

I used STANDBY because I needed to restore subsequent transaction log backups. The restore took some time but completed. Then I needed to restore the log backups. One thing to remember is that logs must be restored, or applied, in the sequence they were created. During my explorations I noticed that if you try to apply a log backup created before the full backup was taken, SQL Server raises an error and does not proceed. If you apply a backup that has already been applied, it processes the backup but reports that zero pages were processed. So in my experience, applying the wrong log backup by mistake will not destroy your database. Of course, I never skipped a log backup and applied the next one, so I cannot say what happens in that case. The script to restore one log backup is:

RESTORE LOG mydbname
FROM DISK = 'e:\logs\log1.trn'
WITH STANDBY = 'e:\log_undofile.dat',
STATS = 10
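
As an aside, if the standby copy ever needs to be brought online as a live, writable database (say, after losing the main server), the standby state is ended with a recovery-only restore, after which no further log backups can be applied:

RESTORE DATABASE mydbname WITH RECOVERY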

I had approximately two weeks’ worth of transaction log backups to restore, and I was not going to edit the file name in the script by hand for each one. I thought about writing a Python script to read the contents of the ‘e:\logs\’ directory and run the restore for each file in it. Being lazy, I looked for an easier way first, and did the following:

In Windows command line, I ran:

dir e:\logs\ > e:\allfiles.txt

This created a list of all the files in that directory, but in the usual dir output format. So I used my text editor’s find and replace to replace every space with a semicolon, and then repeatedly replaced double semicolons with single ones. Something like:

Find: ' ' (a single space, without the quotes)
Replace: ;

And then

Find: ;;
Replace: ;

I continued replacing multiple semicolons with a single one until there was just one after each field. I then opened this CSV-style file in Excel (actually OpenOffice.org Calc), copied the column with the file names, and saved it to a text file.
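
Incidentally, dir's bare-format switch produces just the file names, which would have skipped the delimiter cleanup entirely:

dir /b e:\logs\ > e:\allfiles.txt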

Again, find and replace came to the rescue. Each file was named like log1.trn, log2.trn, and so on. So I did this:

Find: log
Replace: RESTORE LOG mydbname FROM DISK = 'e:\logs\log

And another find and replace was:

Find: .trn
Replace: .trn' WITH STANDBY = 'e:\log_undofile.dat', STATS = 10

This created a file with scripts like so:

RESTORE LOG mydbname FROM DISK = 'e:\logs\log1.trn' WITH STANDBY = 'e:\log_undofile.dat', STATS = 10
RESTORE LOG mydbname FROM DISK = 'e:\logs\log2.trn' WITH STANDBY = 'e:\log_undofile.dat', STATS = 10

I saved this file, opened it in Query Analyzer, and ran the script. Since there were a lot of log backup files, the process took quite some time to finish.

After all the pending backups were restored, I got into the habit of collecting a week’s worth of log backups and applying them in the same fashion.

I know this is a very manual process, and a single Python script could do all of it for me. I intend to write such a script, but right now I do not have the time. Besides, this procedure was mainly a way for me to learn how to restore a full backup and then apply transaction log backups.
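
For the record, a minimal sketch of that script might look something like this. It only prints the RESTORE LOG statements for pasting into Query Analyzer; the directory, database name, and undo file are the ones used above:

# generate_restores.py: print a RESTORE LOG statement for every
# transaction log backup in e:\logs\, in numeric order
# (a plain text sort would put log10.trn before log2.trn)
import os
import re

LOG_DIR = r'e:\logs'
TEMPLATE = ("RESTORE LOG mydbname FROM DISK = '%s' "
            "WITH STANDBY = 'e:\\log_undofile.dat', STATS = 10")

def log_number(name):
    # pull the number out of a file name like 'log12.trn'
    match = re.search(r'(\d+)', name)
    return int(match.group(1)) if match else 0

trn_files = [f for f in os.listdir(LOG_DIR) if f.lower().endswith('.trn')]
for name in sorted(trn_files, key=log_number):
    print(TEMPLATE % os.path.join(LOG_DIR, name))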

Some good resources: (a) Using Standby Servers; (b) SQL Server 2000 Backup and Restore; (c) SQL Server 2000 Backup Types and Recovery Models.

Super Backup System

Having recently been burned by the failure of a NAS drive serving as the backup server, I had to design a new backup system. The budget wasn’t limitless, and I had to use mostly equipment that was already available. I decided to go with a cousin of the n-tier backup system.

All computers on the network are backed up to a dedicated backup server running Windows and configured with hardware RAID 1. Since all computers were already using Retrospect to back up to the NAS, I decided to keep it; instead of sending data to the NAS, they now send it to the new server.

But one more copy was needed to make sure the data was safe. Another Windows machine was chosen to act as the second level of backup: a simple one, without RAID, but with loads of disk space. The main backup server pushes all of its data to this second (standby) server. Instead of Retrospect, I installed Microsoft’s SyncToy on the main backup server. SyncToy was very simple to set up, and with Scheduled Tasks it was configured to copy all changed files and folders to the standby in the middle of the night.
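
For reference, recent versions of SyncToy ship with a command-line runner, SyncToyCmd.exe, whose -R switch runs all configured folder pairs; this is what the scheduled task executes (the install path below is illustrative):

"C:\Program Files\SyncToy 2.1\SyncToyCmd.exe" -R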

Each computer is backed up completely once a week, and highly important data is backed up nightly to the main server. That server then backs itself up to the second machine.

This system provides the following benefits: if any of the computers fail, their data is stored in a safe place; with RAID 1, if one drive fails we still have its mirror; and if for some reason the main server fails completely, we have another copy of the data.

One point of concern: what if there is a physical disaster at the location? It would be a good idea to move this data off-site as well. Another server (or similar system) could be set up at a different geographic location, and the main server would not only back itself up to the second machine but also transfer the data there. I did not have this luxury, so it wasn’t implemented. But if I could, I would compress all the data first and then FTP it, much like the Poor Man’s Log Shipping system described elsewhere on this site. Only changed data should be transferred, to save time and bandwidth; nightly full backups would be overkill and inefficient.

For off-site backups, Jungle Disk looks like a good option. It uses Amazon S3 for storage and isn’t very expensive. My worry is the long-term viability of the company and the software itself: what if they shut down a couple of years from now? For personal data that isn’t such a big deal, but would I want to recommend it to the boss? If the company proves viable, I see this as one of the best solutions around.

Poor Man’s Log Shipping

We wanted to have off-site backups for our database. SQL Server 2000 provides log shipping, with which a server can automatically copy its log backups to a standby server and restore them there. However, it can only be done on a local network.

We do have log shipping enabled at our data center. However, we also wanted to keep log backups at a remote site purely as backups, without restoring them. So we implemented a poor man’s log shipping using Windows Scheduled Tasks, the zip utility from Info-ZIP, and the Xlight FTP server.

We chose one full backup as our starting point; subsequent transaction log backups would be shipped off-site. We used 7-Zip to compress the roughly 40 GB database backup down to roughly 8 GB as a regular zip file. The settings were kept at their defaults, except that the resulting zip file was split into 650 MB volumes.

Using batch files (code at the end) scheduled to run once a day, we copy all files created from midnight of the previous day up to the moment the batch file runs into a temporary location on the local machine. This collection of files is compressed and then sent off-site via FTP. Since we do not have an instance of SQL Server at the remote site, we do not restore these logs or apply them to the full backup.

Main Batch File

:: Resources that provided the code or helped write it:
:: http://209.85.165.104/search?q=cache:BWcLwCl7CtgJ:www.experts-exchange.com/Operating_Systems/MSDOS/Q_21135047.html+batch+file+concat+string&hl=en&ct=clnk&cd=1&gl=us&client=firefox-a
:: http://64.85.16.166/adb/adbdos.htm
:: http://malektips.com/xp_dos_0013.html
:: http://malektips.com/xp_dos_0002.html
:: http://www.computerhope.com/batch.htm#4
:: http://www.computing.net/programming/wwwboard/forum/14356.html
echo %time%
c:
mkdir "C:\BKPS-TEMP"
:: yesterday.bat (below) echoes yesterday's date as mm-dd-yyyy
@FOR /F "tokens=*" %%i IN ('yesterday.bat') DO set ystrday=%%i
:: build a -yyyy-mm-dd suffix; assumes %date% expands like "Mon 06/15/2008"
set todaydate=-%date:~10,4%-%date:~4,2%-%date:~7,2%
:: stage everything created or changed since yesterday
xcopy "C:\BKPS\*.*" "C:\BKPS-TEMP" /D:%ystrday% /s /q
:: compress the staged files (Info-ZIP zip) and date-stamp the archive
zip -r "C:\BKPS-TEMP\bkp.zip" "C:\BKPS-TEMP"
cd "C:\BKPS-TEMP"
rename bkp.zip bkp%todaydate%.zip
:: ship the archive off-site, then step out of the staging directory and remove it
FTP -s:ftp.txt ftp.foobar.com
cd \
rmdir "C:\BKPS-TEMP" /s /q
echo %time%

FTP File

username
password
lcd “C:\BKPS-TEMP”
prompt
mput bkp*.zip
quit

yesterday.bat

:: This file gives yesterday’s date, that is, today minus one
:: The code was taken as is from the following web page
:: Get Yesterday date in MS DOS Batch file posted by srini_vc
:: If you are the author of the code and would like the code taken down, please leave a comment and we will get back to you

@echo off

set yyyy=

set $tok=1-3
for /f "tokens=1 delims=.:/-, " %%u in ('date /t') do set $d1=%%u
if "%$d1:~0,1%" GTR "9" set $tok=2-4
for /f "tokens=%$tok% delims=.:/-, " %%u in ('date /t') do (
for /f "skip=1 tokens=2-4 delims=/-,()." %%x in ('echo.^|date') do (
set %%x=%%u
set %%y=%%v
set %%z=%%w
set $d1=
set $tok=))

if "%yyyy%"=="" set yyyy=%yy%
if /I %yyyy% LSS 100 set /A yyyy=2000 + 1%yyyy% - 100

set CurDate=%mm%/%dd%/%yyyy%

set dayCnt=%1

if "%dayCnt%"=="" set dayCnt=1

REM Subtract the days here
set /A dd=1%dd% - 100 - %dayCnt%
set /A mm=1%mm% - 100

:CHKDAY

if /I %dd% GTR 0 goto DONE

set /A mm=%mm% - 1

if /I %mm% GTR 0 goto ADJUSTDAY

set /A mm=12
set /A yyyy=%yyyy% - 1

:ADJUSTDAY

if %mm%==1 goto SET31
if %mm%==2 goto LEAPCHK
if %mm%==3 goto SET31
if %mm%==4 goto SET30
if %mm%==5 goto SET31
if %mm%==6 goto SET30
if %mm%==7 goto SET31
if %mm%==8 goto SET31
if %mm%==9 goto SET30
if %mm%==10 goto SET31
if %mm%==11 goto SET30
REM ** Month 12 falls through

:SET31

set /A dd=31 + %dd%

goto CHKDAY

:SET30

set /A dd=30 + %dd%

goto CHKDAY

:LEAPCHK

set /A tt=%yyyy% %% 4

if not %tt%==0 goto SET28

set /A tt=%yyyy% %% 100

if not %tt%==0 goto SET29

set /A tt=%yyyy% %% 400

if %tt%==0 goto SET29

:SET28

set /A dd=28 + %dd%

goto CHKDAY

:SET29

set /A dd=29 + %dd%

goto CHKDAY

:DONE

if /I %mm% LSS 10 set mm=0%mm%
if /I %dd% LSS 10 set dd=0%dd%

echo %mm%-%dd%-%yyyy%
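
Since the script reads its day count from %1 and defaults it to 1, it can also be called with an argument to go further back; the main batch file doesn't use this, but it is handy on its own:

yesterday      (prints yesterday's date as mm-dd-yyyy)
yesterday 7    (prints the date from one week ago)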

Daylight Savings Time

Since server admins have to deal with Daylight Savings Time (DST) twice a year, the issue deserves to be settled once and for all in an environment. A very good post on this topic is Paul Randal’s How does daylight savings time affect disaster recovery?

An even better idea appears in the comments: why not set the computer clock to GMT (UTC) and store all times in the database in GMT (UTC) as well? That way no one has to bother with DST at the server level at all.
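
As a minimal T-SQL illustration of that idea (the audit_log table here is made up), SQL Server 2000's GETUTCDATE() returns the current UTC time regardless of the server's time zone or DST rules:

-- compare the server's local clock with UTC
SELECT GETDATE() AS local_time, GETUTCDATE() AS utc_time

-- store event times in UTC so DST transitions never shift or duplicate them
CREATE TABLE audit_log (event_time DATETIME NOT NULL)
INSERT INTO audit_log (event_time) VALUES (GETUTCDATE())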