cabal install --global considered harmful…

I had been installing Haskell packages using “cabal install --global”, as root. When the Debian Sid maintainers pushed out an update to GHC (the Glasgow Haskell Compilation System), it broke compilation of my XMonad configuration (because the DBus package was not available to the new GHC). My old xmonad configuration binary still worked, but I couldn’t make any changes to it.

I had been using the base GHC packages in Debian, but installed extra packages using the “cabal” installer. Even worse, I had been passing the --global option, since I was running “cabal install” as root. Since I had installed DBus using cabal in this way, when GHC updated to 7.6.3, DBus no longer worked.

I discovered this article describing some pitfalls of using cabal. Unfortunately, it didn’t tell me how to get out of this predicament. Thanks to typoclass and merijn in #haskell on irc.freenode.net, I was able to restore functionality (this time without using cabal to install the DBus modules). Here’s how I did it:

  1. I moved the GHC global directories to a backup location. To determine these I used the command:
    ghc-pkg list

    I did have to recreate the /var/lib/ghc/package.conf.d directory, so apt would not complain that it didn’t exist (and thus fail to remove the ghc package). A sketch of these commands appears after this list.

  2. I removed ghc using the command
    aptitude purge ghc libghc-? cabal-install xmonad ghc-doc ghc-haddock

    This is not actually the command I ran, but after repeating these instructions on my primary home workstation and laptop, I wanted to put everything into one command to make it easier for next time.

  3. Installed the DBus modules through apt:
    aptitude install libghc-xmonad{,-contrib}-dev libghc-dbus-dev xmonad

    This appears to have pulled in most of the packages I needed.

  4. Recompiled my XMonad configuration with the command:
    xmonad --recompile

    No more compilation errors!
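
For reference, step 1 amounted to something like the following. The global package database path comes from what ghc-pkg list reported on my Debian system (yours may differ), and the backup location is just an arbitrary directory I picked:

# Back up the global package database reported by ghc-pkg list
mkdir -p /root/ghc-backup
mv /var/lib/ghc/package.conf.d /root/ghc-backup/
# Recreate the empty directory so apt can remove the ghc package cleanly
mkdir /var/lib/ghc/package.conf.d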

From now on I will try to find a Debian package which provides the Haskell modules I need, rather than resorting to cabal (note that cabal-install is explicitly removed in step 2 above). Granted, my Haskell installation will not be current with upstream, but hopefully a GHC update via apt won’t break the compilation of my xmonad.hs file again. If I do need to use cabal to install Haskell modules, I’ll use it as my normal user, not root. That way my cabal configuration and local user ghc files can be wiped out more easily.
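
For the record, a user-level cabal install looks something like this (dbus is just an example package here); everything ends up under ~/.cabal and ~/.ghc, which are easy to blow away if they get into a bad state:

# Run as my normal user, not root
cabal update
cabal install --user dbus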

Increase random entropy pool in Debian sid

Hopefully this will be a short post. I saw some folks in IRC (one of the many #debian channels I’m connected to) chatting about /dev/urandom and /dev/random, and increasing the available entropy pool. This entropy pool is where all the random numbers generated by a Linux system come from. The more entropy available, the better the output of pseudorandom number generator (PRNG) interfaces like /dev/random and /dev/urandom. This matters for cryptography: if your random number pool is low on entropy, the sequence of numbers it produces can be guessed relatively easily. I found this Wikipedia article which briefly describes the technology on various operating systems.

The operating system file (well, in the /proc pseudo-filesystem) which displays how much entropy my system currently has is /proc/sys/kernel/random/entropy_avail. It will change over time; to watch it change I used this command:

watch -n 1 cat /proc/sys/kernel/random/entropy_avail

This showed my entropy fluctuating between 100 and 200, which is pretty low and not very useful (or secure). I did some research to find a way to increase this entropy pool. Probably the best option is a hardware random number generator (HRNG), sometimes called a true random number generator (TRNG). These cost money, money I don’t have to spend. I found randomsound, but running it did not appear to affect my entropy one way or the other (probably because my home machine doesn’t have a mic). I found this blog post, but it initially suggests a questionable method of increasing entropy. Its update, quietly hidden at the top of the post, gives the solution I came upon.

The solution was to use haveged. It uses nondeterministic timing effects in modern CPU hardware as its random source. When I ran it with the default options, my entropy pool shot up to between 1024 and 4096. Much improved. In a comment further down on Chris’s blog, someone suggested using /proc/sys/kernel/random/poolsize as the lower threshold, via the -w option. Debian provides an /etc/default/haveged file where you can place these options:

DAEMON_ARGS="-w $(cat /proc/sys/kernel/random/poolsize)"

Currently, poolsize is set to 4096. Should a new Debian kernel change the pool size, haveged will automatically pick up the new value. I have successfully set this on my main workstation machines at work and at home. I will set this on my laptop and my VPS systems, and see how it goes.
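
Putting it all together, installing and enabling haveged looks roughly like this on my Debian sid systems:

aptitude install haveged
# In /etc/default/haveged, set the low-water mark to the kernel's pool size:
#   DAEMON_ARGS="-w $(cat /proc/sys/kernel/random/poolsize)"
service haveged restart
# Entropy should now hover near the pool size (4096 here)
watch -n 1 cat /proc/sys/kernel/random/entropy_avail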

UPDATE: All but one of my VPS systems were able to use haveged. The one outlier is on an OpenVZ VM system, and I don’t have access to those particular parts of the kernel (even as root). I have relegated that VPS to being just a toy, since I can’t really use it for much else. I will probably cancel that subscription altogether. We’ll have to see about that.

KeePassX Database merge…

I’ve done a lot in the past three weeks. I finally went back to Mardi Gras in New Orleans, something I haven’t been able to do since I started working in Huntsville. I also managed to move this site to my old VPS. The edis.at OpenVZ VPS was a little underpowered for my tastes, though its pricing is very attractive. Turns out the original Apache2 documentation I first found (I can’t find the link for the life of me) was wrong: you CAN have multiple SSL sites behind one IP address, thanks to a TLS extension: Server Name Indication (SNI). This Google search should help you find the appropriate articles to set it up. All I did was back up both WordPress databases on my two sites, back up both sets of WordPress files, copy the relevant apache2 configurations over to my ChunkHost, restore the databases to the new server, and restore the WordPress files (to a different web root). It worked like a charm!
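
For anyone curious, the SNI side of things is just ordinary name-based virtual hosts on port 443, each with its own certificate. A minimal sketch (the hostnames and certificate paths below are made up) looks like:

# Two SSL sites behind one IP address, distinguished by SNI
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName blog.example.org
    DocumentRoot /var/www/blog
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/blog.example.org.pem
    SSLCertificateKeyFile /etc/ssl/private/blog.example.org.key
</VirtualHost>

<VirtualHost *:443>
    ServerName other.example.org
    DocumentRoot /var/www/other
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/other.example.org.pem
    SSLCertificateKeyFile /etc/ssl/private/other.example.org.key
</VirtualHost>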

While I was in Mobile last week, my ISP (Knology) decided to do some network maintenance. My IP address changed, and my pfSense router hadn’t been configured to automatically obtain a new lease. Thus I was unable to reach my KeePassX database. I will get to the KeePassX database merge below. To rectify the WAN DHCP lease problem, I discovered these instructions. Since Knology changes my IP address so infrequently, I’m not likely to even notice the problem until I look at my IP address.

When I was down in Mobile last week, I went to the USA Career Fair to seek employment in Mobile. I got a lot of good leads, so I started filling out online applications. However, saving my passwords to KeePassX presented a problem: I didn’t have access to my password database via sshfs. That meant I had to use my local copy, and anything I added or deleted there wouldn’t be reflected in my master database. When I got back to Huntsville and sorted out my WAN connection problem, I needed a way to merge the two databases (I did not want to do it by hand).

I noticed that KeePassX has import and export functions, but no explicit merge. I didn’t want to import the laptop’s local database into the master, since I was afraid of creating a lot of duplicates. I did find this KeePassX forum topic, which presents some solutions. The patch that was linked isn’t directly accessible to the public, and it’s unclear whether it was added to keepassx in Debian sid. However, further down that page, someone had posted a public-domain Python script which will merge the two databases. Here’s a link to the script. I backed up my databases in case something went wrong, and renamed my master database to avoid overwriting it in place.

Basically, you provide three XML database names: the first source, the second source, and the destination file. However, it only seems to add the entries that are in both files, plus the ones that are only in the first. Since I had entries that were unique to each file, it appeared that all I had to do was run kdb-merge twice, swapping the first and second sources. This is essentially what I ran:


kdb-merge master.xml laptop.xml merged.xml
kdb-merge laptop.xml master.xml merged.xml

The true test was loading up the merged.xml file into KeePassX. I loaded a new, blank database (which it turns out I didn’t have to do; importing an XML file apparently creates a new database). I then made sure the different entries from both files were there. I still have the backup files, should something be missing or be totally wrong.

One final step was to shred the XML files, since they contain the passwords in plaintext. A simple rm/remove/delete would not do, since deleting a file does not overwrite its contents, which remain on disk. Perhaps if I had SSDs in this system it’d be different. That’s for the next workstation.
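
The shred step was just something like this (the -u flag removes the files after overwriting them):

shred -u master.xml laptop.xml merged.xml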

Getting Windows 7 to boot from Debian Linux PXE boot, round 439…

The guide that I found, Ultimate Deployment, didn’t work straight away. I ran into a bug generating the BCD file, probably because the winpe.wim that I generated is corrupt. So maybe it is difficult to get this working after all. But I’m not ready to give up yet; wimlib contains an open-source implementation of the Windows IMaging (WIM) manipulation tool, imagex.

The first step in the Ultimate Deployment guide is to set up the DHCP server and TFTP server. I already did that for Debian and Fedora (see my previous posts here and here). Rather than scrap what I had before, I will remove everything which I know doesn’t work. Here’s my current, broken Windows PE setup:


.
├── boot
│   ├── bcd
│   └── fonts
│       ├── chs_boot.ttf
│       ├── cht_boot.ttf
│       ├── jpn_boot.ttf
│       ├── kor_boot.ttf
│       └── wgl4_boot.ttf
├── bootmgr.exe
├── debian
│   ├── debian.menu.bak
│   ├── menu.cfg
│   ├── sid
│   │   ├── amd64
│   │   └── i386
│   ├── splash.png
│   └── wheezy
│       └── amd64
├── fedora
│   ├── 18
│   │   ├── initrd.img
│   │   └── vmlinuz
│   ├── menu.cfg
│   └── splash.png
├── hiberfil.sys
├── pxeboot.com
├── pxelinux.0
├── pxelinux.cfg
│   ├── default
│   ├── logo.png
│   ├── main.png
│   ├── pxe.conf
│   └── vesamenu.c32
├── wdsnbp.com
└── windows
    ├── 7.x64
    │   ├── __boot_metadata__
    │   ├── bootmgr.exe
    │   ├── boot.sdi
    │   ├── install.cmd
    │   ├── pxeboot.com
    │   ├── wdsnbp.0
    │   ├── winpeshl.ini
    │   └── winpe.wim
    ├── __boot_metadata__
    ├── menu.cfg
    ├── pxeboot.com
    ├── remap
    ├── splash.png
    ├── wdsnbp.com
    └── XP

I will remove everything from the windows/7.x64 directory, as well as all the /boot and Windows-related files in the TFTP root (/srv/tftp). One thing I want to make sure of is that the WIM and BCD files (and any related helpers) are regenerated. Now just windows/menu.cfg and the splash image exist.
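
The cleanup amounted to roughly the following, with the paths taken from the tree above:

cd /srv/tftp
# Clear out the broken Windows PE attempt
rm -r windows/7.x64 windows/XP windows/__boot_metadata__ windows/pxeboot.com windows/wdsnbp.com windows/remap
# Remove the Windows boot files from the TFTP root itself
rm -r boot bootmgr.exe hiberfil.sys pxeboot.com wdsnbp.com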

The next step is to set up the Samba share (which I’ve already done, but will go over again). The deployment guide suggests /work/sambashare as the path; I chose /var/spool/mirror/windows. The Windows 7 ISO I have is mounted on /var/spool/mirror/windows/win7.x64.sp1/. The “smbclient --no-pass --list localhost” command confirms that REMINST is a known share. After some tribulation with midnight-commander (Debian does not compile SMB VFS support into mc by default), I was able to connect and browse the shares on my home workstation. I even created a symlink from “win7” to “win7.x64.sp1”, the former being the directory the deployment guide suggests putting in the Samba share root.
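
For reference, the share itself is only a few lines in /etc/samba/smb.conf; mine looks roughly like this (share name REMINST per the guide, path adjusted to my layout):

[REMINST]
    comment = Windows installation media
    path = /var/spool/mirror/windows
    read only = yes
    guest ok = yes
    browseable = yes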

The guide has the administrator enable a log monitoring tool called “binl”. This will be mostly useless unless we point it at the TFTP log (which on Debian is /var/log/daemon.log). I also had to ensure that a very verbose option was set in /etc/default/tftpd-hpa, adding “-vvv” to the TFTP_OPTIONS variable.
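
The relevant part of /etc/default/tftpd-hpa ends up looking something like this (the first three values are the Debian defaults):

TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/srv/tftp"
TFTP_ADDRESS="0.0.0.0:69"
TFTP_OPTIONS="--secure -vvv"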

Part 3 of the deployment guide is where I decided to diverge. Rather than use the decidedly closed-source, yet still buggy, tools they provided, I will use wimlib’s imagex. The first step is to mount the image(s) in the Windows ISO’s /sources/boot.wim. Since there are two images within the boot.wim (Windows PE and Windows Setup), I mounted both (at /mnt/win and /mnt/win.2, respectively). The first task is to find and extract the following files (a sketch of the commands appears after the list):


//windows/boot/pxe/pxeboot.n12
//windows/boot/pxe/bootmgr.exe
//windows/boot/pxe/wdsnbp.com
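
With wimlib installed, mounting the images and copying those three files into the TFTP root went something like this (mount points are mine; the paths inside the image follow the list above, though the case may differ):

# Mount both images in the ISO's boot.wim (Windows PE and Windows Setup)
wimlib-imagex mount /var/spool/mirror/windows/win7.x64.sp1/sources/boot.wim 1 /mnt/win
wimlib-imagex mount /var/spool/mirror/windows/win7.x64.sp1/sources/boot.wim 2 /mnt/win.2

# Copy the PXE pieces into the TFTP root
cp /mnt/win/windows/boot/pxe/pxeboot.n12 /srv/tftp/
cp /mnt/win/windows/boot/pxe/bootmgr.exe /srv/tftp/
cp /mnt/win/windows/boot/pxe/wdsnbp.com  /srv/tftp/

wimlib-imagex unmount /mnt/win
wimlib-imagex unmount /mnt/win.2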

The Windows PE installation process expects these files to be in the TFTP root (/srv/tftp), so I placed them there. It looks like much of what the deployment guide sets up is an unattended installation procedure. That’s not my goal right now; I’m just trying to launch a typical, attended Windows 7 Professional install, all from PXE. I’ll go with the stock boot.wim and see what happens… Looks like it’s a success! At least, I’ve gotten farther than I ever have with this Win7 attempt. I’m now at the Windows 7 “Setup is starting…” screen, and I have a rotating blue “wait” ring… positive signs! Hrmm, it doesn’t work: the WDS client can’t connect to my TFTP server.

First thing I’m going to do is remap everything so the Windows files aren’t littering my TFTP root. Simple enough. I won’t worry about the files conflicting with any other Windows installs I want to put on here (since that list is probably effectively nil). Now it boots, but the installer can’t find the Windows Deployment Service (WDS), because I don’t have one. I will continue with the instructions from Ultimate Deployment, starting with section 3…
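
The remapping can be done with tftpd-hpa’s map-file feature: add "-m /srv/tftp/windows/remap" to TFTP_OPTIONS (that path is just where I chose to keep the map file) and fill it with rewrite rules. A sketch of the rules, assuming the layout above, looks like:

# Convert the backslashes Windows clients use into forward slashes
rg \\ /
# Redirect the boot loader's requests into the windows/7.x64 directory
r ^boot/ windows/7.x64/boot/
r ^bootmgr.exe windows/7.x64/bootmgr.exe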

And I got lost. I stopped making notes. I ended up using the install.cmd. Rather than post the contents here (since it’s rather large), I’ve linked to it here: install.cmd. Since I was using the stock bcd file, it didn’t have the hive keys that the original install.cmd references. I hard-coded my internal IP address and had it call setup.exe on the mounted ISO.

Now, all I need to do is find a valid Windows 7 x64 Professional registration key, and I’ll be able to test the entire setup!