December 23, 2012

Ubuntu 12.10: Connect to Microsoft VPN

I recently upgraded to Ubuntu 12.10 on my main desktop machine from scratch, which means a number of things which had been installed and configured need to be re-done. One of those things is my VPN connection to work, which runs Windows Server 2008 as the VPN server.

If you have ever tried to configure a Linux machine to connect to a Microsoft-based VPN, you know that it is not as straightforward as it could be. It is more of a voodoo ritual than a science. I figured it would be a good idea to capture the steps for future reference.

This first part is adapted from the Ubuntu Wiki for posterity; the original appears there under the heading VPN setup in Ubuntu 9.10. Apparently it originates from an Ubuntu Forums post by user sweisler. Thanks, sweisler, wherever you are.

First, there was no need to install any additional packages; apparently everything needed is included by default.

  • Open VPN configuration screen:
    • Click on the network icon in the upper right of the desktop
    • Go to the VPN Connections menu
    • Select Configure VPN…
  • Add a new PPTP connection
  • On the VPN tab, set the following:
    • Connection name (whatever you want)
    • Uncheck Connect automatically (you can change this later)
    • Gateway (this is the VPN server)
    • User name (for domain-based user accounts, use domain\username)
    • Do not set Password; do change the pulldown to Always Ask
    • Do not set NT Domain
    • Uncheck Available to all users (this works either way, but I am assuming you don’t really want your kid to have access to the VPN)
  • PPTP Advanced Options (Advanced button from the VPN tab)
    • Uncheck all authentication methods except MSCHAPv2
    • Check Use Point-to-Point encryption (MPPE)
    • Leave Security set at All Available (Default)
    • Check Allow stateful inspection
    • Uncheck Allow BSD data compression
    • Uncheck Allow Deflate data compression
    • Uncheck Use TCP header compression
    • Uncheck Send PPP echo packets (this setting works either way, check it for debugging purposes)

At this point save it and test it. Once the VPN connection is working you may want to try to tweak it further as described below.

One problem with the VPN I connect to is that all traffic ends up using the VPN when I am connected. This is less than ideal if you are connecting to servers on the internet while the VPN is connected since the traffic goes through the VPN server before coming to you. The following describes the settings for routing only the proper traffic to the VPN. (Read them all the way through first to make sure you have all the necessary information.)

  • On the IPv4 Settings tab
    • Set Additional DNS servers using the IP address of the DNS server for the VPN. (You may need to ask your IT guy for this; there should be a way to discover it when connecting as above but it escapes me.)
    • Set Additional search domains. Set this to the domain suffix of the machines on the VPN. For example, if the machines have fully qualified names like dbserver.corp.example.com, set it to corp.example.com.
  • Click the Routes button.
    • Check Use this connection only for resources on its network
    • Add a route:
      • For Address, use the internal IP address of the VPN server applied against the netmask below; e.g. if the VPN server is 192.168.1.5 and the netmask is 255.255.255.0, use 192.168.1.0 (this is different in 14.04; in the past one could just use the IP of the VPN server). Again, this should be the internal IP address for getting to the machine on the intranet, not the external IP address for getting to the machine from the internet.
      • For Netmask, use the netmask of your intranet. (If you are confused, ask your IT guy what to use for both this and the Address.) For many networks this will be 255.255.255.0, but for many others it will be different.
      • For Gateway, use the external IP address of the VPN server. This should match the Gateway defined on the VPN tab. (I’m not sure what happens if you are using a server name there. I suspect you should match the names, but you may need to experiment.)
      • Do not set the Metric unless you know what you are doing.
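In case the “applied against the netmask” step above sounds mysterious: it is just a bitwise AND of the address and the mask, octet by octet. A quick shell sketch (the addresses here are made-up examples, not values from any real VPN):

```shell
# Network address = IP AND netmask, computed octet by octet.
# Example values only -- substitute your own VPN server IP and netmask.
ip=192.168.1.5
mask=255.255.255.0
IFS=. read -r i1 i2 i3 i4 <<EOF
$ip
EOF
IFS=. read -r m1 m2 m3 m4 <<EOF
$mask
EOF
echo "$(( i1 & m1 )).$(( i2 & m2 )).$(( i3 & m3 )).$(( i4 & m4 ))"
# -> 192.168.1.0
```

The result is what goes in the Address field.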

OK, so now when you connect you should see regular traffic going directly to the internet and intranet traffic directed to the VPN server. You can test this out with traceroute (which you may need to install). You should also be able to refer to machines on the intranet using their short names (e.g. dbserver instead of dbserver.corp.example.com).

Let me know how these instructions work for you and what type of systems you’ve been able to connect.

December 16, 2012

Ubuntu 12.10: Minidlna on Boot

A big hat tip to Asaf Shahar for this one.

I recently upgraded to Ubuntu 12.10 on my main desktop machine from scratch, which means a number of things which had been installed and configured need to be re-done. One of those things is minidlna, a lightweight DLNA server.

If you don’t know, DLNA is a protocol for sharing media to devices. In my case, I use it to stream music, video and pictures from my desktop to my Blu-Ray player. It’s not perfect, at least in that the interface on the Blu-Ray player leaves much to be desired, but it works.

Minidlna does not come standard with Ubuntu, but it is in the repositories and installation is as easy as sudo apt-get install minidlna. Afterwards you configure /etc/minidlna.conf and you are good to go. The comments inside /etc/minidlna.conf are good enough that you won’t need further guidance from me here.

While all that was relatively painless, I soon discovered that minidlna was not starting on reboot. I checked the script in /etc/init.d and the symlinks in the various /etc/rc#.d/ directories and everything was correct. It started with no problem by hand after boot using sudo service minidlna start. It was time to take the fight to Google.

I quickly discovered a bug report that seemed appropriate. Apparently minidlna is attempting to start before networking is up and that causes it to error out. (If you have changed to Ubuntu 12.10 you may have noticed that networking does not start until after the login screen is displayed — a curious decision.)

If you have taken a look at the bug report, you’ll see a work-around posted by Asaf Shahar. He created a script using upstart to transform minidlna into a service managed by that tool.

I wasn’t quite convinced that was the answer for me, because that solution would skip whatever set up occurs in the /etc/init.d/minidlna script. At first I thought I could change his upstart script to simply invoke the minidlna script instead of the executable, but upstart was not created with that use in mind. (When creating an upstart service, upstart expects the executable to fork 1 or 2 times; a script will fork a great number of times and there is no way to tell upstart which process is actually the server process.)

The next natural solution would be to integrate the /etc/init.d/minidlna script into Shahar’s upstart script. But that would mean keeping the script up to date whenever the /etc/init.d/minidlna script changed. That’s not really something I want to have to do; after all I might even miss the fact that minidlna was updated. So another solution was needed.

In the process of researching upstart I discovered that one can use it for more than just servers. It can also execute a task (i.e. a short-lived process expected to finish on its own). I decided to have upstart run the minidlna script once the machine booted and networking was enabled. Now this is not perfect in that minidlna is still trying to start and failing on the normal boot process, but I can live with that.

To do this, create a file called /etc/init/start-minidlna.conf with the following contents:

# Task to start Minidlna - lightweight DLNA/UPNP media server
# Minidlna is not starting correctly on boot; see the bug report
description "Task to start minidlna"

start on (local-filesystems and net-device-up IFACE!=lo)


exec service minidlna start

That’s all there is to it. Now minidlna starts on boot. Once again I want to thank Asaf Shahar for his help.

December 9, 2012

Installing Ubuntu 12.10 on an SSD, Part 3

Recently I took the plunge and put an SSD drive into my desktop. Since I needed to re-install the OS, I figured I would install the latest Ubuntu, version 12.10. I went over my trials and tribulations of getting the OS installed in part 1, and dealt with swap in part 2. Today we finish up the tweaks for the SSD.

First we add noatime and discard to the /etc/fstab options for the drive:

UUID=abc...    /     ext4    errors=remount-ro,noatime,discard 0       1

(Note I cut my UUID for brevity. Yours should be much longer.) The noatime option keeps the OS from updating the access time of a file every time it is read. This reduces the number of writes to the SSD drive. The discard option enables the TRIM command on the file system. There are some details to delve into here depending on your particular SSD, but for most drives you’ll want it enabled.

Next we change the I/O scheduler to use the deadline algorithm instead of an elevator algorithm. For a traditional hard disk drive, the data is read from the platter by a head. The head must be moved to the correct position to read the desired data. The disk spins, which means the head need only move in one dimension, radially. For this reason, the scheduling algorithms for hard disk drives are referred to as elevator algorithms.

The SSD does not have the same limitations as the HDD. There is no head to move into position in order to read or write the data. So it is not really important what order the I/O is scheduled. There is no reason to force some I/O operations to wait while others, requested later, are served as would happen in the elevator algorithm. So instead we use the deadline algorithm, where each operation is assigned a deadline.

Since we have a mixed environment, we cannot just set it system wide via grub kernel options. That is, we can’t just use the elevator or deadline algorithm for all disks since we have both HDDs and SSDs. In order to use the correct algorithm with the proper drive type, create a file in /etc/udev/rules.d with contents:

# set deadline scheduler for non-rotating disks
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"

# set cfq scheduler for rotating disks
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"

I named mine 99-scheduler.rules. The files in this directory are automatically processed by udev, the device manager for the kernel. Note that cfq, or Completely Fair Queuing, is the default scheduler.

The next tweak to make is to do something about the /tmp directory. This is another change that is designed to reduce writes to the SSD (an overblown concern) but you might appreciate the effects on your system in any case. I decided to mount the /tmp directory in RAM. Add the following to /etc/fstab (or replace any existing mount for /tmp):

none /tmp tmpfs defaults 0 0

By default this uses about half the RAM for /tmp. If you want more control, you can use a line like:

tmpfs /tmp tmpfs nodev,nosuid,size=7G 0 0

Keep in mind that certain operations that write large files to /tmp might be adversely affected; for example, burning a large DVD.
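If you’re curious what “about half the RAM” works out to on your machine, you can read it off /proc/meminfo (a quick sketch; the figures here are in kB):

```shell
# tmpfs defaults to half of physical RAM; derive that figure by hand.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "default /tmp size: $(( total_kb / 2 )) kB"
```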


December 5, 2012

Installing Ubuntu 12.10 on an SSD, Part 2

Recently I took the plunge and put an SSD drive into my desktop. Since I needed to re-install the OS, I figured I would install the latest Ubuntu, version 12.10. I went over my trials and tribulations of getting the OS installed in part 1; today we are going to talk about some changes I made afterwards to support the SSD.

First things first: dealing with the swap partition. I had a swap partition on the old HDD but what is the point of having all your programs load quickly if swap is going to be on the old, slow disk? I decided to deactivate the swap partition and go with a swap file moving forward for maximum flexibility.

Everything I read said that modern Linux kernels will perform just as well with a swap file as with a swap partition. The only documented drawback I found was that the Ubuntu hibernate implementation (that’s the OS hibernate function, not the Java Hibernate persistence engine) does not work with a swap file and requires a swap partition. Since I never use that functionality, I was good to go:

# Create the swap file as an empty, 8GiB file
sudo fallocate -l 8G /mnt/8GiB.swap
# The swap file should not be readable by normal users, otherwise they
# could snoop on the memory of other users' processes
sudo chmod 600 /mnt/8GiB.swap
# Format the file as swap
sudo mkswap /mnt/8GiB.swap
# Tell the OS about the new swap file
sudo swapon /mnt/8GiB.swap
# Check that it worked (look for SwapTotal)
cat /proc/meminfo
# Determine old swap partition device
sudo fdisk -l /dev/sdb
# Decommission the old swap partition (your specific partition will vary)
sudo swapoff /dev/sdb6

At this point there is some housekeeping to do in /etc/fstab file to ensure the changes persist on the next boot. Remove the line for the old swap partition and add the following for the new swap file:

/mnt/8GiB.swap  none            swap    sw              0       0

Now we want to balance the fact that the swap file is on the SSD with the desire to reduce writes to the SSD to prolong the life of the drive. (Although I am of the opinion that such concerns are overblown, I like the effect of this change anyway.) We will tell the system to prefer RAM over swap using the swappiness setting. Add (or edit) the following in /etc/sysctl.conf:
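The snippet itself is just two kernel knobs; the exact numbers are a matter of taste, so treat the values below as commonly suggested starting points rather than the only right answer:

```
# Prefer keeping pages in RAM over swapping them out
vm.swappiness=10
# Be less eager to reclaim the inode/dentry caches
vm.vfs_cache_pressure=50
```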


If you are interested in the technical details as to what these settings do, check out the kernel documentation (Documentation/sysctl/vm.txt in the kernel source).

With the swap configuration complete, we turn our attention to a couple of other tweaks for the SSD performance. However, that will have to wait for the next entry.


December 2, 2012

Installing Ubuntu 12.10 on an SSD, Part 1

Recently I took the plunge and put an SSD drive into my desktop. Since I needed to re-install the OS, I figured I would install the latest Ubuntu, version 12.10.

I had hoped things would go smoother. When running the installer, it would hang after the second screen the first time through (unfortunately I ended up installing a couple of times). This is the screen where they tell you to verify the computer has enough space and is connected to the internet. After clicking Continue I would just get the spinning mouse icon. Fortunately the Quit button worked and dropped me into the sample desktop. From there, running the install again worked more smoothly.

I kept my old installation around on the old disk for reference. For example, it would make re-configuring my NFS shares easy. So I had three disks: my large HDD for /home, my old OS HDD mounted on /u1204 and the new SSD which would contain 12.10. The installation completed and I rebooted.

And was promptly dropped back into my old installation. Must be a BIOS issue, I figured. I rebooted and went into the BIOS set-up. It took me a while to realize that my BIOS was a little different from what I had been used to (a few months ago I upgraded from an ASUS to a BIOStar mainboard). For boot order you first set the categories of disks (e.g. DVD-ROM, SATA drives, removable media), then you set the individual disks within the categories. This is obfuscated by using the first disk of the category when setting the category. The upshot was that, after examining all of this, it sure seemed like the BIOS had been correctly configured all along (which turned out to be the case).

Next I figure I would go to the boot menu and select the SSD disk explicitly. OK, reboot and watch for the key to enter the boot menu. Oops, there doesn’t seem to be one. I finally figured out that the BIOS set-up contains the boot menu within it. One more mystery solved. Unfortunately selecting the SSD drive in the boot menu did not help.

OK, I’ll boot into my old OS and start googling the problem. I log in and am immediately treated to an error message concerning updating the .ICEauthority file, then dropped back to the login screen. I use Ctrl-Alt-F1 to get to the terminal and log in that way. Fortunately that works. I fix the permissions on the .ICEauthority file (which looked fine to begin with) and no luck. I delete the .ICEauthority file and no luck. Finally I think to check the permissions on the /home/<user> directory. That was the problem: I had created myself first on this installation, whereas in the old OS I had created a different user first. The 12.10 install changed the owner of the home directory to a UID which, under 12.04, belongs to a different user (the file system only tracks the UID). I fixed the ownership and that did the trick.

Onto the boot issue. The consensus on Google appears to be to try the Boot Repair utility. So I booted the Ubuntu Live USB, installed Boot Repair, fixed the settings and ran it. I got an error message to the effect of: the disk is GPT and the BIOS is non-EFI, so create a BIOS boot partition on the disk (<1MB, unformatted, with the bios_grub flag).

A little googling and I find that Wikipedia actually has the information I need on BIOS boot partitions. I run GParted from the Live USB, resize the existing partition on the SSD (taking space from the end, since it doesn’t matter where the BIOS boot partition is) and add an unformatted partition at the end. GParted wouldn’t let me add the bios_grub flag at creation time; I had to save the partition changes first.

At this point I figure I would run through the installation again, so I can fix my user issue and get grub installed correctly in one swoop. This time everything goes well and grub is installed.

Doing research afterwards for this write-up, it looks like there were a couple of other potential solutions. I could have looked in the BIOS to see if it supports EFI and enabled it. I also could have used GParted to use a different partition table than GPT. (GPT is apparently the default used by the Ubuntu installer; it never gave me the option for anything else.)

I ended up filing a bug report on this; we will see what happens.

Next up, for part 2, we get into some of the details for configuring the system for the SSD.


October 24, 2012

Conditional Component Binding in JSF

In the process of converting an application from JSF 1.2 to JSF 2.1, I came across the following structure:

<h:inputText value="#{myBacker.myValue}">
    <c:if test="#{not empty myBacker}">
        <f:attribute name="binding" value="#{myBacker.myInput}"/>
    </c:if>
</h:inputText>

The intent of the code is clear: if the backing bean is defined, apply the binding. While it worked in JSF 1.2 with Facelets, in JSF 2.1, using the built-in implementation of Facelets, the code caused a javax.el.PropertyNotFoundException to be thrown during the Restore View phase.

So a workaround needed to be found. One possibility was to rewrite the include file to be two separate include files: one that requires the binding and one that omits it. But then we have a bunch of repeated code and a lot of callers to modify. Another possibility was to ensure that the backing bean always existed, but that requires a lot of unnecessary beans and a way to handle cases when the include was invoked multiple times.

In the end I decided to write a Facelets TagHandler that would conditionally bind the component depending on whether the binding was well-defined. Note that I am taking advantage of the fact that we are only using the binding to set the component on the backing bean; if the backing bean was providing the component to the view something else would need to be done.

public class BindingHandler extends TagHandler {
    private final TagAttribute parentBinding;

    public BindingHandler(final TagConfig config) {
        super(config);
        parentBinding = getRequiredAttribute("parentBinding");
    }

    @Override
    public void apply(final FaceletContext ctx, final UIComponent parent)
        throws IOException {
        final ValueExpression bindingExpression =
            parentBinding.getValueExpression(ctx, UIComponent.class);
        try {
            bindingExpression.setValue(ctx, parent);
            parent.setValueExpression("binding", bindingExpression);
        } catch (final PropertyNotFoundException ignored) {
            // The backing bean is not defined; skip the binding
        }
    }
}

For the actual production code, I included some tests to try to keep the PropertyNotFoundException from being triggered. That is, I tried to suss out whether setValue would throw a PropertyNotFoundException before invoking it. I figured whatever tests I came up with would be more performant than depending on an exception being thrown.

All that was left was to configure the TagHandler in my taglib.xml and use it on the page. The configuration is straightforward:
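It amounts to a single <tag> entry along these lines (the handler package here is a placeholder for your own):

```
<tag>
    <tag-name>binding</tag-name>
    <handler-class>com.example.faces.BindingHandler</handler-class>
</tag>
```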


and so is the usage:

<h:inputText value="#{myBacker.myValue}">
    <p:binding parentBinding="#{myBacker.myInput}" />
</h:inputText>

Now I get conditional binding and I don’t have to change any clients of the include.

September 30, 2012

An Io Guessing Game

So when I get the chance I am working through Bruce Tate’s Seven Languages in Seven Weeks (it might end up being seven years in my case) and commenting on it when the mood strikes. I am currently working through the exercises for Io Day 2 and today I was contemplating the final exercise.

In this task, Tate asks us to write a program that will generate a random number between 1 and 100 and then prompt the user to guess it within ten guesses. The user is given feedback in the form of hotter or colder. The task itself was relatively straightforward after discovering how to read input from the console:

GuessMe := Object clone do(
    min ::= 1
    max ::= 100
    guessLimit ::= 10
    value ::= nil
    lastGuess ::= nil
    guessesLeft ::= 0

    start := method(
        value = Random value(min, max+1) floor
        guessesLeft = guessLimit
    )

    reset := method(
        value = nil
        lastGuess = nil
        guessesLeft = 0
    )

    guess := method(guess,
        guessesLeft = guessesLeft - 1
        oldGuess := lastGuess
        lastGuess = guess
        if (guess == value) then (return "Done")
        if (oldGuess == nil) then (return "Guess again")
        lastDelta := (value-oldGuess) abs
        delta := (value-guess) abs
        if (delta == lastDelta) then (
            return "Same temperature"
        ) elseif (delta < lastDelta) then (
            return "Hotter"
        ) else (
            return "Colder"
        )
    )

    interactive := method(
        if (value == nil) then (start)
        stdin := File standardInput
        while (guessesLeft > 0,
            in := stdin readLine asNumber
            result := guess(in)
            result println
            if (result == "Done") then (break)
        )
        if (guessesLeft <= 0) then (
            ("No more guesses, the value was " .. value) println
        )
    )
)

If you want to try it, save the above to a file, launch the Io interpreter in the same directory, load the file with doFile, and use the command GuessMe clone interactive.

So what really intrigued me about this assignment was not the exercise itself but the resulting game. It gets obvious pretty fast that the standard binary search solution for guessing games would not work; we need to tweak it somehow. (This is probably an elementary exercise in an algorithms course but it has been a while so it was a fun exercise for me.)

The solution comes from literally thinking outside the box. That is, you need to realize that you can guess numbers outside the range of possible values. So, let’s say that you start by guessing 1 and that’s not it. What should your next guess be? At this point you know the value is between 2 and 100 inclusive, i.e. [2,100]. You’d like to cut your range in half, that is, determine if the value is in the range [2,51] or the range [52,100]. What’s the proper guess?

We want to make our next guess further away from 51 than 1 is. At the same time, it should be closer to 52 than 1 is. In this case the guess is 102, since 51 - 1 = 50 = 102 - 52. Now if the result is hotter, the new range is [52,100]. If the result of the guess is colder, the new range is [2,51].

Now that we’ve seen how we can cut our options in half on every guess (following the first), we can start to create an algorithm. Suppose the range is [x,y] and the last guess is L. First, we find the bifurcation point a = floor((x+y)/2). So we will end up with either [x,a] or [a+1,y].

What should the guess g be? If L is less than a then g will be greater than a and we want g - (a+1) = a - L. This becomes g = 2a - L + 1. If L was greater than a then we want a - g = L - (a+1) which also reduces to g = 2a - L + 1.
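Sanity-checking the formula against the walk-through above with a quick shell calculation (range [2,100], last guess 1):

```shell
# Range [2,100] after a first guess of L=1: where should we guess next?
x=2; y=100; L=1
a=$(( (x + y) / 2 ))       # bifurcation point
g=$(( 2*a - L + 1 ))       # next guess per g = 2a - L + 1
echo "a=$a g=$g"
# -> a=51 g=102
```

which matches the guess of 102 from the example.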

Now we have enough to write a prototype to solve our game:

Guesser := Object clone do(
    min ::= 0
    max ::= 0
    guess ::= 0
    result ::= nil

    report := method(
        "Guessed #{guess}: #{result}; [#{min},#{max}]" interpolate println
    )

    guessIt := method(guessMe,
        guessMe start
        min = guessMe min
        max = guessMe max

        // Get through the first guess, it is special
        guess = min
        result = guessMe guess(guess)
        // Assume min wasn't it, no harm if it was
        last := min
        min = min + 1
        report

        while (result != "Done",
            if (guessMe guessesLeft <= 0) then (
                "No guesses left, the value was #{guessMe value}" interpolate println
                break
            )
            avg := ((min+max)/2) floor
            guess = if(min==max, min, 2*avg - last + 1)
            result = guessMe guess(guess)
            if (result == "Colder") then (
                if (guess < last) then (
                    min = avg + 1
                ) else ( // last < guess
                    max = avg
                )
            ) elseif (result == "Hotter") then (
                if (guess < last) then (
                    max = avg
                ) else ( // last < guess
                    min = avg + 1
                )
            ) elseif (result != "Done") then (
                "Did not understand result" println
            )
            report
            last = guess
        )
        guess
    )
)

Here’s the Guesser in action:

Io> gm := GuessMe clone
==>  GuessMe_0xa18e3b8:

Io> guesser := Guesser clone
==>  Guesser_0xa1756d8:

Io> guesser guessIt(gm)
Guessed 1: Guess again; [2,100]
Guessed 102: Colder; [2,51]
Guessed -49: Hotter; [2,26]
Guessed 78: Hotter; [15,26]
Guessed -37: Colder; [21,26]
Guessed 84: Hotter; [24,26]
Guessed -33: Hotter; [24,25]
Guessed 82: Colder; [24,24]
Guessed 24: Done; [24,24]
==> 24

If you enjoyed this post, here are some exercises you may want to think about:

  1. What happens if we start our algorithm with 50 instead of 1?
  2. Will the algorithm always be able to find the value within 10 guesses (when the range is [1,100])? What’s the largest range over which it can guarantee finding the answer given 10 guesses? Given n guesses?
  3. Can the algorithm be improved upon?
  4. What if the game told you whether you were hotter or colder than all of your previous guesses? How would you change the algorithm to use fewer guesses?

September 26, 2012

OS-specific ANT properties

The ANT build tool for Java does a pretty decent job of abstracting away OS concerns from your build script. E.g., file paths can always be represented using the / separator and there are tasks for all the typical file system and build operations.

However, once in a while you may find yourself in a situation where you need ANT to behave differently based on the operating system. In my case, I needed to specify the path to the dot executable within graphviz, a graph drawing tool used by the Hibernate Tools ANT package. Rather than torture my environment, I figured I would set a property based on the OS:

<target name="schema-doc">
    <!-- dot.args / dot.exec are arbitrary property names; use whatever you like -->
    <property name="dot.args"
              value="-Gsplines=true -Edecorate" />
    <condition property="dot.exec" value="/usr/bin/fdp">
        <os family="unix" />
    </condition>
    <condition property="dot.exec"
               value="C:/path/to/graphviz/fdp.exe">
        <os family="windows" />
    </condition>
    <mkdir dir="${build.dir}/doc" />
    <delete>
        <fileset dir="${build.dir}/doc" />
    </delete>
    <hibernatetool destdir="${build.dir}/doc">
        <configuration configurationfile="${basedir}/hibernate-tool.cfg.xml">
            <fileset dir="${src}" includes="**/*.hbm.xml" />
        </configuration>
        <classpath refid="hibernate.classpath" />
        <property key="dot.executable"
                  value="${dot.exec} ${dot.args}" />
    </hibernatetool>
</target>

The key portion here occurs near the top, using the <condition> directive. Here I’ve placed it inside the <target>, but you can use it outside of a <target> as well. The <os> element within the <condition> allows you to test based on properties of the underlying operating system. I’ve chosen to base the test on family, but there are also name, version and arch tests.

(As a bonus tip here, I’ve also shown you how to pass extra arguments to graphviz when running it within Hibernate Tools.)

Now this is all well and good for one property, which is the situation I was dealing with, but what if you have a whole mess of properties to deal with? Making multiple <condition> tags for each property and OS combination will get old real fast. In that case, we use the built-in properties ANT supplies:

<property file="build-${os.name}-${os.version}-${os.arch}.properties" />
<property file="build-${os.name}-${os.version}.properties" />
<property file="build-${os.name}.properties" />
<property file="build.properties" />

Note the order here. Recall that once a property is defined within ANT it cannot be changed. So put the defaults in the plain build.properties and let the more specific properties files, which are loaded first, override them. Of course, you may not need to go all the way down to the OS architecture level, but now you know how.
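To make the cascade concrete: on a Linux box the four lookups might resolve to file names like these (illustrative values only; the exact strings depend on what the JVM reports for os.name, os.version and os.arch):

```
build-Linux-3.5.0-17-generic-amd64.properties
build-Linux-3.5.0-17-generic.properties
build-Linux.properties
build.properties
```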

September 22, 2012

Io Gotcha

As you are probably aware, I am working my way through Seven Languages in Seven Weeks by Bruce Tate. (And if you have ever googled basic questions on the Io language, you will know that I am not the first person to have this idea.) In any case, I am in the middle of the Io chapter, but before I get to anything specific there, I wanted to share a gotcha of Io that I encountered.

Coming from an object-oriented background (like Java) you might find yourself writing code like the following:

Gotcha := Object clone do(
    conspirators ::= list()

    conspire := method(c,
        conspirators push(c)
        return self
    )
)

walter := Gotcha clone
walter conspire("jesse")
("walter: " .. walter conspirators) println

gus := Gotcha clone
gus conspire("mike")
gus conspire("victor")

("walter: " .. walter conspirators) println
("   gus: " .. gus conspirators) println

Everything seems fine, we initialize a list and then start adding elements to it. But here is the output:

walter: list(jesse)
walter: list(jesse, mike, victor)
   gus: list(jesse, mike, victor)

Somehow the process of creating gus and adding his conspirators has caused the list of conspirators for walter to grow. What is happening here is that conspirators is a slot on Gotcha that is never overridden by the clones walter and gus. So they are all sharing the same conspirator list. (Fans of Breaking Bad will realize that this situation cannot be allowed!)

The solution (well, one solution, there are probably others) is to use the init method to set the conspirators slot:

Fixed := Object clone do(
    conspirators ::= nil

    init := method(
        // Give each clone its own list
        conspirators = list()
    )

    conspire := method(c,
        conspirators push(c)
        return self
    )
)

walter := Fixed clone
walter conspire("jesse")
("walter: " .. walter conspirators) println

gus := Fixed clone
gus conspire("mike")
gus conspire("victor")

("walter: " .. walter conspirators) println
("   gus: " .. gus conspirators) println

Now walter and gus maintain separate lists of conspirators (as Vince Gilligan intended):

walter: list(jesse)
walter: list(jesse)
   gus: list(mike, victor)

If you find yourself making these kinds of gaffes, re-read the Io style guide.

September 16, 2012

Well I am back to reading Seven Languages in Seven Weeks by Bruce Tate and am taking on the chapter on Io. If you are not familiar, Io is a prototype-based language like JavaScript. Since I typically work on the server side and only dabble in JavaScript and HTML, I am looking forward to seeing how learning Io can reflect on my knowledge of JavaScript.

The first thing to grab my attention is how slots on clones are handled. You’ll notice that the Car created from the Vehicle clone does not have a description slot listed when the slotNames message is sent to it. Also, Tate indicates that when you send the description message to Car, the message is forwarded to the prototype, Vehicle. Let’s see how that shakes out:

Io 20110905
Io> Vehicle := Object clone
==>  Vehicle_0x9be5758:
  type             = "Vehicle"

Io> Vehicle description := "Something to take you far away"
==> Something to take you far away
Io> Vehicle slotNames
==> list(description, type)
Io> Car := Vehicle clone
==>  Car_0x9c35590:
  type             = "Car"

Io> Car slotNames
==> list(type)
Io> Car description
==> Something to take you far away
Io> Vehicle description = "Something that can move you"
==> Something that can move you
Io> Car description
==> Something that can move you

Interestingly, changing the description slot on Vehicle is reflected when the description message is sent to Car. But apparently it can be overridden:

Io> Car description = "Something else entirely"
==> Something else entirely
Io> Car description
==> Something else entirely
Io> Vehicle description
==> Something that can move you

Interestingly you can use the weaker = assignment even though in one sense the description slot had not been defined on Car.
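The same lookup-then-shadow behavior falls out of JavaScript's prototype chain, for whatever that comparison is worth. This sketch is mine, not from the book:

```javascript
var Vehicle = { description: "Something to take you far away" };
var Car = Object.create(Vehicle);

// Reads are delegated to the prototype until Car defines its own slot.
console.log(Car.description);     // "Something to take you far away"

Vehicle.description = "Something that can move you";
console.log(Car.description);     // "Something that can move you"

// Assigning on Car creates an own property that shadows Vehicle's.
Car.description = "Something else entirely";
console.log(Car.description);     // "Something else entirely"
console.log(Vehicle.description); // "Something that can move you"
```

One difference from Io: JavaScript has no := versus = distinction, so plain assignment on Car silently creates the shadowing slot.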

Here’s another question: can we clone non-types, and what is the behavior? It turns out the behavior is pretty much the same, except that the prototype is listed as the prototype of the cloned object:

Io> ferrari := Car clone
==>  Car_0x9cb92d0:

Io> anotherFerrari := ferrari clone
==>  Car_0x9b61418:

Io> ferrari slotNames
==> list()
Io> ferrari color := "red"
==> red
Io> ferrari color
==> red
Io> anotherFerrari color
==> red
Io> Car color

  Exception: Car does not respond to 'color'
  Car color                            Command Line 1

Io> anotherFerrari proto
==>  Car_0x9cb92d0:
  color            = "red"

Moving on to the exercises, most are straightforward. However, following my nose led me to an interesting place when trying to execute the code in a slot given its name.

Io 20110905
Io> x := Object clone
==>  Object_0x9f878b0:

Io> x yzzy := method("plugh" println; return self)
==> method(
    "plugh" println; return self
)
Io> x yzzy
==>  Object_0x9f878b0:
  yzzy             = method(...)

Io> x getSlot("yzzy")
==> method(
    "plugh" println; return self
)
Io> x getSlot("yzzy") type
==> Block
Io> x getSlot("yzzy") call
==>  Object_0x9f13028:
  Lobby            = Object_0x9f13028
  Protos           = Object_0x9f12f58
  _                = Object_0x9f13028
  exit             = method(...)
  forward          = method(...)
  set_             = method(...)
  x                = Object_0x9f878b0

Io> x perform("yzzy")
==>  Object_0x9f878b0:
  yzzy             = method(...)

Initially I tried to get at the code via getSlot. While this worked, I ended up with a Block and then tried sending the call message to it. The code was executed, but the wrong thing was returned: somehow I ended up with the Lobby instead of x. It turned out the better approach was to use the perform method on Object, which returns the correct value.
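JavaScript has a close cousin of this gotcha: pulling a function off an object and calling it bare loses the receiver, while invoking it through the object keeps it. Again, my own sketch, not the book's:

```javascript
var x = {
  yzzy: function () {
    console.log("plugh");
    return this;
  }
};

// Called through x, 'this' is x, like perform("yzzy") in Io.
console.log(x.yzzy() === x); // true

// Extracted first, the call site supplies no receiver, much like
// sending call to the Block, where self ends up being the Lobby.
var f = x.yzzy;
console.log(f() === x);      // false
```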

September 3, 2012

Absent Code

I recently hit an error I had never seen before:

Caused by: java.lang.ClassFormatError: Absent Code attribute in method that is
        not native or abstract in class file javax/servlet/http/HttpCookie

A little research revealed that some versions of javaee.jar contain essentially just the method signatures without any bodies. This is fine for compiling against, but it can cause issues at runtime, for example when running JUnit tests.

The solution turned out to be using a JEE jar from an application server. I use JBoss, but JBoss seems to have the classes scattered across several jars, so I turned to GlassFish. GlassFish didn’t exactly give me one-stop shopping either, but it was easier to put together the implementation jar than with JBoss.

If you don’t need the jar in any particular place you can just add it to your classpath. Otherwise if you are moving it, note that the jar itself contains nothing but a manifest pointing to other jars. Make sure you copy the other jars and preserve the directory structure.

Big hat tip to mkyong at who did the leg work for me on this one:

August 30, 2012

Using ANT Hibernate Tools with Hibernate 4

Recently I have been upgrading a JEE application to the latest versions of the libraries used. In particular I was upgrading from Hibernate 3 to Hibernate 4.

In this particular application, we maintained Hibernate mapping files and used Hibernate Tools to generate the schema via an ANT task. At first glance, it seemed that Hibernate Tools had been completely absorbed into JBoss Tools for Eclipse. It also appeared that Hibernate Tools did not support Hibernate 4. However, it turned out I was able to get what I needed.

So the first trick was locating Hibernate Tools. From the download page for JBoss Tools, drill into the download link at the bottom for the version of JBoss Core Tools you want. In my case the latest was 3.3. On this page you will find separate downloads for JBoss Tools and Hibernate Tools. I downloaded the zip for Hibernate Tools.

From here you will find that the download consists of plugins for Eclipse. But if we tear apart the right one we can get to the hibernate-tools.jar file needed for the ANT task. In this case the right jar was plugins/

After exploding the jar I found the hibernate-tools.jar in the lib/tools subdirectory. Unfortunately my fun was not over. I soon ran into this problem:

java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory

I needed to include the jars from the lib/required directory as well. But we were still not quite there:

[hibernatetool] SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
[hibernatetool] SLF4J: See for further details.

c:\eng\projects\sc\build.xml:83: java.lang.NoClassDefFoundError: org/slf4j/impl/StaticLoggerBinder

Luckily the error message at the indicated URL was informative. I downloaded an implementation of slf4j and included slf4j-nop.jar in the classpath. (Note that using the slf4j-log4j jar from the lib directory caused all the useful output from the hibernatetool ANT task to be suppressed, and I did not want to set up a log4j configuration just for this. Have I mentioned how frustrating all these logging frameworks are?)

As a bonus, here’s one more issue I encountered while upgrading Hibernate Tools:

org.hibernate.MappingException: Could not determine type for: org.jasypt.hibernate4.type.EncryptedStringType

Hibernate Tools was having issues with the properties configured to be encrypted via jasypt. (An excellent way to transparently store encrypted data in a database, by the way.) The trick turned out to be defining the sql-type attribute on the column element under the property element in the mapping file.
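For reference, the change amounts to something like this in the .hbm.xml mapping. The property, column, and type-definition names here are illustrative, not from the actual project:

```xml
<!-- Without an explicit sql-type, the schema exporter cannot map
     the jasypt user type to a DDL column type on its own. -->
<property name="secret" type="org.jasypt.hibernate4.type.EncryptedStringType">
    <column name="SECRET" sql-type="varchar(255)"/>
</property>
```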

In case you were wondering, I was able to get Hibernate Tools to generate POJOs, documentation, and mapping files. I’m not using anything specific to Hibernate 4, so I can’t declare Hibernate Tools completely compatible with it, but a great deal of functionality is available if you work at it a little.

August 6, 2012

Ruby Play List Copier, Take 2

I finally got back to my little Ruby project over the weekend. The idea was to write a tool to copy an m3u play list and associated files to my mp3 player since Rhythmbox and Banshee were not up to the task. I used the ruby-taglib library from to access mp3 tags.

My first attempt was turning out a little too much like an enterprisey Java project, so I decided to back up and try to make it lighter and more Ruby-esque. I decided a module for parsing play lists would allow the best re-use of that functionality, while simple classes would represent play lists and play list entries. With the library written, the main script became the following:

#!/usr/bin/env ruby
require 'fileutils'
require './playlist-parser'

dest_dir = File::expand_path(ARGV[0])
source = PlayList.new(ARGV[1])
dest = PlayList.new(File::join(dest_dir, File::basename(source.to_s)))

source.read_playlist do |entry|
  basename = PlayListEntry::sanitize(entry.artist + ' - ' + entry.album +
    ' - ' + entry.track + ' - ' + entry.title + '.mp3')
  dest_entry = PlayListEntry.new(basename)
  dest.playlist_entries << dest_entry
  dest_file = File::join(dest_dir, dest_entry.to_s)
  if not File::exists?(dest_file) then
    puts "#{entry.source} => #{dest_file}"
    FileUtils.copy_file(entry.source, dest_file)
  else
    puts "#{dest_file} exists"
  end
end

dest.write_playlist


This script is pretty simple. It opens the given play list and iterates over the entries, creating a new play list based on the given destination directory. Each file is copied over the way Rhythmbox and Banshee do it, using the tag information to determine the file name. When we are done, the new play list is written out.

The library file is a little longer. It includes a module named PlayListParser which holds the parsing functionality (such as it is; a play list file is not really very complicated, and if you have read this far you could open one up in a text editor and figure it out, no problem). Then we have the PlayList class, which includes the parser module and provides a write_playlist method. Finally, the PlayListEntry class makes tag access convenient.

require 'taglib'

module PlayListParser

  attr_accessor :playlist, :playlist_entries

  def parse_playlist(playlist, &block)
    @playlist = playlist
    @playlist_entries = []
    save_dir = Dir::pwd
    begin
      Dir::chdir(File::dirname(playlist))
      File::open(File::basename(playlist)) do |file|
        file.readlines.each do |line|
          line = line.strip
          next if line.empty? or line[0] == '#'
          if not File.exists?(line) then
            puts "WARN: File #{line} does not exist in play list #{@playlist}"
            next
          end
          entry = PlayListEntry.new(line)
          entry.read_tags
          @playlist_entries << entry
          block.call(entry) unless block == nil
        end
      end
    ensure
      Dir::chdir(save_dir)
    end
  end

end


class PlayList

  include PlayListParser

  def initialize(playlist)
    @playlist = playlist
    @playlist_entries = []
  end

  def read_playlist(&block)
    parse_playlist(playlist, &block)
  end

  def write_playlist
    File::open(playlist, 'w') do |file|
      playlist_entries.each {|entry| file.puts(entry.to_s) }
    end
  end

  def to_s
    return @playlist.to_s
  end

end

class PlayListEntry

  def self.pad_track(track)
    return (track < 10 ? '0' + track.to_s : track.to_s)
  end

  def self.sanitize(source)
    return source.gsub(/[":\?]/, '_')
  end

  attr_accessor :source, :album, :artist, :comment, :genre, :title, :track, :year

  def initialize(source)
    @source = source
  end

  def read_tags
    if File.exists?(source) then
      TagLib::FileRef.open(source) do |fileref|
        tag = fileref.tag
        @album = tag.album
        @artist = tag.artist
        @comment = tag.comment
        @genre = tag.genre
        @title = tag.title
        @track = PlayListEntry::pad_track(tag.track)
        @year = tag.year unless tag.year == 0
      end
    end
  end

  def to_s
    return @source.to_s
  end

end

One drawback of the parser is its use of the current working directory to handle relative paths in the play list file. This construct makes the parse_playlist method not thread-safe. (I can’t help but think about these things after working on servers, but I left it that way since this is supposed to be a simple script.)

In the end I learned a few useful things along the way: the difference between sub and gsub, some of the characters that Rhythmbox and Banshee escape when making file names, and how to split a Ruby project across more than one file. And I ended up with something I can actually use. All in all, a successful excursion into Ruby.

August 2, 2012

Ruby Play List Copier, Take 1

So I finally got back to my Ruby play list project. The next mini-goal was to parse a play list file and print out the converted file names. I created a PlayListEntry class and a PlayList class, and things were moving along very well:

require 'taglib'

class PlayListEntry

  SEPARATOR = " - "
  SUFFIX = ".mp3"

  attr_accessor :source, :dest, :dest_dir

  def initialize(source)
    @source = source
  end

  def set_dest_dir(dest_dir)
    @dest_dir = dest_dir
  end

  def determine_dest
    TagLib::FileRef.open(source) do |fileref|
      tag = fileref.tag
      if not tag then
        puts "No tags for #{source}"
      else
        basename = tag.artist + SEPARATOR + tag.album + SEPARATOR +
          tag.track.to_s + SEPARATOR + tag.title + SUFFIX
        if dest_dir then
          @dest = File.join(dest_dir, basename)
        else
          @dest = basename
        end
      end
    end
  end

end


class PlayList

  attr_accessor :playlist, :entries

  def initialize(playlist)
    @playlist = playlist
  end

  def read_playlist
    @entries = []
    File.open(playlist) do |file|
      file.readlines.each do |line|
        line = line.strip
        next if line.empty? or line[0] == '#'
        if File.exists?(line) then
          entry = PlayListEntry.new(line)
          entry.determine_dest
          @entries << entry
        else
          puts "File #{line} does not exist"
        end
      end
    end
  end

end

playlist = PlayList.new(ARGV[0])
playlist.read_playlist
playlist.entries.each {|x| puts x.dest }

That’s when I realized I am pretty much just writing Java-style code in Ruby. I need to change this up and try to make it more Ruby-like. Possibilities include using modules or code blocks. We’ll see where it goes.