Lab Notes

Things I want to remember how to do.

JAX-RS Microservices

August 19, 2017

An example project where we take a Java library (java-xirr) and expose it as a REST service using JAX-RS. The service is then dockerized using the Maven dockerfile plugin. On the client side, we create a separate client served by the Node.js connect server in order to illustrate various issues with CORS when consuming REST services.

The project is available on GitHub as rest-xirr.

The REST service

Configuration

Turning a library into a REST service using JAX-RS in Java EE 7 is pretty easy using the @ApplicationPath, @Path, @GET and @POST annotations. This has been covered extensively elsewhere, so I won’t go into details here.
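For completeness, activating JAX-RS amounts to a single empty subclass of Application. The class name here is illustrative, not taken from the project:

```java
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

// Activates JAX-RS and mounts all @Path-annotated resources
// under the given path
@ApplicationPath("/")
public class RestApplication extends Application {
    // Intentionally empty: the container discovers the resources itself
}
```
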

We want our REST service to use JSON to interact with the client, so we add @Consumes and @Produces annotations to our XirrService.xirr() method (full source):

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public XirrResult xirr(TxRecord[] records) {
        try {
            // Convert TxRecords into Transactions
            final List<Transaction> tx = Stream.of(records)
                .map(TxRecord::toTransaction)
                .collect(Collectors.toList());
            final double xirr = new Xirr(tx).xirr();
            // Wrap result in result object
            return new XirrResult(xirr);
        } catch (IllegalArgumentException iae) {
            // Convert IAEs thrown by Xirr into ServiceExceptions for the
            // exception mapper
            throw new ServiceException(iae);
        }
    }

The JAX-RS implementation will handle consuming the JSON and converting it to an array of TxRecord instances. It will also convert the return value, XirrResult, into JSON for the client.

JSON Details

Let’s drill into the rest of the xirr() method. First, you may notice that we are using a new class, TxRecord, for the input instead of the Transaction class provided by the xirr library. The reason is that the JSON deserializer provided by JAX-RS will not work with immutable classes, so TxRecord is a mutable wrapper around Transaction.
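A minimal sketch of what such a mutable wrapper might look like — the field names follow the JSON used in the curl examples, but the actual project source may differ:

```java
// Hypothetical sketch of a mutable, JSON-friendly record; the real
// TxRecord converts itself into the xirr library's immutable Transaction
public class TxRecord {
    private double amount;
    private String when;

    // JSON deserializers need a no-arg constructor...
    public TxRecord() {
    }

    // ...and mutable properties exposed via getters and setters
    public double getAmount() { return amount; }
    public void setAmount(double amount) { this.amount = amount; }

    public String getWhen() { return when; }
    public void setWhen(String when) { this.when = when; }
}
```

The toTransaction() method seen in the stream above would then build the immutable Transaction from these fields.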

For the return value, we could have just returned a double, and that would have been fine, but instead we created a wrapper object to see the JSON serialization in action.

Error Handling

Finally, you might notice we are catching the IllegalArgumentExceptions thrown by the xirr library and converting them into a custom exception, ServiceException. This allows us to define an ExceptionMapper to handle the errors (full source):

@Provider
@Singleton
public class ServiceExceptionMapper implements ExceptionMapper<ServiceException> {

    @Override
    public Response toResponse(ServiceException exception) {
        // Send the exception details to the client
        // A production version would probably use a custom error object
        return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
            .entity(exception)
            .build();
    }

}

The result is that when an IllegalArgumentException is thrown by the xirr library, it is converted into a ServiceException by the XirrService and then the above ExceptionMapper implementation kicks in. So instead of the default HTML page served by WildFly when an exception is thrown, the client gets the JSON encoding of the exception.
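The ServiceException itself need be little more than an unchecked wrapper around the original cause. A sketch — the project’s actual class may carry additional detail:

```java
// Hypothetical minimal version: preserve the cause and its message so the
// exception mapper has something meaningful to serialize for the client
public class ServiceException extends RuntimeException {
    public ServiceException(Throwable cause) {
        super(cause.getMessage(), cause);
    }
}
```
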

Originally I had the ExceptionMapper implementation extending ExceptionMapper<Exception>, but that ended up being too broad. When the client was sending the CORS preflight check (OPTIONS HTTP method), WildFly was throwing an exception in the process of generating the default OPTIONS reply required by the JAX-RS specification. Then there was an additional issue with the mapping, and the end result was that the preflight check always failed and the following stack trace was emitted:

13:50:06,423 ERROR [io.undertow.request] (default task-52) UT005023: Exception handling request to /xirr: org.jboss.resteasy.spi.UnhandledException: org.jboss.resteasy.core.NoMessageBodyWriterFoundFailure: Could not find MessageBodyWriter for response object of type: org.jboss.resteasy.spi.DefaultOptionsMethodException of media type: application/octet-stream
	at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:187)
	at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:206)
	at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:221)
	at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:56)
	at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.service(HttpServletDispatcher.java:51)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
// trimmed here for brevity
Caused by: org.jboss.resteasy.core.NoMessageBodyWriterFoundFailure: Could not find MessageBodyWriter for response object of type: org.jboss.resteasy.spi.DefaultOptionsMethodException of media type: application/octet-stream
	at org.jboss.resteasy.core.ServerResponseWriter.writeNomapResponse(ServerResponseWriter.java:66)
	at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:183)
	... 42 more

Dockerization

Finally, what microservice would be complete without a Docker image? The Dockerfile is simple and uses WildFly as the application server, but there is no reason any other compliant server could not be used (full source):

FROM jboss/wildfly:latest
ADD target/*.war /opt/jboss/wildfly/standalone/deployments/

The dockerfile Maven plugin from Spotify has been integrated in order to build an image from the Dockerfile. To build and run a Docker container for the service:

$ mvn clean install dockerfile:build
$ docker run -p 8080:8080 --name xirr-rest decampo/xirr-rest:1.0.0-SNAPSHOT

Note that in order for the dockerfile plugin to work, you must have Docker listening on port 2375 over TCP. If this is an issue, the Docker image may of course still be built with the docker command, as long as you have first built xirr.war using Maven.
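For reference, the plugin section of the pom.xml looks roughly like the following — the version number here is an illustrative guess, so check the project’s actual pom.xml:

```xml
<plugin>
  <groupId>com.spotify</groupId>
  <artifactId>dockerfile-maven-plugin</artifactId>
  <!-- Version is illustrative; use the one from the project pom -->
  <version>1.3.4</version>
  <configuration>
    <repository>decampo/xirr-rest</repository>
    <tag>${project.version}</tag>
  </configuration>
</plugin>
```
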

Configuring Docker on Fedora

This should probably be a separate post, but very quickly, let me describe how to configure Docker on Fedora to listen on port 2375. This should not be done lightly, however, as it is a local privilege escalation risk on the system.

First, create or edit the /etc/docker/daemon.json file to contain:

{
    "hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2375"]
}

Then in bash:

# Create a docker group
$ sudo groupadd docker
# Add yourself to the group
$ sudo gpasswd -a ${USER} docker
# Restart the docker service
$ sudo systemctl restart docker
# Refresh your groups (or log out and log back in for GUI applications)
$ newgrp docker

Testing with curl

Once the container is up and running we can test it with curl:

# Test the ping service:
$ curl http://localhost:8080/xirr

# Test the xirr service:
$ curl -H "Content-type: application/json" -X POST \
       -d '[ { "amount":-1000, "when":"2017-01-01" }, 
             { "amount": 1100, "when":"2018-01-01" } ]' \
       http://localhost:8080/xirr/

# Test the xirr service and force an error condition:
$ curl -H "Content-type: application/json" -X POST \
       -d '[ { "amount": 1000, "when":"2017-01-01" }, 
             { "amount": 1100, "when":"2018-01-01" } ]' \
       http://localhost:8080/xirr/

REST Client

We could have built the client inside the war for the service, but that’s not a realistic scenario for actual deployment. So instead we build a client that is served from a separate server and tries to contact the xirr REST service directly. That puts us directly into the crosshairs of Cross-Origin Resource Sharing, aka CORS.

CORS HTTP Headers

Normally, for security reasons, a web browser will not allow JavaScript code to connect to a server other than the one that served the code. CORS is a scheme concocted by the browser makers to allow a server to indicate that it accepts connections from other domains. Keep in mind that CORS is enforced by the client, so you can’t really rely on it to supply security for the server.

To allow arbitrary clients to connect to our xirr REST service, we need it to satisfy a couple of requirements. First, it should respond properly to requests made via the HTTP OPTIONS method. As noted above, the JAX-RS specification requires the implementation to generate a proper response for OPTIONS requests if the application does not specify a different one. So we are fine in that respect.

Second, the service needs to set a number of HTTP headers to specify the required security settings to the client. For this we define a JAX-RS response filter which will be applied to every response from our service (full source):

@Provider
public class CorsHeadersFilter implements ContainerResponseFilter {

    @Override
    public void filter(
        final ContainerRequestContext requestContext,
        final ContainerResponseContext responseContext) throws IOException {
        final MultivaluedMap<String, Object> headers =
            responseContext.getHeaders();
        // Following allows for all clients
        headers.add("Access-Control-Allow-Origin", "*");
        headers.add("Access-Control-Allow-Headers", ALLOWED_HEADERS);
        headers.add("Access-Control-Allow-Credentials", "true");
        headers.add("Access-Control-Allow-Methods", ALLOWED_METHODS);
        // Set the length of time in seconds the preflight check may be cached
        // Use 1 second here for development purposes
        headers.add("Access-Control-Max-Age", "1");
    }
}

The @Provider annotation indicates to JAX-RS that this should be used whenever a ContainerResponseFilter is needed.

In this case ALLOWED_HEADERS is "Authorization, Content-Type". You may include any header you want, including custom headers. If a header is included in the request that is not whitelisted, the request is disallowed (by the browser). A list of automatically whitelisted headers is available at https://fetch.spec.whatwg.org/#cors-safelisted-request-header.

For ALLOWED_METHODS, I’ve used "DELETE, GET, HEAD, OPTIONS, POST, PUT".

We can test out the CORS configuration using curl:

# Test preflight request
$ curl -i -X OPTIONS http://localhost:8080/xirr

# Test headers on normal request:
$ curl -D - http://localhost:8080/xirr

Simple Node.js web server

For hosting the REST client, we use connect, a simple Node.js web server, and run it using grunt via the grunt-contrib-connect plugin (full source of Gruntfile.js):

module.exports = function(grunt) {
    'use strict';
    
    grunt.loadNpmTasks('grunt-contrib-connect');
    
    grunt.initConfig({});

    // Simple HTTP server to host the xirr client
    // Runs on port 8000 by default
    grunt.config('connect', {
        options: {
            base: 'www',
            keepalive: true
        },
        'default': {}
    });
};

Then the client server (ugh, a server for the client?) can be started with the command grunt connect. The keepalive option ensures the server continues to run until interrupted.

JavaScript REST Client

For the JavaScript REST client, an ES6 class is created, XirrClient. Class methods are created corresponding to the two service methods on the XirrService Java class, ping() and xirr().

The ping() method uses fetch() (see Using Fetch on MDN for more details) in its simplest form (full source):

    ping() {
        return fetch('http://localhost:8080/xirr').then(function(response) {
            console.log(response);
            return response.text();
        });
    }

The fetch() method returns a Promise object with a Response payload. The ping() method converts that into a Promise with a string payload for the caller.

The xirr() method uses fetch() in a more complicated way, since we need to deliver a JSON payload via POST for the xirr REST endpoint (full source):

    xirr() {
        return fetch('http://localhost:8080/xirr', {
            method: 'POST',
            headers: {'Content-Type': 'application/json'}, 
            body: JSON.stringify(this.txs)
        }).then(function(response) {
            console.log(response);
            return response.json();
        });
    }

Note that using this form of fetch() allows us to send custom headers, including, if desired, an Authorization header for OpenID.

Another interesting aspect is that the Promise returned by fetch() does not reject based on the HTTP status code of the response. In other words, when status code 500 is returned by our xirr REST service because of invalid input, we still end up in the normal success handler.

In this case we convert the Promise payload into the parsed JSON object returned by the server. So the xirr() method returns an object with a xirr property on success and a message property (from the serialized exception) when the xirr library throws an IllegalArgumentException.

You can see the client in action in the index.js file (full source):

    client.clear()
      .add(-1000, "2017-01-01")
      .add( 1100, "2018-01-01")
      .xirr().then(function(result) {
        console.log(result);
        document.getElementById('output').innerHTML = 
            result.xirr ? formatter.format(result.xirr) : result.message;
    }).catch(errorHandler);

Summary

So, we have covered quite a bit here. It probably should have been a couple of blog posts, but in the end everything is so cross-referenced that I felt it worked better as a whole.

We wrapped a Java library with REST using JAX-RS and then dockerized the result. The REST service was configured to allow for CORS requests. Then we created a client from browser-based JavaScript land which consumes the service.


All Posts

Connect to Microsoft VPN from Ubuntu 16.04 Xenial Xerus

August 21, 2016

I recently upgraded from Ubuntu 14.04 on my main desktop machine and discovered that my VPN connection to Windows 2008 Server no longer worked. The bugaboo turned out to be the routing table which no longer requires a gateway entry. Here I have rewritten my post originally written for 12.10 to reflect the new configuration.


Complicated String Joins

April 4, 2016

So if you have not been living under a rock and have used Java 8, you are surely aware of the new String.join() method and the Collectors.joining() method for concatenating arrays or streams of Strings. Sometimes, however, a simple concatenation with a delimiter is not quite up to the job.
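As a refresher, the two basic forms look like this (a quick sketch):

```java
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class JoinDemo {
    public static void main(String[] args) {
        // Joining an array (or varargs) of Strings with a delimiter
        String joined = String.join(", ", "red", "green", "blue");

        // Joining a stream of Strings with a delimiter
        String collected = Stream.of("red", "green", "blue")
            .collect(Collectors.joining(", "));

        System.out.println(joined);     // red, green, blue
        System.out.println(collected);  // red, green, blue
    }
}
```
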


Creating a custom Collector

March 28, 2016

Creating a custom implementation of java.stream.Collector seems daunting at first, but once you give it a try, you'll see that it can actually be pretty easy.

If you are not used to using lambdas and functional concepts, your first look at the Collector interface will be intimidating. From a pre-Java 8 perspective, there are four interfaces to implement to create a custom Collector implementation: java.util.function.BiConsumer, java.util.function.BinaryOperator, java.util.function.Function and java.util.function.Supplier. Fortunately they are all functional interfaces which will allow us to take some shortcuts with lambdas and functional expressions.

The example I chose to implement is a Collector for SetValuedMap from the Apache Commons Collections project. We’d like a static method, similar to the standard Collectors.toMap() method, which will generate a Collector instance yielding a SetValuedMap implementation.

There are a lot of moving parts to the Collector in terms of generics. First, we will have generic parameters for the type in the existing stream <T>, the type of the key in the map <K> and the type of the values in the map <V>. Then we need to identify the three generic parameters to the Collector interface: <T> is the same as before, the type of objects in the stream; <A>, the accumulation type will be SetValuedMap<K,V>; and <R>, the result type, will also be SetValuedMap<K,V>. It is frequently the case with Collectors that <A> and <R> are the same.

Now that we have our generic ducks in a row, we can start figuring out our Collector implementation. For the supplier, we can use a constructor of a SetValuedMap implementation, e.g. HashSetValuedHashMap::new.

The accumulator will be a lambda function taking in a map and a stream object; it will need to put the stream object in the map. For that we will need to pass in functions converting the stream objects to keys and values respectively (just like in Collectors.toMap()).

The combiner will need to accept two maps and return a map containing entries from both. Again this can be specified with a lambda function.

We don't need anything beyond the identity function for the finisher, which means we are ready to create our Collector implementation using the Collector.of() factory method:
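To keep this sketch dependency-free, here is the same shape of factory method using a plain Map<K, Set<V>> in place of SetValuedMap — the method name is mine, and the actual SetValuedMap version lives in the project's MoreCollectorsTest.java:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collector;

public class MoreCollectors {
    // Supplier, accumulator and combiner as described above; the finisher
    // defaults to the identity function with this overload of Collector.of()
    public static <T, K, V> Collector<T, ?, Map<K, Set<V>>> toSetMap(
            Function<? super T, ? extends K> keyMapper,
            Function<? super T, ? extends V> valueMapper) {
        return Collector.of(
            HashMap::new,                                        // supplier
            (map, t) -> map                                      // accumulator
                .computeIfAbsent(keyMapper.apply(t), k -> new HashSet<>())
                .add(valueMapper.apply(t)),
            (left, right) -> {                                   // combiner
                right.forEach((k, vs) -> left
                    .computeIfAbsent(k, key -> new HashSet<>()).addAll(vs));
                return left;
            });
    }
}
```

Passing Function.identity() as the valueMapper then yields the duplicate-eliminating groupingBy() variant mentioned in the text.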

This method can be used to create a version of Collectors.groupingBy() which eliminates duplicates, simply by passing Function.identity() for the valueMapper.

If we wanted to get really fancy we could also pass in a Comparator to use with our set, but I will leave that as an exercise for the reader.

See the file MoreCollectorsTest.java for some examples of these methods in action.


jQuery UI Initialization Issues

February 14, 2015

Every once in a while when using jQuery UI I notice that my widget is not being initialized smoothly. For example, when using the menu widget I might see the list items behind the menu briefly. Then the page jumps around and it is a generally unpleasant experience for the user. Who likes it when your jQuery widgets are jumpy and cause the page to feel like one of those old 'punch the monkey' banner ads?


WildFly 8.2.0 and Java 8 on CentOS 6.6

January 7, 2015

This post documents how to install and configure WildFly 8.2.0 on CentOS 6.6. We make some changes to move some of the configuration under /etc and also place log files under /var/log and temp files under /tmp just like a well-behaved POSIX application should.

We’ll start by installing Java; download your desired version as an RPM from Oracle and install:


Network issues with CentOS 6.x under Hyper-V

January 6, 2015

I had a number of CentOS instances running under Hyper-V using the standard virtual network interface via integration services. Post-CentOS 6.3, integration services are included in the install, and that has been my experience. However, at some point after updating to 6.6, something caused the integration services to stop working with the virtual network interface.

To fix, remove the Network Adapter from the Hyper-V configuration and replace it with a Legacy Network Adapter.

Now you need to boot the virtual machine and access it from the Hyper-V console in order to log in. Back up the file /etc/udev/rules.d/70-persistent-net.rules and remove it from the /etc/udev/rules.d directory. This file tells CentOS to look for the old network interface and assign it to eth0.

Second, back up the /etc/sysconfig/network-scripts/ifcfg-eth0 file. Edit the file and remove any reference to UUID or HWADDR.


Liferay 6.2 Portal on JBoss 7.2.0

November 10, 2013

Liferay is a JSR-286 (also known as Portal 2.0) compliant portal (and a whole lot more). Since I am in the market for a portal server for an upcoming project, I figured I needed to check Liferay out. The folks at Liferay have bundled version 6.2 with a number of different open-source application servers, including JBoss 7.1.1, but what fun would it be to simply download a bundle? Liferay is also available as a war download for deployment on existing (and closed-source) application servers. So let’s see if we can get Liferay running on the JBoss 7.2.0 server we built previously.


Packaging GateIn JavaScript Modules in Separate WARs

November 3, 2013

I have been messing around with the GateIn Portal Server in order to evaluate it for an upcoming project. One nice aspect of the portal is the way JavaScript is handled. JavaScript in GateIn is split into modules and managed via the RequireJS library. This allows the portlet developer to keep their JavaScript isolated and only include the dependencies they require. It also allows for re-use of modules defined in one portlet in other portlets. It doesn’t take a lot of imagination to picture the disaster JavaScript could become on a portal which doesn’t provide isolation and re-use, especially if multiple organizations are providing portals.


Building GateIn 3.6.3

October 27, 2013

GateIn is a JSR-286 (also known as Portal 2.0) compatible portal based on the JBoss application server (soon to be known as WildFly). The latest version of GateIn with a provided build is GateIn 3.6.0 and is bundled with JBoss 7.1.1. (There is also a beta build of Red Hat JBoss Portal 6.1.0 which bundles GateIn with JBoss EAP and thus carries a more restrictive license.) Note that GateIn is bundled with the application server and is not an add-on to an existing application server.


Build and Install JBoss 7.2.0 on CentOS 6.4

October 20, 2013

As you may or may not be aware, a great many changes have been happening to the JBoss application server since being purchased by Red Hat. Most importantly, the application server is being rebranded as WildFly for version 8. ("Why?" is the first entry on the WildFly FAQ if you are curious.) But since WildFly is not quite ready as of this writing (but looks real close), we are going to deal with the latest community release, JBoss AS 7.2.0.


Trac Plugins vs Python Versions

September 20, 2013
Just a quick and dirty reminder that when you are trying to make Trac plugins work, you need to make sure you are using an egg built for the correct Python version. For example, I recently wanted to add the RegexLinkPlugin to one of my Trac instances. However, when I downloaded the provided egg and installed it, nothing happened. Upon further inspection, I realized that the egg was for Python 2.5 and I was running 2.6 on my server. I downloaded the source, recompiled, and was rewarded with a (working) Python 2.6 egg.

By the way, the RegexLinkPlugin is a nice way to map wiki text to links in your Trac installation.  For example, I am using it on an IT wiki where each server has its own page but the servers are all referred to by all lowercase hostnames.  Using RegexLinkPlugin I can map each hostname to link to the page for the server with no extra wiki markup, ensuring my users always generate the links when using the hostnames.

Joining a CentOS server to Active Directory

September 8, 2013

As the number of CentOS (or Red Hat) machines in your environment grows, you begin to appreciate the need for a central login mechanism. Most workplaces already have such a login for their Windows workstations in the form of an Active Directory domain. By joining your CentOS machines to the Active Directory domain, you allow users to log in with the same credentials as on their Windows machines. Furthermore, you do not need to add or remove users as people join or leave the team.


Additional AJP connectors within SELinux environment

June 9, 2013

I recently went through the exercise of adding an additional JBoss application server to a production CentOS 6.4 server. The two applications were to be hosted on the same machine using virtual name servers to distinguish requests. I have covered virtual name servers before in my post on installing Trac on CentOS 6. Multiple instances of JBoss can be made to play nice on the same server by shifting the ports via the switch -Djboss.service.binding.set=ports-01.


Development Mail Server

March 24, 2013

If you have ever needed to develop a web application you have probably needed to send email from the application. Frequently in the applications I work on, we end up using the email address as the login, which means we need lots of email addresses for unit test and integration testing, etc. Using real email addresses for this is not very convenient, so we have used a separate development mail server. Ideally this mail server will accept mail from any development machine, but will not relay mail to any real addresses (so it won’t be an open relay).


Trac on CentOS 6.3, Part 1

January 20, 2013

Recently I had the need to set up a Trac instance on a 64-bit machine running CentOS 6.3. For CentOS and Red Hat 5, someone has done the hard work already and set up RPM files (see the Trac documentation on RHEL 5 and Dag Wieers RPM repository for details) making installing Trac as easy as yum install trac. Unfortunately, our benefactors have not gotten to RHEL 6 yet so I needed to do it myself.


SVN: E200031: attempt to write a readonly database

January 13, 2013

Just a quick and dirty note to solving the following error from Subversion:

SVN: E200031: attempt to write a readonly database.

I found the answer on Tor Henning Ueland’s blog under the post http://h3x.no/2010/12/04/svn-gives-attempt-to-write-a-readonly-database-error. The solution was to fix the permissions on the db/rep-cache.db file in the Subversion repository. See his blog if you need more details. Thanks, Tor.


Nvidia Overscan in Ubuntu 12.10

January 6, 2013

A few weeks ago I upgraded my HTPC to Ubuntu 12.10 and was treated to a nasty surprise: the overscan settings for the nvidia driver were no longer recognized. The HTPC is connected to my television (naturally) which is a 40" LG LCD HDTV. If you have ever tried to connect your PC to an HDTV before, you probably encountered the problem that the visible portion of the screen is smaller than the drawable portion of the screen. The result is that the edges of the screen are not visible. In my case that meant the dash and the universal menu of Unity could not be seen. That makes for a less than usable experience.


Ubuntu 12.10: Connect to Microsoft VPN

December 23, 2012

I recently upgraded to Ubuntu 12.10 on my main desktop machine from scratch, which means a number of things which had been installed and configured need to be re-done. One of those things is my VPN connection to work, which runs Windows 2008 Server for VPN.


Installing Ubuntu 12.10 on an SSD, Part 3

December 9, 2012

Recently I took the plunge and put an SSD drive into my desktop. Since I needed to re-install the OS, I figured I would install the latest Ubuntu, version 12.10. I went over my trials and tribulations of getting the OS installed in part 1, and dealt with swap in part 2. Today we finish up the tweaks for the SSD.


Installing Ubuntu 12.10 on an SSD, Part 2

December 5, 2012

Recently I took the plunge and put an SSD drive into my desktop. Since I needed to re-install the OS, I figured I would install the latest Ubuntu, version 12.10. I went over my trials and tribulations of getting the OS installed in part 1, today we are going to talk about some changes I made afterwards to support the SSD.


An Io Guessing Game

September 30, 2012

So when I get the chance I am working through Bruce Tate’s Seven Languages in Seven Weeks (it might end up being seven years in my case) and commenting on it when the mood strikes. I am currently working through the exercises for Io Day 2 and today I was contemplating the final exercise.


OS-specific ANT properties

September 27, 2012

The ANT build tool for Java does a pretty decent job of abstracting away OS concerns from your build script. E.g., file paths can always be represented using the / separator and there are tasks for all the typical file system and build operations.


Io Gotcha

September 22, 2012

As you are probably aware, I am working my way through Seven Languages in Seven Weeks by Bruce Tate. (And if you have ever googled basic questions on the Io language, you will know that I am not the first person to have this idea.) In any case, I am on Day of Io, but before I get to anything specific there, I wanted to share a gotcha of Io that I encountered.


Well I Am Back To Reading Seven

September 16, 2012

Well I am back to reading Seven Languages in Seven Weeks by Bruce Tate and am taking on the chapter on Io. If you are not familiar, Io is a prototype-based language like JavaScript. Since I typically work on the server side and only dabble in JavaScript and HTML, I am looking forward to seeing how learning Io reflects on my knowledge of JavaScript.


Ruby Play List Copier, Take 1

August 3, 2012

So I finally got back to my Ruby play list project. The next mini-goal would be to parse a play list file and print out the converted file names. I created a PlayListEntry class and a PlayList class and things were moving along very well:


Deleting old files on Windows

July 16, 2012

I ran into a situation today where I wanted to script deletion of folders older than a set number of days on an old Windows 2000 machine. (The culprit is a commercial SMTP spam and virus filter that does not clean up after itself when it updates. Eventually the drive gets full and no mail comes through.) I found a solution using forfiles, but this version of Windows does not have it. I found myself searching the web and gnashing my teeth over the limitations of Windows batch scripting.


Welcome

July 14, 2012
I created this blog for two purposes.  First, to capture any random notes on problems I have encountered (and hopefully solved).  Hopefully next time it happens, I'll remember the blog post or Google will remember it for me.