Blog

Undertow Websocket Client in Java SE

Undertow is a web server written in Java by JBoss. It uses a very modular architecture that lets developers pick and choose the features they need, so it fits anywhere from a full Java EE web server (WildFly) to being embedded in a Java SE application (the topic of this blog). Actually, this post isn’t about a web server AT ALL! This post is about using Undertow’s WebSocket library in a JavaFX application so that a server can push real-time messages to it.

I’ve been working on a side project to make managing my GitHub notifications easier. The general idea is that a server will record my notifications, apply basic filtering to them, and then show me notifications ranked “Low”, “Medium”, and “High” priority. The UI of this application is provided by JavaFX and runs standalone on the user’s system. A brief demo can be found on YouTube.

I originally tried the WebSocket implementation by TooTallNate, but it wouldn’t work correctly in my situation. I settled on the Undertow implementation mostly because I work for Red Hat and can harass the developers on internal channels; however, this wasn’t needed.

The Undertow page mostly deals with server-to-server communication, and the only WebSocket client example I could find that spoke to my situation was in a single unit test. Still, I was able to cargo-cult my way to a mostly working project, though I need to learn more about Xnio.

WebSocketProvider

The first thing I had to do was configure an XnioWorker. This static block is mostly cargo-culted, so I would refer to the official Xnio site before putting it into production.

    private static XnioWorker worker;

    static {
        try {
            worker = Xnio.getInstance().createWorker(OptionMap.builder()
                    .set(Options.WORKER_IO_THREADS, 2)
                    .set(Options.CONNECTION_HIGH_WATER, 1000000)
                    .set(Options.CONNECTION_LOW_WATER, 1000000)
                    .set(Options.WORKER_TASK_CORE_THREADS, 30)
                    .set(Options.WORKER_TASK_MAX_THREADS, 30)
                    .set(Options.TCP_NODELAY, true)
                    .set(Options.CORK, true)
                    .getMap());
        } catch (IOException | IllegalArgumentException ex) {
            Logger.getLogger(WebsocketProvider.class.getName()).log(Level.SEVERE, null, ex);
            throw new RuntimeException(ex);
        }
    }

Next, I created a static method to create a WebSocketClient instance and connect to my server. Because I am using self-signed certificates as part of testing, I implemented a method to create an appropriate socket.

    public static WebSocketChannel getWebsocketClient(URI serverURI, String bearerToken, EventBus bus) {
        try {


            WebSocketClient.ConnectionBuilder builder = WebSocketClient.connectionBuilder(worker, new DefaultByteBufferPool(false, 2048), serverURI);
            builder.setClientNegotiation(new WebSocketClientNegotiation(null, null){
                @Override
                public void beforeRequest(Map<String, List<String>> headers) {
                    headers.put("Authorization", Lists.newArrayList("bearer " + bearerToken));
                }
            });

            if ("wss".equals(serverURI.getScheme())) {
                setupSSLSocket(builder);
            }

            WebSocketChannel channel = builder.connect().get();
            channel.resumeReceives();

            channel.getReceiveSetter().set(new AbstractReceiveListener() {
                @Override
                protected void onFullTextMessage(WebSocketChannel channel, BufferedTextMessage message) throws IOException {
                    bus.post(UPDATE_NOTIFICATIONS);

                }

                @Override
                protected void onText(WebSocketChannel webSocketChannel, StreamSourceFrameChannel messageChannel) throws IOException {
                    super.onText(webSocketChannel, messageChannel);
                    bus.post(UPDATE_NOTIFICATIONS);
                }

                @Override
                protected void onError(WebSocketChannel channel, Throwable error) {
                    super.onError(channel, error);
                    Logger.getLogger(WebsocketProvider.class.getName()).log(Level.SEVERE, error.getMessage(), error);

                }
            });

            return channel;
        } catch (IOException | CancellationException ex) {
            Logger.getLogger(WebsocketProvider.class.getName()).log(Level.SEVERE, null, ex);
            throw new RuntimeException(ex);
        }

    }
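Calling this from the JavaFX side then looks something like the following sketch (the URI and token here are placeholders, not the project’s real values):

    EventBus bus = new EventBus();
    WebSocketChannel channel = WebsocketProvider.getWebsocketClient(
            URI.create("wss://localhost:8443/notifications"), "my-bearer-token", bus);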

There are three pieces of code to pay special attention to: channel.resumeReceives(), the WebSocketClientNegotiation implementation, and the AbstractReceiveListener implementation. The first is necessary to receive messages from the server (I don’t know why; perhaps someone from Undertow can shed some light). The second adds a Bearer token Authorization header so the server can authenticate the user with Keycloak. The last is the actual handler for messages from the server. Currently it posts the message to an EventBus that various components are subscribed to.
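As a minimal sketch of the consuming side (the class name and reload logic are stand-ins, not the actual project code), a Guava EventBus subscriber just registers itself and annotates a handler method:

    import com.google.common.eventbus.EventBus;
    import com.google.common.eventbus.Subscribe;

    public class NotificationsView {

        // Mirrors the constant the receive listener posts (statically imported there).
        static final String UPDATE_NOTIFICATIONS = "UPDATE_NOTIFICATIONS";

        public NotificationsView(EventBus bus) {
            bus.register(this); // subscribe this component to the shared bus
        }

        @Subscribe
        public void onMessage(String message) {
            if (UPDATE_NOTIFICATIONS.equals(message)) {
                // hop to the JavaFX application thread before touching the UI
                javafx.application.Platform.runLater(this::reloadNotifications);
            }
        }

        private void reloadNotifications() {
            // fetch the latest notifications from the server and redraw the list
        }
    }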

There we have it! A very VERY simple WebSocket client for my JavaFX application. If you want to play with it, you can find the server and client source code on my GitHub.

Posted in Blogroll

Reporting Gradle Builds using WebSockets

If you have a build server, you might want to receive reporting from your builds. Many build bots offer this kind of reporting, but I decided to implement it myself in a standard Gradle build script. I override the default logger and replace it with one that writes all of the logging to a WebSocket. I have also created a very simple Java EE service which consumes logging messages and rebroadcasts them to a different WebSocket using JMS¹.

Gradle Configuration

buildscript {
    dependencies {
        /* I'm using the Tyrus libraries for my WebSocket client.
           Because logging is part of the build and not the project,
           they must be declared as classpath dependencies inside the
           buildscript.dependencies stanza.
        */
        classpath 'org.glassfish.tyrus:tyrus-client:1.+'
        classpath 'org.glassfish.tyrus:tyrus-server:1.+'
        classpath 'org.glassfish.tyrus:tyrus-container-grizzly:1.+'
    }
}
//Now we begin the setup for the WebSocket Logger
import org.glassfish.tyrus.client.*;

gradle.useLogger(new WebSocketLogger());

class WebSocketLogger implements org.gradle.api.logging.StandardOutputListener {

    def manager = ClientManager.createClient();
    def session = manager.connectToServer(WebSocketLoggerClientEndpoint.class, java.net.URI.create("ws://localhost:8080/log_viewer/logging"));

    // useLogger replaces the default logging.  I am writing to a tmp file for debugging purposes.
    def tmp = new File('/tmp/log.txt');

    @Override
    void onOutput(CharSequence charSequence) {
        tmp.append(charSequence +"\n");
        session.basicRemote.sendText(charSequence.toString());
    }

    @javax.websocket.ClientEndpoint
    class WebSocketLoggerClientEndpoint {

        @javax.websocket.OnMessage
        public void processMessageFromServer(String message, javax.websocket.Session session) {
            tmp.append(message +"\n");
        }

        @javax.websocket.OnError
        public void handleError(javax.websocket.Session session, Throwable thr) {
            tmp.append('Err' + thr.message +"\n");
        }
    }
}

allprojects {
    repositories {
        jcenter()
    }
}

Java EE Server

The server side was more complex because of how CDI and WebSockets interact in Java EE 7. The code is really simple and benefits much more from browsing on GitHub than from snippets here. You may view the server source here: https://github.com/secondsun/log_viewer².

All this code does is take messages sent to the socket found at “ws://localhost:8080/log_viewer/logging” and rebroadcast them to “ws://localhost:8080/log_viewer/read”.
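To sketch the rebroadcast idea (this is not the actual log_viewer code; it ignores the JMS indirection from footnote 1 and just shares an in-memory session set, and the two classes would live in separate files in practice):

import java.io.IOException;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import javax.websocket.OnClose;
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Readers connect here ("ws://localhost:8080/log_viewer/read").
@ServerEndpoint("/read")
public class ReadEndpoint {
    static final Set<Session> READERS = ConcurrentHashMap.newKeySet();

    @OnOpen
    public void onOpen(Session session) { READERS.add(session); }

    @OnClose
    public void onClose(Session session) { READERS.remove(session); }
}

// The Gradle logger sends here ("ws://localhost:8080/log_viewer/logging").
@ServerEndpoint("/logging")
public class LoggingEndpoint {
    @OnMessage
    public void onMessage(String logLine) throws IOException {
        // Rebroadcast each incoming log line to every connected reader.
        for (Session reader : ReadEndpoint.READERS) {
            if (reader.isOpen()) {
                reader.getBasicRemote().sendText(logLine);
            }
        }
    }
}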

Conclusion

Being able to rebroadcast log messages is neat and useful. Additionally having a working example for connecting websockets to JMS was a lot of fun to put together.

Foot notes

1: I would have used CDI events, but CDI and the ServerEndpoint annotation do not get along. There are several JIRAs tracking this issue.
* WEBSOCKET_SPEC-196
* JMS_SPEC-121
* CDI-370
2: Thanks to https://blogs.oracle.com/brunoborges/entry/integrating_websockets_and_jms_with for help with getting this working.

Posted in Blogroll

AeroGear Android 3.0

So it has been a while, but AeroGear Android 3.0 is out. As befits a major release, we have a few breaking changes, lots of bug fixes, and a few new features. The full change list can be viewed on our JIRA page.

Changes

Breaking Changes

New Features

  • aerogear-android-push now uses GCM 3 including GcmListener, InstanceID, and GCM Topics.
  • Android 23 support
  • Material design in cookbooks

Minor Changes

  • JUnit 4 based automated tests
  • Updates to all required libraries (Android SDK, Android Maven Plugin)

How to get it

In Android Studio just declare our dependencies in your build.gradle file. Feel free to mix and match as necessary.

    compile 'org.jboss.aerogear:aerogear-android-core:3.0.0'
    compile 'org.jboss.aerogear:aerogear-android-security:3.0.0'
    compile 'org.jboss.aerogear:aerogear-android-store:3.0.0'
    compile 'org.jboss.aerogear:aerogear-android-pipe:3.0.0'
    compile 'org.jboss.aerogear:aerogear-android-auth:3.0.0'
    compile 'org.jboss.aerogear:aerogear-android-authz:3.0.0'
    compile 'org.jboss.aerogear:aerogear-android-push:3.0.1'

Also, feel free to take our cookbook samples for a spin!

How to get involved

Feel free to join us at #aerogear on IRC, follow us @aerogears on Twitter, or join our aerogear-dev and aerogear-users mailing lists. For more details, please check out our Community Page.

Posted in Blogroll

Eventually Crappy People Show Up, and I’m Flattered

ShopToTrot has been out for a little more than a year now. We've gotten enough users for me to say that there is enough interest to keep going, and that feels really, really killer. Like anything that attracts people online, some of the people that come through can be, well, Crappy. There, I said it. By Crappy I mean that they have no intention of using the system as it was intended. They are there just to cause trouble.


Well, ShopToTrot has received enough attention that we've gotten our first Crappy user, who hung around just to put junk ads into the system. This one did have a sense of humor, though. He or she at least mocked us a bit by putting in a photo of their "horse" (seen above) that mirrored the stock photo we use if you don't put any pictures of your horse in (seen below).


For a moment, I was highly irritated. But that shortly gave way to flattery. Someone cared enough to come into our system and stink it up a bit. My point to all of this is simply, enjoy the Crappy people. They can't help themselves, and truly probably need a hug.

More to the point, they are a signal that you are becoming successful. And... if it weren't for them, you might not harden the system.
Posted in Blogroll

Supporting C in a build system

In a previous post, I showed that the standard build rules for C code are unreliable. Let me describe two ways to do better.

In the interest of brevity, I will describe the build rules using a toy build engine called Blcache (short for "build cache"). I initially tried to write this post using standard tools like Make, Ninja, Bazel, and Nix, but they all have one limitation or another that distracts from the main point of this post.

Example problem

Here is an example problem this post will work against. There is a test.c file, and it includes two separate header files:


// File test.c
#include <stdio.h>
#include "syslimits.h"
#include "messages.h"

int main() {
    printf("%s: %d\n", MSG_LIMIT_THREADS, LIMIT_THREADS);
}

// File localhdrs/messages.h
#define MSG_LIMIT_THREADS "Limit on threads:"

// File hdrs/syslimits.h
#define LIMIT_THREADS 10

After compiling this code, a third header file is added as follows:


// File localhdrs/syslimits.h
#define LIMIT_THREADS 500

The challenge is for the addition of this header file to trigger a rebuild, while still making the build as incremental as possible.

Method 1: Compile entire components

The simplest way to get correct C compiles is to compile entire components at a time, rather than to set up build rules to compile individual C files. I've posted before on this strategy for Java, and it applies equally well to C.

This approach seems to be unusual, but my current feeling is that it should work well in practice. It seems to me that when you are actively working on a given component, you should almost always use an IDE or other specialized tool for compiling that component. The build system should therefore not concern itself with fine-grained incremental rebuilds of individual C files. Rebuilds of whole components--executables, libraries, and shared libraries--should be plenty, and even when using a system like Gyp, there are advantages to having the low-level build graph be simple enough to read through and debug by hand.

Using such an approach, you would set up a single build rule that goes all the way from C and H files to the output. Here it is in JSON syntax, using Blcache:


{
  "environ": [
    "PATH"
  ],
  "rules": [
    {
      "commands": [
        "gcc -Ilocalhdrs -Ihdrs -o test test.c"
      ],
      "inputs": [
        "test.c",
        "localhdrs",
        "hdrs"
      ],
      "name": "c/test",
      "outputs": [
        "test"
      ]
    }
  ]
}

The "environ" part of this build file declares which environment variables are passed through to the underlying commands. In this case, only PATH is passed through.

There is just one build rule, and it's named c/test in this example. The inputs include the one C file (test.c), as well as two entire directories of header files (localhdrs and hdrs). The build command for this rule is very simple: it invokes gcc with all of the supplied input files, and has it build the final executable directly.

With the build rules set up like this, any change to any of the declared inputs will cause a rebuild to happen. For example, here is what happens in an initial build of the tool:


$ blcache c/test
Started building c/test.
Output from building c/test:
gcc -Ilocalhdrs -Ihdrs -o test test.c

$ ./target/c/test
Limit on threads: 10

After adding syslimits.h to the localhdrs directory, the entire component gets rebuilt, because the localhdrs input is considered to have changed:


$ blcache c/test
Started building c/test.
Output from building c/test:
gcc -Ilocalhdrs -Ihdrs -o test test.c

$ ./target/c/test
Limit on threads: 500

As a weakness of this approach, though, any change to any C file or any header file will trigger a rebuild of the entire component.

Method 2: Preprocess as a separate build step

Reasonable people disagree about how fine-grained build rules for C should be, so let me describe the fine-grained version as well. This version can rebuild more incrementally in certain scenarios, but that benefit comes at the expense of a substantially more complicated build graph. Then again, most developers will never look at the build graph directly, so there is some argument for increasing the complexity here to improve overall productivity.

The key idea with the finer-grained dependencies is to include a separate build step for preprocessing. Here's a build file to show how it can be done:


{
  "environ": [
    "PATH"
  ],
  "rules": [
    {
      "commands": [
        "gcc -c -o test.o target/c/preproc/test.i"
      ],
      "inputs": [
        "c/preproc/test:test.i"
      ],
      "name": "c/object/test",
      "outputs": [
        "test.o"
      ]
    },

    {
      "commands": [
        "gcc -Ilocalhdrs -Ihdrs -E -o test.i test.c"
      ],
      "inputs": [
        "test.c",
        "localhdrs",
        "hdrs"
      ],
      "name": "c/preproc/test",
      "outputs": [
        "test.i"
      ]
    },

    {
      "commands": [
        "gcc -o test target/c/object/test.o"
      ],
      "inputs": [
        "c/object/test:test.o"
      ],
      "name": "c/test",
      "outputs": [
        "test"
      ]
    }
  ]
}

This file has three rules in it that chain together to produce the final output file. When you build the test executable for the first time, all three rules will be executed:


$ blcache c/test
Started building c/preproc/test.
Output from building c/preproc/test:
gcc -Ilocalhdrs -Ihdrs -E -o test.i test.c

Started building c/object/test.
Output from building c/object/test:
gcc -c -o test.o target/c/preproc/test.i

Started building c/test.
Output from building c/test:
gcc -o test target/c/object/test.o

$ ./target/c/test
Limit on threads: 10

First, the test.c file is preprocessed, yielding test.i. Second, the test.i file is compiled to test.o. Finally, test.o is linked into the final test executable.

Adding the new syslimits.h file behaves as expected, causing the full chain of recompiles.


$ blcache c/test
Started building c/preproc/test.
Output from building c/preproc/test:
gcc -Ilocalhdrs -Ihdrs -E -o test.i test.c

Started building c/object/test.
Output from building c/object/test:
gcc -c -o test.o target/c/preproc/test.i

Started building c/test.
Output from building c/test:
gcc -o test target/c/object/test.o

$ target/c/test
Limit on threads: 500

Modifying an irrelevant header file, on the other hand, only causes the preprocessing step to run. Since preprocessing yields the same result as before, rebuilding stops at that point.


$ touch localhdrs/irrelevant.h
$ blcache c/test
Started building c/preproc/test.
Output from building c/preproc/test:
gcc -Ilocalhdrs -Ihdrs -E -o test.i test.c

Using cached results for c/object/test.
Using cached results for c/test.

It's not shown in this example, but since each C file is compiled individually, a change to a C file will only trigger a rebuild of that one file. Thus, the technique here is fine-grained in two different ways. First, changes to one C file only trigger a recompile of that one file. Second, changes to the H files only trigger preprocessing of all C files, and then only compilation of those C files that turn out to be affected by the H files that were changed.

By the way, there's a trick here that generalizes to a variety of cached computations. If you want to add a cache for a complicated operation like a C compile, then don't try to have the operation itself be directly incremental. It's too error prone. Instead, add a fast pre-processing step that accumulates all of the relevant inputs, and introduce the caching after that pre-processing step. In the case of this example, the fast pre-processing step is, well, the actual C preprocessor.
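As a minimal illustration of that trick outside of any particular build tool, here is a sketch in Java (the paths follow the example above; everything else is hypothetical): hash the preprocessor's output, and rerun the expensive compile only when that hash changes.

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.util.HexFormat;

public class PreprocCache {
    public static void main(String[] args) throws Exception {
        Path preprocessed = Paths.get("target/c/preproc/test.i");
        Path stamp = Paths.get("target/c/object/test.i.sha256");

        // Key the cache on the *preprocessed* output, which already folds in
        // every header that was found (and, implicitly, every one that wasn't).
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(Files.readAllBytes(preprocessed));
        String hash = HexFormat.of().formatHex(digest);

        if (Files.exists(stamp) && Files.readString(stamp).equals(hash)) {
            System.out.println("Using cached results for c/object/test.");
            return; // identical input implies an identical compile
        }
        new ProcessBuilder("gcc", "-c", "-o", "target/c/object/test.o",
                preprocessed.toString()).inheritIO().start().waitFor();
        Files.writeString(stamp, hash);
    }
}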

Coda

Before realizing the problem with C compilation, I used C as an example of why you might want to bend the rules of the strictest and most simplistic version of a build cache a little bit. Now, however, it seems to me that you find the best set of build rules if you strictly adhere to a build-cache discipline. I'm sorry, build cache. I should never have doubted you.

Posted in Blogroll

Standard build rules for C are unreliable

The standard way of integrating C into a build system is to use automatic dependencies generated from the compiler. GCC and Clang can emit a list of the header files they read if you run them with the -M option. Visual Studio can do it as well, using the /showIncludes option. What I will call the "standard approach" in this post is to use the dependencies the user explicitly declared, and then to augment them with automatic dependencies generated by options like -M or /showIncludes.

Until a few years ago, I just took this approach as received wisdom and didn't think further about it. It's a neat trick, and it works correctly in the most obvious scenarios. Unfortunately, I have learned that the technique is not completely reliable. Let me share the problem, because I figure that other people will be interested as well, especially anyone else who ends up responsible for setting up a build system.

The root problem with the standard approach is that sometimes a C compile depends on the absence of a file. Such a dependency cannot be represented and indeed goes unnoticed in the standard approach to automatic dependencies. The standard approach involves an "automatic dependency list", which is a file listing out the automatically determined dependencies for a given C file. By its nature, a list of files only includes files that exist. If you change the status of a given file from not existing to existing, then the standard approach will overlook the change and skip a rebuild that depends on it.
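To make that concrete, here is roughly what such an automatic dependency list looks like for the example used later in this post (abridged, with most system headers omitted). Notice that it can only name the hdrs/syslimits.h that was found; it says nothing about the absence of localhdrs/syslimits.h:

# test.d, as generated by gcc -M (abridged)
test.o: test.c /usr/include/stdio.h hdrs/syslimits.h localhdrs/messages.h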

To look at it another way, the job of an incremental build system is to skip a compile if running it again would produce the same results. Take a moment to consider what a compiler does as it runs. It does a number of in-memory operations such as AST walks, and it does a number of IO operations, including reading files into memory. Among those IO operations are things like "list a directory" and "check if a file exists". If you want to prove that a compiler is going to do the same thing on a second run as it did on the first, then you want to prove that those IO operations are going to do the same thing on a second run. That means all of the IO operations, though, not just the ones that read a file into memory.

Such a situation may seem exotic. At least one prominent source has declared that the standard approach is "correct" up to changes in the build command, which suggests to me that the author did not consider this scenario at all. It's not just a theoretical problem, though. Let me show a concrete example of how it can arise in practice.

Suppose you are compiling the following collection of files, including a single C file and two H files:


// File test.c
#include <stdio.h>
#include "syslimits.h"
#include "messages.h"

int main() {
    printf("%s: %d\n", MSG_LIMIT_THREADS, LIMIT_THREADS);
}

// File localhdrs/messages.h
#define MSG_LIMIT_THREADS "Limit on threads:"

// File hdrs/syslimits.h
#define LIMIT_THREADS 10

Using automatic dependencies, you set up a Makefile that looks like this:

CFLAGS=-Ilocalhdrs -Ihdrs

test.o test.d : test.c
	gcc $(CFLAGS) -M test.c > test.d
	gcc $(CFLAGS) -c test.c

test: test.o
	gcc -o test test.o

-include test.d

You compile it and everything looks good:


$ make test
gcc -Ilocalhdrs -Ihdrs -M test.c > test.d
gcc -Ilocalhdrs -Ihdrs -c test.c
gcc -o test test.o
$ ./test
Limit on threads: 10

Moreover, if you change any of the input files, including either of the H files, then invoking make test will trigger a rebuild as desired.

$ touch localhdrs/messages.h
$ make test
gcc -Ilocalhdrs -Ihdrs -M test.c > test.d
gcc -Ilocalhdrs -Ihdrs -c test.c
gcc -o test test.o

What doesn't work so well is creating a new version of syslimits.h that shadows the existing one. Suppose you add the following syslimits.h file to localhdrs, shadowing the default one in hdrs:


// File localhdrs/syslimits.h
#define LIMIT_THREADS 500

Make should now recompile the executable, but it doesn't:


$ make test
make: 'test' is up to date.
$ ./test
Limit on threads: 10

If you force a recompile, you can see that the behavior changed, so Make really should have recompiled it:


$ rm test.o
$ make test
gcc -Ilocalhdrs -Ihdrs -M test.c > test.d
gcc -Ilocalhdrs -Ihdrs -c test.c
gcc -o test test.o
$ ./test
Limit on threads: 500

It may seem picky to discuss such a tricky scenario as this one, with header files shadowing other header files. Imagine a developer in the above scenario, though. They are doing something tricky, yes, but it's a tricky thing that is fully supported by the C language. If this test executable is part of a larger build, the developer can be in for a really difficult debugging exercise trying to understand why their built executable is not behaving in a way consistent with the source code. I dare say it is precisely in such tricky situations that people rely the most on their tools behaving in an intuitive way.

I will describe how to set up better build rules for this scenario in a followup post.

Posted in Blogroll

Android N – Security with Self Signed Certificates

If you are a good developer, you are securing your services with SSL encryption. Unless you have put in a lot of effort, though, local testing still uses the good old-fashioned self-signed certificate, and you just click through the warning window of shame.


This is great until you are writing a RESTful service to be consumed by something which isn’t a browser. If you are an Android developer, you have probably come across blog posts (or the official Android docs) encouraging you to make your own TrustManager to accept your certificate or, worse, to disable certificate checking altogether! However, Android N has come to the rescue with new security configuration features.

Using Self Signed Certificates with Android N

To use a self-signed certificate you need to:

  1. Add a meta-data tag to your AndroidManifest.xml which points to a security configuration XML file
  2. Add the security configuration file to your XML resources directory
  3. Download your self-signed certificate into your project

Edit AndroidManifest.xml

I’ve added the following code to the application element of my project’s AndroidManifest.xml:

<meta-data android:name="android.security.net.config"
               android:resource="@xml/network_security_config" />

This code just informs Android that the configuration file is found in res/xml/network_security_config.xml.

Creating the Network Security Config

The full documentation for the network security configuration files covers a lot more than our self-signed certificate use case. It is well worth a read to understand what is being done.

Here is my XML file, which loads my certificate from the raw directory. I have the certificate named server_aerogear_dev, but the file name is irrelevant. What matters is that the common name in the certificate matches the domain name of the server. I am pretty sure this also works with IP addresses, but I haven’t tested it.

<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <base-config>
        <trust-anchors>
            <certificates src="@raw/server_aerogear_dev"/>
        </trust-anchors>
    </base-config>
</network-security-config>
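As an aside (described in the same documentation, though untested here), you can scope the trust anchor to a single domain with a domain-config element instead of base-config, so the self-signed certificate is only trusted for your development server. A sketch, reusing the server name from the download step below:

<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <domain-config>
        <domain includeSubdomains="false">server.aerogear.dev</domain>
        <trust-anchors>
            <certificates src="@raw/server_aerogear_dev"/>
        </trust-anchors>
    </domain-config>
</network-security-config>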

Downloading the certificate

You can download the certificate into the raw directory of your source tree using your web browser or the command line:

cd app/src/main/res/raw
echo -n | openssl s_client -connect server.aerogear.dev:8443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > server_aerogear_dev
# Credit to SO: http://serverfault.com/questions/139728/how-to-download-the-ssl-certificate-from-a-website

Replace the name of the server and the port with configuration appropriate to you.

Final Notes

This is a very simple example of a new feature from Android N, and it may change or go out of date. However, it gives us a simple way to manage security, and it ALSO works within Android’s build flavor system. Take a look, and stay safe.

Posted in Blogroll

My Manager Hat vs. My Worker Hat

I mentioned a while back that one of the podcasts I've been listening to is Simple Programmer, by John Sonmez. He gave some advice to a listener in Episode 120: Do I Have To Be An Entrepreneur To Be Successful, and it's been rolling around in my head since I heard it. Just recently I've started to walk in that piece of advice.

I may get some of the particulars incorrect, but the overall gist will be in the spirit of the exchange.

John told the guy that to make progress on his extra goals, he needed to have the ability to wear two hats. As I remember them, they were:
  1. A Manager Hat
  2. A Worker Hat (I lovingly call this one the Doer or Executor Hat, so if you see me use those alternate forms you’ll know which personality I’m referring to).




The Manager Hat:

This is the hat that you wear when you're deciding what should be accomplished by your Worker. So effectively, what he was saying is that you need to take some time to plan out your week. Be deliberate about setting goals for what would get accomplished during that week so you can keep sight of them. I've chosen Sunday night at 9:30 for this hat to adorn my head.




The Worker Hat:

The other hat he mentioned you needed to pick up is a Worker Hat (or a Doer Hat). This is the hat you put on once the Manager Hat comes off. You wear this hat for the rest of the week. Once this hat comes on you simply execute the plan unquestioningly. You don’t deviate from it, you just simply do what your Manager has planned for your Doer.

Once you get through the week, wash, rinse, and repeat for the following week.



In Practice (as I have experienced it thus far):

As I have tried to operate in this, I’ve discovered that my Manager always has unrealistic expectations of what my Worker can accomplish. Big surprise, right? That’s often the case in the real world, outside of this little dual-personality experiment that I’m doing. So what does that really mean, you may ask? Well, for me it’s meant that what my Manager has scheduled for a particular day might not be what my Worker has been able to accomplish for that day. And there isn’t any time scheduled on the next day for the thing my Worker hasn’t finished.

Sooooo, my Worker had to negotiate with my Manager. The Worker had to tell the Manager, “Hey dude, I’m not done. I haven’t been slacking, it’s just that it’s taking me longer… or life has happened and inserted itself in the middle of your pre-scheduled routine, thus disrupting the schedule." If you’re anything like me, you’re trying to accomplish these things in the midst of working at your day job, raising a family, and potentially working on these things with other people. Any one of those three things tends to get in the middle of your Worker accomplishing what your Manager has laid out. It happens, we’re human, and plans may need to change.

So how have my Manager and Worker adjusted expectations of what will get accomplished? Well, the Worker starts out the week working diligently on those things scheduled by the Manager. And if the Worker isn’t able to complete a day's tasks, then the unfinished task(s) get pushed to the next day by the Worker, and the things that were scheduled for the next day get pushed accordingly through the week. The Manager has placed a bucket at the end of the week to catch anything that gets pushed completely out of the week, and things are pulled from that bucket when scheduling the next week.

This way, the Worker has some sense of continuity and doesn’t reach the end of the week disgruntled at having completed nothing because it was spent thrashing about trying to accomplish everything (when that might not be possible). It’s important, for progress and for the Worker’s feelings and sense of accomplishment, to complete things and get them behind him. If the schedule needs to be adjusted then so be it; the Manager needs to compromise.

This has made the things that I've been trying to accomplish more concrete. And by "more concrete" I mean that there are very real time slots for work to slip into, and expectations on myself that those things scheduled (and only those things scheduled) should be completed.

Life is going to happen. Your plans will get disrupted. You need to be able to adjust, and in the adjustment not lose focus on what your goals are.




In closing, figuring out the balance between your Manager Hat and your Worker Hat can be detective work for a while. But in the end you can find some happy medium, and hopefully some cadence for how much you’ll be able to accomplish in your upcoming week.
Posted in Blogroll

Two little things I wish Java would add

When geeking out about language design, it's tempting to focus on the things that require learning something new to even understand how it works. SAM types require understanding target typing, and type members require understanding path-dependent types. Fun stuff.

Aside from these things that are fun to talk about over beers, I really wish Java would pick up a few things from Scala that are just plain more convenient.

Multi-line string literals

A great way to structure a unit test is to feed in a chunk of text, run some processing that you want to verify, convert the actual output to text, and then compare it against another chunk of text that's included in the test case. Compared to a dense string of assertEquals calls, this testing pattern tends to be much easier to read and understand at a glance. When such a test fails, you can read a text diff at a glance and possibly see multiple different kinds of failure that happened with the test, rather than stare into the thicket of assertEquals calls and try to deduce what is being tested by the particular one that failed.

The biggest weakness of this style is very mundane: it's hard to encode a multi-line chunk of text in Java. You have to choose between putting the text in an external file, or suffering through strings that have a lot of "\n" escapes in them. Both choices have problems, although the latter option could be mitigated with a little bit of IDE support.
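To illustrate the escaped-string option (the method and variable names here are invented for the example), the expected output ends up hand-encoded line by line:

    String expected =
            "digraph g {\n" +
            "  a -> b;\n" +
            "  b -> c;\n" +
            "}\n";
    assertEquals(expected, renderDotFile(graph));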

In Scala, Python, and many other languages, you can write a multi-line string by opening it with triple quotes (""") rather than a single quote mark ("). It's a trivial feature that adds a lot to the day to day convenience of using the language.

As one trick to be aware of, it's important to help people out with indentation when using triple quotes. In Scala, I lobbied for the stripMargin approach to dealing with indentation, where you put a pipe on each continuation line, and anything up to the pipe is considered leading indentation and removed. In retrospect, I wish I had pushed for that to simply be the default behavior. If you need to insert a literal continuation character, you can always write it twice. Making people write stripMargin on almost every multi-line string is a form of boilerplate.

Case classes

There are philosophers who disagree, but I find them a little too philosophical for my taste. Sometimes you really want to write a class that has no hidden internal state. Sometimes it would be a breach of the API to retain any internal state, or to implement the public API as anything other than plain old final fields. Some motivating examples are: tiny types, data structure nodes such as links in a linked list, and data-transfer objects.
In such a case, it takes a tremendous amount of code in Java to implement all the odds and ends you would really like for such a class. You would really like all of the following, and they are all completely mechanical:
  • Constructors that copy their parameters to a series of final fields.
  • A toString() implementation.
  • Comparison operations: equals(), hashCode(), and compareTo(). Ideally also helpers such as isLessThan().
  • Copy constructors that make a new version by replacing just one of the fields with a new value.
The equals() method is particularly painful in Java because there is a lot of inconsistent advice going around about how to write one. I've been dragged into multi-day debates on equals() methods where people cite things I published in the past to try and use against me; I'm pretty sure I meant what I said then and mean what I say now. Above all, though, I'd rather just have a reasonable equals() method and not spend time talking about it.
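For a sense of scale, here is a sketch of that mechanical code for a hypothetical two-field tiny type. In Scala, the equivalent is the one-liner case class Point(x: Int, y: Int).

    public final class Point implements Comparable<Point> {
        private final int x;
        private final int y;

        public Point(int x, int y) { this.x = x; this.y = y; }

        public int getX() { return x; }
        public int getY() { return y; }

        // Copy helpers that make a new value by replacing one field at a time.
        public Point withX(int newX) { return new Point(newX, y); }
        public Point withY(int newY) { return new Point(x, newY); }

        @Override public String toString() { return "Point(" + x + ", " + y + ")"; }

        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return x == p.x && y == p.y;
        }

        @Override public int hashCode() { return 31 * x + y; }

        @Override public int compareTo(Point p) {
            int c = Integer.compare(x, p.x);
            return c != 0 ? c : Integer.compare(y, p.y);
        }
    }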
Posted in Blogroll

Spring Boot with JSPs using Undertow

This is a follow-up to my previous post Spring Boot with JSPs in Executable Jars.

Undertow is another alternative for using an embedded container with Spring Boot. You can find general information in the Spring Boot reference guide chapter Use Undertow instead of Tomcat. While I was working on updating the Spring Boot documentation regarding the JSP support for Tomcat, I noticed the following line in the reference guide for Spring Boot 1.3.3:

"Undertow does not support JSPs."

Being a good citizen, I dug a little deeper and discovered the Undertow JSP sample application by Chris Grieger. It turns out that Undertow does indeed have JSP support, using jastow, which is a Jasper fork for Undertow. The key was to adapt the Undertow JSP sample application for Spring Boot, which was actually fairly straightforward. The actual Undertow configuration uses Spring Boot’s EmbeddedServletContainerCustomizer:


final UndertowDeploymentInfoCustomizer customizer = new UndertowDeploymentInfoCustomizer() {

    @Override
    public void customize(DeploymentInfo deploymentInfo) {
        deploymentInfo.setClassLoader(JspDemoApplication.class.getClassLoader())
                .setContextPath("/")
                .setDeploymentName("servletContext.war")
                .setResourceManager(new DefaultResourceLoader(JspDemoApplication.class))
                .addServlet(JspServletBuilder.createServlet("Default Jsp Servlet", "*.jsp"));

        final HashMap<String, TagLibraryInfo> tagLibraryInfo = TldLocator.createTldInfos();

        JspServletBuilder.setupDeployment(deploymentInfo, new HashMap<String, JspPropertyGroup>(),
                tagLibraryInfo, new HackInstanceManager());
    }
};
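For context, a deployment-info customizer like this one still has to be registered with the Undertow servlet container factory. In Spring Boot 1.3.x that looks roughly like the following sketch (the bean wiring here is my assumption; see JspDemoApplication for the real thing):

@Bean
public UndertowEmbeddedServletContainerFactory embeddedServletContainerFactory() {
    // Register the DeploymentInfo customizer defined above with Undertow.
    final UndertowEmbeddedServletContainerFactory factory = new UndertowEmbeddedServletContainerFactory();
    factory.addDeploymentInfoCustomizers(customizer);
    return factory;
}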

The full source is available in the JspDemoApplication class. The main issue is more or less the retrieval and configuration of the tag libraries being used. The Undertow JSP sample provides the TldLocator class, which does the heavy lifting. For our example, I am adapting that class so that it works in the context of Spring Boot. In Spring Boot we are dealing with über-jars, meaning the resulting executable jar file will contain other jar files representing its dependencies.

Spring provides some nifty helpers to retrieve the needed Tag Library Descriptor (TLD) files. In TldLocator#createTldInfos I use a ResourcePatternResolver, specifically a PathMatchingResourcePatternResolver, with a location pattern of classpath*:**/*.tld.


final URLClassLoader loader = (URLClassLoader) Thread.currentThread().getContextClassLoader();

final ResourcePatternResolver resolver = new PathMatchingResourcePatternResolver(loader);
final Resource[] resources;
final String locationPattern = "classpath*:**/*.tld";

try {
    resources = resolver.getResources(locationPattern);
}
catch (IOException e) {
    throw new IllegalStateException(String.format(
            "Error while retrieving resources for location pattern '%s'.", locationPattern), e);
}


Important

Don’t forget the asterisk right after classpath. The classpath*: prefix allows you to retrieve multiple class path resources with the same name. It will also retrieve resources across multiple jar files. This is an extremely useful feature. For more information, please see the relevant JavaDocs for PathMatchingResourcePatternResolver.

Once we have the TLD resources, they will be parsed and ultimately used to create a collection of org.apache.jasper.deploy.TagLibraryInfo. With those at hand, we create a JSP deployment for Undertow using the DeploymentInfo and the TagLibraryInfo collection.


final HashMap<String, TagLibraryInfo> tagLibraryInfo = TldLocator.createTldInfos();
JspServletBuilder.setupDeployment(deploymentInfo, new HashMap<String, JspPropertyGroup>(), tagLibraryInfo, new HackInstanceManager());

And that’s it. Simply build and run, and you should have a working JSP-based application.


$ mvn clean package
$ java -jar jsp-demo-undertow/target/jsp-demo-undertow-1.0.0-BUILD-SNAPSHOT.jar

In your console you should see the application starting up.



Once started, open your browser and go to the following URL: http://localhost:8080/.


You can find the full source code for this sample at https://github.com/ghillert/spring-boot-jsp-demo.
Posted in Blogroll