Security with Keycloak and Google Services on Android

The Keycloak project is a phenomenal resource for authentication and authorization services for almost any application. The default use case involves using OAuth redirects to send a client to a web page hosted by Keycloak or a trusted Identity Provider, perform a login, and then exchange tokens on the client application. This prevents potentially untrusted clients from stealing logins from users while also allowing trustworthy applications to log in using a third party. This is how websites are able to use social logins from Google, Facebook, GitHub, etc. However, on Android the operating system provides many ways to manage logins locally via an account picker. Google goes one step further and can provide tokens via its client SDK.

Because Google will provide an application with a token directly, we can skip the website redirect on Android with Keycloak by using “External to Internal Token Exchange”. The Keycloak documentation will walk you through setting up the IdP, but you have to make sure that you configure the Google IdP as an “OpenID Connect v1.0” provider and not as a “Google” provider. Fortunately you can use Google’s well-known OpenID configuration to prepopulate most of the fields. One thing I had to change was flipping “Disable User Info” to “ON”. In order to fetch the user info, Google needs a bearer token that Keycloak does not provide.
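To make the exchange concrete, here is a minimal sketch of building the form body an Android client would POST to Keycloak's token endpoint (`/realms/<realm>/protocol/openid-connect/token`). The parameter names follow Keycloak's documented token-exchange API; the client id, issuer alias, and token values are hypothetical placeholders.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class TokenExchange {
    // Builds the x-www-form-urlencoded body for Keycloak's
    // external-to-internal token exchange.
    static String exchangeBody(String clientId, String googleToken) {
        Map<String, String> form = new LinkedHashMap<>();
        form.put("client_id", clientId);
        form.put("grant_type", "urn:ietf:params:oauth:grant-type:token-exchange");
        form.put("subject_token", googleToken);   // the token Google's SDK handed us
        form.put("subject_issuer", "google");     // the IdP alias configured in Keycloak
        form.put("subject_token_type", "urn:ietf:params:oauth:token-type:access_token");

        StringBuilder body = new StringBuilder();
        for (Map.Entry<String, String> e : form.entrySet()) {
            if (body.length() > 0) body.append('&');
            body.append(e.getKey()).append('=')
                .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return body.toString();
    }

    public static void main(String[] args) {
        // POST this body to https://<keycloak-host>/realms/<realm>/protocol/openid-connect/token
        System.out.println(exchangeBody("my-android-client", "<google-token>"));
    }
}
```

The response is a normal Keycloak token response, so the rest of your client code does not need to know the token originally came from Google.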

I’ve made a simple Android application which exchanges a Google token for a Keycloak token here. The source code is on my GitHub and a demo is on my YouTube channel.

Posted in Blogroll

AeroGear Unified Push and OpenShift Origin


For people who are making mobile applications, managing the various server components and technologies is a challenge.
This post will demonstrate how to use OpenShift Origin to host an instance of the AeroGear UnifiedPush Server and then start development on an Android application on your development machine.
AeroGear UnifiedPush Server is a project that provides a single service to manage multiple push networks for your mobile applications. OpenShift Origin is a container management platform built on Kubernetes and Docker technologies. Both are Open Source and sponsored by Red Hat.

Getting and Starting OpenShift Origin


Before you can use Origin you need to have Docker installed on your computer as well as configure it to use OpenShift’s internal Docker registry. You can do this by passing the --insecure-registry parameter to Docker when the service starts. System specific details can be found here.
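On systems that configure Docker through daemon.json, the same setting can be expressed as configuration. The CIDR below is OpenShift Origin's default service network; adjust it if your cluster uses a different range.

```json
{
  "insecure-registries": [""]
}
```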

You will also need the tool oc. It can be found on the Origin’s GitHub releases page.

Launching OpenShift Origin

oc cluster up

Yup, that is all you need. Now you can browse to the web console and log in as developer:developer.

Deploying AeroGear UPS

We will use the Unified Push Server image found on Docker Hub. Reviewing that page we see that it needs two MySQL instances named “unifiedpush” and “keycloak”. We can create those in Origin, deploy UnifiedPush, and then add a route for our Android emulator.

Deploying MySQL containers

oc new-app mysql MYSQL_USER=unifiedpush MYSQL_PASSWORD=unifiedpush MYSQL_DATABASE=keycloak --name=keycloak

oc new-app mysql MYSQL_USER=unifiedpush MYSQL_PASSWORD=unifiedpush MYSQL_DATABASE=unifiedpush --name=unifiedpush

You can watch both systems come online in your console, or view their status with oc status.

Deploying AeroGear UnifiedPush

oc new-app aerogear/unifiedpush-wildfly \
 UNIFIEDPUSH_PORT_3306_TCP_ADDR=unifiedpush \
 KEYCLOAK_PORT_3306_TCP_ADDR=keycloak
This command will deploy UnifiedPush and connect it to your MySQL containers. If you work with Docker then the environment variables we use should be familiar; that is because this is a plain Docker container.

Adding Routes to the UnifiedPush application

By default OpenShift applications are not routable. This means that we can not point a web browser to UnifiedPush to configure it nor can an Android device connect to it to receive push messages. We will add two routes to enable both of these behaviors.

oc expose service unifiedpush-wildfly --port=8080-tcp --name=ups-local-unsecured
oc expose service unifiedpush-wildfly --port=8080-tcp --name=ups-android-unsecured

The first route exposes UPS on the local machine’s IP address, and the second exposes it on, which is the hardcoded IP address an Android emulator uses to reach its host. These hostnames use a wildcard DNS service such as to fake the DNS lookup.

Fixing Keycloak Linking

UPS uses an embedded Keycloak server to handle its authentication. Right now there is an issue where, when the UPS pod starts up, it can’t route to Keycloak using the public route. To work around this you will need to use the OpenShift web console to navigate to the unifiedpush-wildfly pod and execute “wget http://localhost:8080/ag-push/index.html” in the terminal after the application loads.

Configuring UnifiedPush

You should now be able to navigate to the first route’s URL and set up UnifiedPush. The full instructions for configuring it can be found in the UnifiedPush documentation.

Configuring your Android Push Application

You can follow the HelloPush tutorial here. The only change is that you will use the Android route’s URL (the one exposed on as your push URL.

Posted in Blogroll

There Is No Such Thing As A Short Term Hack

One day during a review of some changes, a new user interaction was introduced that basically used a toggle in place of a checkbox. This raised the desire to have one user control for this behavior across other parts of the system. It was said that changing one spot to use a toggle, with other spots still using the old antiquated checkbox, would confuse users. We should instead take the time to understand how far the change would need to extend.

Fast forward a couple of months: one early morning I suggested to a colleague that we create an email group for our team. He said "yeah, but then you have to keep up with it when people come and go, and it becomes a big pain and gets out of date and kind of a hassle," to which I replied something to the effect of "welcome to my world and what it's like to maintain software".

A parallel to both of these is when we collectively as a team say, "Yeah, let's go with the hack for now so we can release...and then later we'll come back and clean it up." What's the commonality? Where is the parallel? All three are a point in time where a decision has to be made about maintenance in one form or another.

What does a hack look like? It comes in many forms. Oddly placed "if(...)" checks sprinkled around. Ever-growing indentation levels in a method, indicating harder and harder to understand logic down in those innermost levels. Duplicated, identical switch statements hiding classes that are waiting to be born. Code smells sprinkled here and there, and... and... and everywhere.

This business decision of a "short term hack" leaves a bunch of garbage lying around in the code. As a team, we may have great intentions of "coming back to clean it up". But the reality is, that rarely happens... and when it does, it is the exceptional case. Look, I get it. There is often no perceived value in coming back to take care of it. But I, as a user of that code, have to deal with it. And so do the poor schlubs I work with who bump into that hack and are left scratching their heads.

It's confusing on a scale several orders of magnitude larger than someone having to grok a checkbox vs. a toggle, or maintain an email list. Over time, more and more of these "compromises" cause the code to become more and more brittle. It becomes harder and harder to change. Harder and harder to understand. To reason about. The compromise is on the side of the code. The health of the artifact that should be an enabler of speed... of our ability to pivot late in the release cycle... becomes more and more unhealthy, a drag on our speed rather than a boost to it.

When things begin to take longer and longer to deliver, do we ever look at one another as a team and say "well, this is because we have made some "compromising" decisions along the way that have affected the code adversely, and now we're feeling it"? I think that's often not the case. Sometimes it is, don't get me wrong. Sometimes we do recall those compromises. And sometimes we take the time to inject some health back into our code artifact.

But more often it is the case that we have lost all context from those far-off historical decisions, and now we just feel the pain. Whether it be from too much time passing, or because the people that came to the initial "compromise for a short term hack" are no longer around working on the code / on the project / with the company. Or maybe they are still around, but they are not the ones staring that compromise in the face at the moment; you're the poor schlub I mentioned above. And there is no way for you to recapture that intimate moment of the past where, yes, we were able to deliver back then... but now... now you have a story on your Kanban board that's taking forever to make it out of the "Developing" column. So the "compromise" has come back and is staring the larger company straight in the face; specifically you... your product owner... your team. And there you stand as a developer with cracked egg all over your face. Given enough complaining room and frustration, you might just start to sound like a crotchety windbag full of hot air about how difficult the code base is. Product owners don't care to hear about the difficulties of the code... they would much rather hear that your story is complete and you're ready to take on more work.

Being honest about what it takes to keep a code base healthy is not an easy thing. It's so necessary though. Be a professional. Do the right thing by the code, and keep lines of communication open between you and your team about what you're doing and how it's coming. No, you can't fix the whole system in the course of one story. But yes, you can chew on a mouthful of it. Always making small improvements, and biting off some larger ones along the way, can breathe new life back into the code asset as well as your team.

Fight like hell to keep them out. Don't offer them up as an option. But (and I do mean but), if you do have to add in a "short term hack" in order to deliver (let's all be honest here... it happens... delivery has to happen, it's what keeps the lights on), insist at the very least that a story is created to come back and address the hack. Leave some artifact as a conversation piece to come back to. And make sure that story has plenty of detail around it, capturing as much of the moment as possible to get you back into the context of when the "compromise for the hack" was made. And then fight like hell to play that story as soon as possible.

Short term hacks become long term hacks that harm code agility. Even more, they become longer term problems as you design more and more of your system around them... making additional compromises... predicated on the far off and often forgotten past compromises that the company as a whole made. Keep the code agile, and your company will be too.
Posted in Blogroll

Integrating Stack Exchange and JIRA


We are working to get the Red Hat Mobile Application Platform (RHMAP for brevity) open-sourced as FeedHenry and have discussed the usual array of community support options: mailing lists, IRC, GitHub, JIRA, etc. However, the community is already using Stack Overflow to ask questions and get help from one another.

This leads to a question though: how do we integrate Stack Overflow monitoring with our current workflows? Enter so-monitor, a simple Node.js application to watch Stack Overflow for questions and track their status in JIRA. The project creates a ticket when a question is asked using a tag we monitor and then closes the JIRA ticket when the question is marked as answered.

Setup of Development Environment

I decided to write the monitor as a Node.js application, and that I would use Visual Studio Code as my IDE. Being a Java developer I am used to many tooling features: intellisense, robust debugging, etc. Also being a Java developer I had heard that most of these things were just not as mature in the Node.js ecosystem. While I feel this is still true, VS Code and DefinitelyTyped have closed the gap significantly. If you are interested in learning more about this setup I suggest following MS’s blog post. It was very helpful to me.

Consuming the Stack Exchange API

The Stack Exchange API is pretty standard as far as RESTful APIs go. There is a public API with a limited quota that expands once you get an API key. The biggest difficulty I had was with their filters concept. Basically, on Stack Exchange sites you can create a filter for your API calls that lives on their servers and then reference it with a stable, public ID.
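As an illustration, a question query that references a saved filter looks something like the sketch below. The filter id and API key are placeholders, not real values; a real filter id comes back from the /filters/create endpoint.

```java
public class SoQuery {
    // Builds a Stack Exchange /2.2/questions call that references a
    // server-side filter by its stable, public id.
    static String questionsUrl(String tag, String filterId, String apiKey) {
        return ""
                + "?site=stackoverflow"
                + "&tagged=" + tag
                + "&filter=" + filterId // created once via /filters/create, then reused
                + "&key=" + apiKey;    // the key raises the request quota
    }

    public static void main(String[] args) {
        System.out.println(questionsUrl("feedhenry", "FILTER_ID", "API_KEY"));
    }
}
```

Because the filter lives on Stack Exchange's servers, the client only ever ships this short, stable id rather than re-describing which response fields it wants on every call.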

I used Moveo’s Stack Exchange API wrapper to access the API and it worked rather well.

JIRA’s black magic

For as simple as Stack Overflow was, JIRA was not. However, JIRA is 100% scriptable and you can configure it to respond to nearly any requirement you may have. Putting a question into JIRA was basically four steps:

  • Create the ticket
  • Format the question as an issue
  • Tag my team
  • Close the issue when the question is answered

To wrap the JIRA REST API I used steve’s node jira library.

The trickiest part of the process was keeping the issue and the question states in sync. I used a custom field to track the question id returned by the Stack Exchange APIs, but managing the state hit an unexpected snag.

JIRA uses a transitions metaphor for changing the state of a ticket. This means there isn’t a “close issue” API call; instead you have to upload a “transition” object to the issue endpoint that defines which transition you wish the issue to take. This means that you have to either a) check the state of the JIRA issue, look up a transition, and execute that one, or b) create an “Any State” -> “Close” transition in JIRA and hard code that. I chose “b”. For more information I encourage you to read JIRA’s API docs. They are really good, and it is a very different pattern than Stack Exchange uses.
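In practice that means "closing" an issue is just a POST of a tiny transition document to /rest/api/2/issue/{issueKey}/transitions. The sketch below builds that payload; the transition id is hypothetical and would be whatever id JIRA assigned to your hard coded "Any State" -> "Close" transition.

```java
public class JiraTransition {
    // JIRA changes issue state via POST /rest/api/2/issue/{issueKey}/transitions.
    // The body names the transition to take; there is no direct "close issue" call.
    static String closePayload(String transitionId) {
        return "{\"transition\":{\"id\":\"" + transitionId + "\"}}";
    }

    public static void main(String[] args) {
        // "711" is a made-up id; use the one your JIRA instance assigned.
        System.out.println(closePayload("711")); // prints {"transition":{"id":"711"}}
    }
}
```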


While the Stack Exchange and JIRA APIs were the main meat of the monitor project, there were many small “learnings” above and beyond that. VS Code and typings are a wonderful JavaScript experience. Using Q promises and wrapping node calls in promises with simple one-liners made my main code much easier to follow. Node.js and the JavaScript ecosystem have come phenomenally far in four years, and it is finally as productive as more “Enterprisey” technologies like .NET and Java.

Posted in Blogroll

JRebel and WildFly Swarm in NetBeans

I’m not a great company man; I use NetBeans instead of Red Hat’s JBDS. However, I am at least a GOOD company man because I am using WildFly Swarm instead of Spring Boot. I also use JRebel because I don’t like wasting my time waiting on my projects to build. The combination of the three, however, gave me quite a headache today.

First, NetBeans is the only IDE that gets Maven right. A Maven project is almost always a valid NetBeans project without any extra configuration. On top of this, NetBeans makes it very easy to configure custom Maven goals and map them to your standard IDE actions. Second, WildFly Swarm is an awesome project that is basically Spring Boot but with real, rightsized Java EE. It also has awesome Maven support, and I can’t say enough good things about it. Finally, JRebel should need no introduction. It hot deploys Java code to just about anything and makes the write -> run -> debug cycle as fast as it can be.

My problems began when I imported my Swarm project into NetBeans. NetBeans recognized that the project was a war file and offered to deploy it to my installed WildFly server instead of running the Swarm plugin’s run goal. I created a custom NetBeans action to run the Swarm goal. This worked perfectly, and all that was missing was JRebel.

JRebel did not want to play nice with the Swarm run goal. I’m not sure why, but eventually I decided to try running the project using the Maven exec goal and passing in the Swarm Main class. This worked, but NetBeans wasn’t loading JRebel right out of the box. Finally, I copied the exec arguments from a working jar project into my custom run goal and JRebel successfully started (with errors that don’t seem to matter). Hot deploy worked!

If you are wondering, here is my configuration:

Execute Goals: process-classes org.codehaus.mojo:exec-maven-plugin:1.2.1:exec
Set Properties: exec.args=-Dexec.args=-Drebel.env.ide.plugin.version=6.5.1 -Drebel.env.ide.version=8.2 -Drebel.env.ide.product=netbeans -Drebel.env.ide=netbeans -Drebel.base=/home/summers/.jrebel -Drebel.notification.url=http://localhost:17434 -agentpath:/home/summers/netbeans-8.2/java2/griffin/lib/ -classpath %classpath org.wildfly.swarm.Swarm

So with that all working, my Project Properties looks like this:


Posted in Blogroll

Implementing Chip-8

It has been a hobby of mine to create a video game console, but we live in a world where anyone can buy a $35 Raspberry Pi and install every 8, 16, and 32 bit game on it. So I have focused more on how consoles and software work and are made than on actually making one. As part of this I have implemented an emulator of sorts: “Chip-8”.

Chip-8 was not a game console; instead, it is a byte-code interpreter that originally ran on 8-bit computers in the 1970s. It is, however, a good beginner’s project. My implementation took about two weeks to program and test. I have had a lot of success running ROMs I have found online, and I am documenting it to share as a learning project for people who want to learn more about emulation or low level programming, or who just like programming.
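For a taste of what an implementation involves, here is a minimal sketch of the fetch-decode loop at the heart of any Chip-8 interpreter. Only two of the roughly 35 opcodes are shown; the class and field names are my own, not from any particular implementation.

```java
public class Chip8 {
    final int[] memory = new int[4096]; // 4 KB of RAM; programs load at 0x200
    final int[] v = new int[16];        // registers V0..VF
    int pc = 0x200;                     // program counter

    // Fetch one 16-bit big-endian opcode and decode it on its high nibble.
    void step() {
        int opcode = (memory[pc] << 8) | memory[pc + 1];
        pc += 2;
        switch (opcode & 0xF000) {
            case 0x1000: // 1NNN: jump to address NNN
                pc = opcode & 0x0FFF;
                break;
            case 0x6000: // 6XNN: set register VX to NN
                v[(opcode & 0x0F00) >> 8] = opcode & 0x00FF;
                break;
            default:
                // the remaining opcodes (draw, input, timers, ...) go here
                break;
        }
    }

    public static void main(String[] args) {
        Chip8 chip = new Chip8();
        chip.memory[0x200] = 0x6A; // opcode 6A05: VA = 0x05
        chip.memory[0x201] = 0x05;
        chip.step();
        System.out.println("VA = " + chip.v[0xA]); // prints "VA = 5"
    }
}
```

Everything else in a Chip-8 interpreter (the 64x32 display, the hex keypad, the delay and sound timers) hangs off this loop.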

Now, enjoy my demo:

Posted in Blogroll

Undertow Websocket Client in Java SE

Undertow is a web server written in Java by JBoss. It uses a very modular architecture which allows developers to pick and choose the features they need, so it fits anywhere from a Java EE web server (WildFly) to being embedded in a Java SE application (the topic of this blog). Actually, this post isn’t about a web server AT ALL! This post is about using Undertow’s web socket library in a JavaFX application to allow a server to send real time messages.

I’ve been working on a side project to make managing my GitHub notifications easier. The general idea is that a server will record my notifications, apply basic filtering to them, and then show me notifications ranked “Low”, “Medium”, and “High” priority. The UI of this application is provided by JavaFX and runs standalone on the user’s system. A brief demo can be found on YouTube.

I originally tried the WebSocket implementation by TooTallNate, but it wouldn’t work correctly in my situation. I settled on the Undertow implementation mostly because I work for Red Hat and can harass the developers on internal channels; however, this wasn’t needed.

The Undertow page mostly deals with server to server communication, and the only WebSocket client example I could find that spoke to my situation was in a single unit test. However, I was able to cargo cult most of a working project, though I need to learn more about Xnio.


The first thing I had to do was configure an XnioWorker. This static block is mostly cargo culted, so I would refer to the official Xnio site before putting this into production.

    private static XnioWorker worker;

    static {
        try {
            worker = Xnio.getInstance().createWorker(OptionMap.builder()
                    .set(Options.WORKER_IO_THREADS, 2)
                    .set(Options.CONNECTION_HIGH_WATER, 1000000)
                    .set(Options.CONNECTION_LOW_WATER, 1000000)
                    .set(Options.WORKER_TASK_CORE_THREADS, 30)
                    .set(Options.WORKER_TASK_MAX_THREADS, 30)
                    .set(Options.TCP_NODELAY, true)
                    .set(Options.CORK, true)
                    .getMap());
        } catch (IOException | IllegalArgumentException ex) {
            Logger.getLogger(WebsocketProvider.class.getName()).log(Level.SEVERE, null, ex);
            throw new RuntimeException(ex);
        }
    }
Next, I created a static method to create a WebSocketClient instance and connect to my server. Because I am using self signed certificates as part of testing, I implemented a method to create an appropriate socket.

    public static WebSocketChannel getWebsocketClient(URI serverURI, String bearerToken, EventBus bus) {
        try {

            WebSocketClient.ConnectionBuilder builder = WebSocketClient.connectionBuilder(worker, new DefaultByteBufferPool(false, 2048), serverURI);
            builder.setClientNegotiation(new WebSocketClientNegotiation(null, null) {
                @Override
                public void beforeRequest(Map<String, List<String>> headers) {
                    headers.put("Authorization", Lists.newArrayList("bearer " + bearerToken));
                }
            });

            if ("wss".equals(serverURI.getScheme())) {
                // Configure an XnioSsl instance that trusts the self signed test certificate (omitted).
            }

            WebSocketChannel channel = builder.connect().get();

            channel.getReceiveSetter().set(new AbstractReceiveListener() {
                @Override
                protected void onFullTextMessage(WebSocketChannel channel, BufferedTextMessage message) throws IOException {
                    bus.post(message.getData());
                }

                @Override
                protected void onText(WebSocketChannel webSocketChannel, StreamSourceFrameChannel messageChannel) throws IOException {
                    super.onText(webSocketChannel, messageChannel);
                }

                @Override
                protected void onError(WebSocketChannel channel, Throwable error) {
                    super.onError(channel, error);
                    Logger.getLogger(WebsocketProvider.class.getName()).log(Level.SEVERE, error.getMessage(), error);
                }
            });

            channel.resumeReceives();

            return channel;
        } catch (IOException | CancellationException ex) {
            Logger.getLogger(WebsocketProvider.class.getName()).log(Level.SEVERE, null, ex);
            throw new RuntimeException(ex);
        }
    }

There are three pieces of code to pay special attention to: channel.resumeReceives(), the WebSocketClientNegotiation implementation, and the AbstractReceiveListener implementation. The first is necessary to receive messages from the server (I don’t know why; can someone from Undertow shed some light?). The second adds a Bearer token authorization header so the server can authenticate the user with Keycloak. The last is the actual handler for messages from the server. Currently it posts the message to an EventBus that various components are subscribed to.

There we have it! A very, VERY simple websocket client for my JavaFX application. If you want to play with it you can find the server and client source code on my GitHub.

Posted in Blogroll

Reporting Gradle Builds using WebSockets

If you have a build server you might want to receive reporting from your builds. Many build bots offer this kind of reporting, but I decided to implement it myself in a standard Gradle build script. I override the default logger and replace it with one that writes all of the logging to a web socket. I have also created a very simple Java EE service which can consume logging messages and rebroadcast them to a different web socket using JMS¹.

Gradle Configuration

buildscript {
    dependencies {
        /* I'm using the Tyrus libraries for my web socket client.
           Because logging is part of the build and not the project,
           they must be declared classpath and in buildscript.dependencies. */
        classpath 'org.glassfish.tyrus:tyrus-client:1.+'
        classpath 'org.glassfish.tyrus:tyrus-server:1.+'
        classpath 'org.glassfish.tyrus:tyrus-container-grizzly:1.+'
    }
}

// Now we begin the setup for the WebSocket Logger
import org.glassfish.tyrus.client.*;

gradle.useLogger(new WebSocketLogger());

class WebSocketLogger implements org.gradle.api.logging.StandardOutputListener {

    def manager = ClientManager.createClient();
    def session = manager.connectToServer(WebSocketLoggerClientEndpoint.class, new URI("ws://localhost:8080/log_viewer/logging"));

    // useLogger replaces the default logging. I am writing to a tmp file for debugging purposes.
    def tmp = new File('/tmp/log.txt');

    void onOutput(CharSequence charSequence) {
        tmp.append(charSequence + "\n");
        session.basicRemote.sendText(charSequence.toString());
    }

    @javax.websocket.ClientEndpoint
    static class WebSocketLoggerClientEndpoint {

        def tmp = new File('/tmp/log.txt');

        @javax.websocket.OnMessage
        public void processMessageFromServer(String message, javax.websocket.Session session) {
            tmp.append(message + "\n");
        }

        @javax.websocket.OnError
        public void handleError(javax.websocket.Session session, Throwable thr) {
            tmp.append('Err ' + thr.message + "\n");
        }
    }
}

allprojects {
    repositories {
    }
}

Java EE Server

The server side was more complex because of how CDI and WebSockets interact in Java EE 7. The code is really simple and benefits much more from browsing on GitHub than from snippets here. You may view the server source here².

All this code does is take messages sent to the socket found at “ws://localhost:8080/log_viewer/logging” and rebroadcast them to “ws://localhost:8080/log_viewer/read”.


Being able to rebroadcast log messages is neat and useful. Additionally, having a working example of connecting websockets to JMS was a lot of fun to put together.

Foot notes

1: I would have used CDI events, but CDI and the ServerEndpoint annotation do not get along. There are several JIRAs tracking this issue.
* JMS_SPEC-121
* CDI-370
2: Thanks to for help with getting this working.

Posted in Blogroll

AeroGear Android 3.0

So it has been a while, but AeroGear Android 3.0 is out. As fitting a major release, we have a few breaking changes, lots of bug fixes, and a few new features. The full change list can be viewed on our JIRA page.


Breaking Changes

New Features

  • aerogear-android-push now uses GCM 3 including GcmListener, InstanceID, and GCM Topics.
  • Android 23 support
  • Material design in cookbooks

Minor Changes

  • JUnit 4 based automated tests
  • Updates to all required libraries (Android SDK, Android Maven Plugin)

How to get it

In Android Studio just declare our dependencies in your build.gradle file. Feel free to mix and match as necessary.

    compile 'org.jboss.aerogear:aerogear-android-core:3.0.0'
    compile 'org.jboss.aerogear:aerogear-android-security:3.0.0'
    compile 'org.jboss.aerogear:aerogear-android-store:3.0.0'
    compile 'org.jboss.aerogear:aerogear-android-pipe:3.0.0'
    compile 'org.jboss.aerogear:aerogear-android-auth:3.0.0'
    compile 'org.jboss.aerogear:aerogear-android-authz:3.0.0'
    compile 'org.jboss.aerogear:aerogear-android-push:3.0.1'

Also, feel free to take our cookbook samples for a spin!

How to get involved

Feel free to join us at #aerogear on IRC, follow us @aerogears on Twitter, or also join our aerogear-dev and aerogear-users mailing lists. For more details please check out our Community Page.

Posted in Blogroll

Eventually Crappy People Show Up, and I’m Flattered

ShopToTrot has been out for a little more than a year now. We've gotten enough users for me to say that there is enough interest to keep going, and that feels really, really killer. Like anything that attracts people online, some of the people that come through can be, well, Crappy. There, I said it. By Crappy I mean that they have no intention of using the system as it was intended. They are there just to cause trouble.

Well, ShopToTrot has received enough attention that we've gotten our first Crappy user, who hung out just to put junk ads into the system. This one did have a sense of humor though. He or she at least mocked us a bit by putting in a photo of their "horse" (seen above) that mirrored the stock photo we use if you don't put any pictures of your horse in (seen below).

For a moment, I was highly irritated. But that shortly gave way to flattery. Someone cared enough to come into our system and stink it up a bit. My point to all of this is simply, enjoy the Crappy people. They can't help themselves, and truly probably need a hug.

More to the point, they are a signal that you are becoming successful. And... if it weren't for them, you might not harden the system.
Posted in Blogroll
AJUG Meetup

Building and Deploying 12 Factor Apps in Scala and Java

June 20, 2017

The twelve-factor app is a modern methodology for building software-as-a-service apps that:

  • Use declarative formats for setup automation, to minimise time and cost for new developers joining the project.
  • Have a clean contract with the underlying operating system, offering maximum portability between execution environments.
  • Are suitable for deployment on modern cloud platforms, obviating the need for servers and systems administration.
  • Minimise divergence between development and production, enabling continuous deployment for maximum agility.
  • And can scale up without significant changes to tooling, architecture, or development practices.

We will build a RESTful web service in Java and deploy the app to Cloud Foundry. We will go over how to build a cloud manifest, how to keep our database credentials and application configuration outside of our code by using user-provided services, and what it takes to build a 12 Factor application in the cloud. This presentation will be heavy on code and light on slides!

Holiday Inn Atlanta-Perimeter/Dunwoody, 4386 Chamblee Dunwoody Road, Atlanta, GA
