Blog

Supporting C in a build system

In a previous post, I showed that the standard build rules for C code are unreliable. Let me describe two ways to do better.

In the interest of brevity, I will describe the build rules using a toy build engine called Blcache (short for "build cache"). I initially tried to write this post using standard tools like Make, Ninja, Bazel, and Nix, but they all have one limitation or another that distracts from the main point of this post.

Example problem

Here is an example problem this post will work against. There is a test.c file, and it includes two separate header files:


// File test.c
#include <stdio.h>
#include "syslimits.h"
#include "messages.h"

int main() {
    printf("%s: %d\n", MSG_LIMIT_THREADS, LIMIT_THREADS);
}

// File localhdrs/messages.h
#define MSG_LIMIT_THREADS "Limit on threads:"

// File hdrs/syslimits.h
#define LIMIT_THREADS 10

After compiling this code, a third header file is added as follows:


// File localhdrs/syslimits.h
#define LIMIT_THREADS 500

The challenge is for the addition of this header file to trigger a rebuild, while still making the build as incremental as possible.

Method 1: Compile entire components

The simplest way to get correct C compiles is to compile entire components at a time, rather than to set up build rules to compile individual C files. I've posted before on this strategy for Java, and it applies equally well to C.

This approach seems to be unusual, but my current feeling is that it should work well in practice. It seems to me that when you are actively working on a given component, you should almost always use an IDE or other specialized tool for compiling that component. The build system therefore need not concern itself with fine-grained incremental rebuilds of individual C files. Rebuilds of whole components--executables, libraries, and shared libraries--should be plenty, and even when using a system like Gyp, there are advantages to having the low-level build graph be simple enough to read through and debug by hand.

Using such an approach, you would set up a single build rule that goes all the way from C and H files to the output. Here it is in JSON syntax, using Blcache:


{
    "environ": [
        "PATH"
    ],
    "rules": [
        {
            "commands": [
                "gcc -Ilocalhdrs -Ihdrs -o test test.c"
            ],
            "inputs": [
                "test.c",
                "localhdrs",
                "hdrs"
            ],
            "name": "c/test",
            "outputs": [
                "test"
            ]
        }
    ]
}

The "environ" part of this build file declares which environment variables are passed through to the underlying commands. In this case, only PATH is passed through.

There is just one build rule, and it's named c/test in this example. The inputs include the one C file (test.c), as well as two entire directories of header files (localhdrs and hdrs). The build command for this rule is very simple: it invokes gcc with all of the supplied input files, and has it build the final executable directly.

With the build rules set up like this, any change to any of the declared inputs will cause a rebuild to happen. For example, here is what happens in an initial build of the tool:


$ blcache c/test
Started building c/test.
Output from building c/test:
gcc -Ilocalhdrs -Ihdrs -o test test.c

$ ./target/c/test
Limit on threads: 10

After adding syslimits.h to the localhdrs directory, the entire component gets rebuilt, because the localhdrs input is considered to have changed:


$ blcache c/test
Started building c/test.
Output from building c/test:
gcc -Ilocalhdrs -Ihdrs -o test test.c

$ ./target/c/test
Limit on threads: 500

As a weakness of this approach, though, any change to any C file or any header file will trigger a rebuild of the entire component.

Method 2: Preprocess as a separate build step

Reasonable people disagree about how fine-grained the build rules for C should be, so let me describe the fine-grained version as well. This version can rebuild more incrementally in certain scenarios, but that benefit comes at the expense of a substantially more complicated build graph. Then again, most developers will never look at the build graph directly, so there is some argument for increasing the complexity here to improve overall productivity.

The key idea with the finer-grained dependencies is to include a separate build step for preprocessing. Here's a build file to show how it can be done:


{
    "environ": [
        "PATH"
    ],
    "rules": [
        {
            "commands": [
                "gcc -c -o test.o target/c/preproc/test.i"
            ],
            "inputs": [
                "c/preproc/test:test.i"
            ],
            "name": "c/object/test",
            "outputs": [
                "test.o"
            ]
        },

        {
            "commands": [
                "gcc -Ilocalhdrs -Ihdrs -E -o test.i test.c"
            ],
            "inputs": [
                "test.c",
                "localhdrs",
                "hdrs"
            ],
            "name": "c/preproc/test",
            "outputs": [
                "test.i"
            ]
        },

        {
            "commands": [
                "gcc -o test target/c/object/test.o"
            ],
            "inputs": [
                "c/object/test:test.o"
            ],
            "name": "c/test",
            "outputs": [
                "test"
            ]
        }
    ]
}

This file has three rules in it that chain together to produce the final output file. When you build the test executable for the first time, all three rules will be executed:


$ blcache c/test
Started building c/preproc/test.
Output from building c/preproc/test:
gcc -Ilocalhdrs -Ihdrs -E -o test.i test.c

Started building c/object/test.
Output from building c/object/test:
gcc -c -o test.o target/c/preproc/test.i

Started building c/test.
Output from building c/test:
gcc -o test target/c/object/test.o

$ ./target/c/test
Limit on threads: 10

First, the test.c file is preprocessed, yielding test.i. Second, the test.i file is compiled to test.o. Finally, test.o is linked into the final test executable.

Adding the new syslimits.h file behaves as expected, causing the full chain of recompiles.


$ blcache c/test
Started building c/preproc/test.
Output from building c/preproc/test:
gcc -Ilocalhdrs -Ihdrs -E -o test.i test.c

Started building c/object/test.
Output from building c/object/test:
gcc -c -o test.o target/c/preproc/test.i

Started building c/test.
Output from building c/test:
gcc -o test target/c/object/test.o

$ target/c/test
Limit on threads: 500

Modifying an irrelevant header file, on the other hand, only causes the preprocessing step to run. Since preprocessing yields the same result as before, rebuilding stops at that point.


$ touch localhdrs/irrelevant.h
$ blcache c/test
Started building c/preproc/test.
Output from building c/preproc/test:
gcc -Ilocalhdrs -Ihdrs -E -o test.i test.c

Using cached results for c/object/test.
Using cached results for c/test.

It's not shown in this example, but since each C file is compiled individually, a change to a C file will only trigger a rebuild of that one file. Thus, the technique here is fine-grained in two different ways. First, changes to one C file only trigger a recompile of that one file. Second, changes to the H files only trigger preprocessing of all C files, and then only compilation of those C files that turn out to be affected by the H files that were changed.

By the way, there's a trick here that generalizes to a variety of cached computations. If you want to add a cache for a complicated operation like a C compile, then don't try to have the operation itself be directly incremental. It's too error prone. Instead, add a fast pre-processing step that accumulates all of the relevant inputs, and introduce the caching after that pre-processing step. In the case of this example, the fast pre-processing step is, well, the actual C preprocessor.
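To make the shape of that trick concrete, here is a minimal, hypothetical sketch in Java (the names are mine, not Blcache's): a cheap canonicalization step plays the role of the C preprocessor, and the expensive step is cached on a hash of the canonical form, so it reruns only when that form actually changes.


import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical sketch, not Blcache code: cache an expensive step behind a
// cheap canonicalization step that gathers every relevant input.
final class CanonicalizingCache<I, O> {

    private final Function<I, String> canonicalize;   // cheap, e.g. "run the preprocessor"
    private final Function<String, O> expensiveStep;  // costly, e.g. "compile the preprocessed text"
    private final Map<String, O> cache = new ConcurrentHashMap<>();

    CanonicalizingCache(Function<I, String> canonicalize, Function<String, O> expensiveStep) {
        this.canonicalize = canonicalize;
        this.expensiveStep = expensiveStep;
    }

    O get(I input) {
        // The cache key depends only on the canonical form, so anything that would
        // change the expensive step's result must first change the canonical form.
        String canonical = canonicalize.apply(input);
        return cache.computeIfAbsent(sha256(canonical), key -> expensiveStep.apply(canonical));
    }

    private static String sha256(String text) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(text.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}

In the C case, canonicalize would run the preprocessor and expensiveStep would run the real compile; the important property is that every input the expensive step cares about flows through the canonical form first.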

Coda

Before realizing the problem with C compilation, I used C as an example of why you might want to bend the rules a little bit on the most strict and simplistic version of a build cache. Now, however, it seems to me that you find the best set of build rules by strictly adhering to a build-cache discipline. I'm sorry, build cache. I should never have doubted you.

Posted in Blogroll

Standard build rules for C are unreliable

The standard way of integrating C into a build system is to use automatic dependencies generated from the compiler. Gcc and Clang can emit a list of the header files they read if you run them with the -M option. Visual Studio can do it as well, using the /showIncludes option. What I will call the "standard approach" in this post is to use the dependencies the user explicitly declared, and then to augment them with automatic dependencies generated by options like -M or /showIncludes.

Until a few years ago, I just took this approach as received wisdom and didn't think further about it. It's a neat trick, and it works correctly in the most obvious scenarios. Unfortunately, I have learned that the technique is not completely reliable. Let me share the problem, because I figure that other people will be interested as well, especially anyone else who ends up responsible for setting up a build system.

The root problem with the standard approach is that sometimes a C compile depends on the absence of a file. Such a dependency cannot be represented, and indeed goes unnoticed, in the standard approach to automatic dependencies. The standard approach involves an "automatic dependency list", which is a file listing out the automatically determined dependencies for a given C file. By its nature, a list of files only includes files that exist. If you change the status of a given file from not existing to existing, the standard approach will overlook the change and skip a rebuild that should have happened.

To look at it another way, the job of an incremental build system is to skip a compile if running it again would produce the same results. Take a moment to consider what a compiler does as it runs. It does a number of in-memory operations such as AST walks, and it does a number of IO operations, including reading files into memory. Among those IO operations are things like "list a directory" and "check if a file exists". If you want to prove that a compiler is going to do the same thing on a second run as it did on the first, then you want to prove that those IO operations are going to do the same thing on the second run. That means all of the IO operations, though, not just the ones that read a file into memory.

Such a situation may seem exotic. At least one prominent source has declared that the standard approach is "correct" up to changes in the build command, which suggests to me that the author did not consider this scenario at all. It's not just a theoretical problem, though. Let me show a concrete example of how it can arise in practice.

Suppose you are compiling the following collection of files, including a single C file and two H files:


// File test.c
#include <stdio.h>
#include "syslimits.h"
#include "messages.h"

int main() {
    printf("%s: %d\n", MSG_LIMIT_THREADS, LIMIT_THREADS);
}

// File localhdrs/messages.h
#define MSG_LIMIT_THREADS "Limit on threads:"

// File hdrs/syslimits.h
#define LIMIT_THREADS 10

Using automatic dependencies, you set up a Makefile that looks like this:

CFLAGS=-Ilocalhdrs -Ihdrs

test.o test.d : test.c
	gcc $(CFLAGS) -M test.c > test.d
	gcc $(CFLAGS) -c test.c

test: test.o
	gcc -o test test.o

-include test.d

You compile it and everything looks good:


$ make test
gcc -Ilocalhdrs -Ihdrs -M test.c > test.d
gcc -Ilocalhdrs -Ihdrs -c test.c
gcc -o test test.o
$ ./test
Limit on threads: 10

Moreover, if you change any of the input files, including either of the H files, then invoking make test will trigger a rebuild as desired.

$ touch localhdrs/messages.h
$ make test
gcc -Ilocalhdrs -Ihdrs -M test.c > test.d
gcc -Ilocalhdrs -Ihdrs -c test.c
gcc -o test test.o

What doesn't work so well is creating a new version of syslimits.h that shadows the existing one. Suppose you add a syslimits.h file to localhdrs, shadowing the default one in hdrs:


// File localhdrs/syslimits.h
#define LIMIT_THREADS 500

Make should now recompile the executable, but it doesn't:


$ make test
make: 'test' is up to date.
$ ./test
Limit on threads: 10

If you force a recompile, you can see that the behavior changed, so Make really should have recompiled it:


$ rm test.o
$ make test
gcc -Ilocalhdrs -Ihdrs -M test.c > test.d
gcc -Ilocalhdrs -Ihdrs -c test.c
gcc -o test test.o
$ ./test
Limit on threads: 500

It may seem picky to discuss such a tricky scenario as this one, with header files shadowing other header files. Imagine a developer in the above scenario, though. They are doing something tricky, yes, but it's a tricky thing that is fully supported by the C language. If this test executable is part of a larger build, the developer can be in for a really difficult debugging exercise, trying to understand why the built executable is not behaving consistently with the source code. I dare say, it is precisely such tricky situations where people rely the most on their tools behaving in an intuitive way.

I will describe how to set up better build rules for this scenario in a followup post.

Posted in Blogroll

Android N – Security with Self Signed Certificates

If you are a good developer you are securing your services with SSL encryption. Unless you have put in a lot of effort, though, local testing still uses the good old-fashioned self-signed certificate, and you just click through the warning window of shame.

This is great until you are writing a RESTful service to be consumed by something which isn’t a browser. If you are an Android developer you have probably come across blog posts (or the official Android docs) encouraging you to make your own Trust Manager to accept your certificate or, worse, disable certificate checking altogether! However, Android N has come to the rescue with new security configuration features.

Using Self Signed Certificates with Android N

To use a self-signed certificate you need to:

  1. Add a meta-data tag to your AndroidManifest.xml which points to a security configuration XML file
  2. Add the security configuration file to your XML resources directory
  3. Download your self-signed certificate into your project

Edit AndroidManifest.xml

I’ve added the following code to the application element of my project’s AndroidManifest.xml:

<meta-data android:name="android.security.net.config"
               android:resource="@xml/network_security_config" />

This code just informs Android that the configuration file is found in res/xml/network_security_config.xml.

Creating the Network Security Config

The full documentation for the network security configuration files covers a lot more than our self-signed certificate use case. It is well worth a read to understand what is being done.

Here is my XML file to load my certificate from the raw directory. I have it named server_aerogear_dev, but the file name is irrelevant. What matters is that the common name in the certificate file matches the domain name of the server. I am pretty sure that this also works with IP addresses, but I haven’t tested it.

<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <base-config>
        <trust-anchors>
            <certificates src="@raw/server_aerogear_dev"/>
        </trust-anchors>
    </base-config>
</network-security-config>

Downloading the certificate

You can download the certificate to the raw directory in your source using your web browser or using the command line.

cd app/src/main/res/raw;
echo -n | openssl s_client -connect server.aerogear.dev:8443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > server_aerogear_dev
// Credit to SO : http://serverfault.com/questions/139728/how-to-download-the-ssl-certificate-from-a-website

Replace the name of the server and the port with configuration appropriate to you.
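With the configuration in place, ordinary HTTPS calls to the development server should succeed without any custom TrustManager or SSLSocketFactory code. Here is a minimal, hypothetical sketch (the host and port are simply the ones from the command above; in a real app, run this off the main thread):

import java.io.InputStream;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

// Hypothetical sketch: the network security config supplies the trust anchor,
// so no custom TrustManager is needed for the self-signed certificate.
public final class PingSecureServer {

    public static int ping() throws Exception {
        URL url = new URL("https://server.aerogear.dev:8443/");
        HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
        try (InputStream body = connection.getInputStream()) {
            // Opening the stream forces the TLS handshake to complete.
            return connection.getResponseCode();
        } finally {
            connection.disconnect();
        }
    }
}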

Final Notes

This is a very simple example of a new feature from Android N. It may change or go out of date. However, it gives us a simple way to manage security, and it ALSO works within Android’s build flavor system. Take a look, and stay safe.

Posted in Blogroll

My Manager Hat vs. My Worker Hat

I mentioned a while back that one of the podcasts that I've been listening to is Simple Programmer, by John Sonmez. He gave some advice to a listener, and I've had it rolling around in my head since I heard it from Episode 120: Do I Have To Be An Entrepreneur To Be Successful. Just recently I've started to walk in that piece of advice.

I may get some of the particulars incorrect, but the overall gist will be in the spirit of the exchange.

John told the guy that to make progress in his extra goals, he needed to have the ability to wear two hats.  As I remember them, they were:
  1. A Manager Hat 
  2. A Worker Hat (I lovingly call this one the Doer or Executor Hat, so if you see me use those alternate forms you’ll know which personality I’m referring to).




The Manager Hat:

This is the hat that you wear when you're deciding what should be accomplished by your Worker. So effectively, what he was saying is that you need to take some time to plan out your week. Be deliberate about setting goals for what would get accomplished during that week so you can keep sight of them. I've chosen Sunday night at 9:30 for this hat to adorn my head.




The Worker Hat:

The other hat he mentioned you needed to pick up is a Worker Hat (or a Doer Hat). This is the hat you put on once the Manager Hat comes off. You wear this hat for the rest of the week. Once this hat comes on you simply execute the plan unquestioningly. You don’t deviate from it, you just simply do what your Manager has planned for your Doer.

Once you get through the week, wash, rinse, and repeat for the following week.



In Practice (as I have experienced it thus far):

As I have tried to operate in this, I’ve discovered that my Manager always has unrealistic expectations of what my Worker can accomplish. Big surprise, right? That’s often the case in the real world, outside of this little dual-personality experiment that I’m doing. So what does that really mean, you may ask? Well, for me it’s meant that what my Manager has scheduled for a particular day might not be what my Worker has actually been able to accomplish that day. And the next day has nothing scheduled for the thing my Worker hasn’t finished yet.

Sooooo, my Worker had to negotiate with my Manager. The Worker had to tell the Manager, “Hey dude, I’m not done. I haven’t been slacking, it’s just that it’s taking me longer… or life has happened and inserted itself in the middle of your pre-scheduled routine, thus disrupting the schedule.” If you’re anything like me, you’re trying to accomplish these things in the midst of working at your day job, raising a family, and potentially working on these things with other people. Any one of those three things tends to get in the middle of your Worker accomplishing what your Manager has laid out. It happens, we’re human, and plans may need to change.

So how have my Manager and Worker adjusted expectations of what will get accomplished? Well, the Worker starts out the week working diligently on those things scheduled by the Manager. And if the Worker isn’t able to complete a day's tasks, then the unfinished tasks get pushed to the next day by the Worker, and the things that were scheduled for the next day get pushed accordingly through the week. The Manager has placed a bucket at the end of the week to catch anything that gets pushed completely out of the week, and things are pulled from that bucket for scheduling the next week.

This way, the Worker has some sense of continuity and doesn’t end the week disgruntled, having completed nothing while thrashing about trying to accomplish everything (when that might not be possible). It’s important for progress, and for the Worker’s feelings and sense of accomplishment, to complete things and get them behind him. If the schedule needs to be adjusted then so be it; the Manager needs to compromise.

This has made the things that I've been trying to accomplish more concrete. And by "more concrete" I mean that there are very real time slots for work to slip into, and expectations on myself that those things scheduled (and only those things scheduled) should be completed.

Life is going to happen. Your plans will get disrupted. You need to be able to adjust, and in the adjustment not lose focus on what your goals are.




In closing, figuring out the balance between your Manager Hat and your Worker Hat can be detective work for a while. But in the end you can find a happy medium and hopefully some cadence for how much you’ll be able to accomplish in your upcoming week.
Posted in Blogroll

Two little things I wish Java would add

When geeking out about language design, it's tempting to focus on the things that require learning something new to even understand how it works. SAM types require understanding target typing, and type members require understanding path-dependent types. Fun stuff.

Aside from these things that are fun to talk about over beers, I really wish Java would pick up a few things from Scala that are just plain more convenient.

Multi-line string literals

A great way to structure a unit test is to feed in a chunk of text, run some processing that you want to verify, convert the actual output to text, and then compare it against another chunk of text that's included in the test case. Compared to a dense string of assertEquals calls, this testing pattern tends to be much easier to read and understand at a glance. When such a test fails, you can read a text diff at a glance and possibly see multiple different kinds of failure that happened with the test, rather than stare into the thicket of assertEquals calls and try to deduce what is being tested by the particular one that failed.

The biggest weakness of this style is very mundane: it's hard to encode a multi-line chunk of text in Java. You have to choose between putting the text in an external file, or suffering through strings that have a lot of "\n" escapes in them. Both choices have problems, although the latter option could be mitigated with a little bit of IDE support.

In Scala, Python, and many other languages, you can write a multi-line string by opening it with triple quotes (""") rather than a single quote mark ("). It's a trivial feature that adds a lot to the day to day convenience of using the language.

As one trick to be aware of, it's important to help people out with indentation when using triple quotes. In Scala, I lobbied for the stripMargin approach to dealing with indentation, where you put a pipe on each continuation line, and anything up to the pipe is considered leading indentation and removed. In retrospect, I wish I had pushed for that to simply be the default behavior. If you need to insert a literal continuation character, you can always write it twice. Making people write stripMargin on almost every multi-line string is a form of boilerplate.
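For illustration, here is roughly what the workaround looks like in Java today, together with a hand-rolled stripMargin-style helper; the text content and the helper are invented for this sketch, not actual library code:


final class MultiLineStrings {

    // Today's Java option: escape every line break and concatenate.
    static final String EXPECTED =
            "Limit on threads: 500\n"
            + "Limit on memory: 1024\n"
            + "Limit on handles: 20\n";

    // Hand-rolled stripMargin-style helper: everything up to and including the
    // first '|' on each line is treated as margin and removed.
    static String stripMargin(String text) {
        StringBuilder result = new StringBuilder();
        String[] lines = text.split("\n", -1);
        for (int i = 0; i < lines.length; i++) {
            String line = lines[i];
            int pipe = line.indexOf('|');
            result.append(pipe >= 0 ? line.substring(pipe + 1) : line);
            if (i < lines.length - 1) {
                result.append('\n');
            }
        }
        return result.toString();
    }
}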

Case classes

There are philosophers who disagree, but I find them a little too philosophical for my taste. Sometimes you really want to write a class that has no hidden internal state. Sometimes it would be a breach of the API to retain any internal state, or to implement the public API as anything other than plain old final fields. Some motivating examples are: tiny types, data structure nodes such as links in a linked list, and data-transfer objects.

In such a case, it takes a tremendous amount of code in Java to implement all the odds and ends you want for such a class. You would really like all of the following, and they are all completely mechanical:
  • Constructors that copy their parameters to a series of final fields.
  • A toString() implementation.
  • Comparison operations: equals(), hashCode(), and compareTo(). Ideally also helpers such as isLessThan().
  • Copy constructors that make a new version by replacing just one of the fields with a new value.
The equals() method is particularly painful in Java because there is a lot of advice going around about how to write them that is not consistent. I've been dragged into multi-day debates on equals() methods where people cite things I published in the past to try and use against me; I'm pretty sure I meant what I said then and mean what I say now. Above all, though, I'd rather just have a reasonable equals() method and not spend time talking about it.
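For a sense of scale, here is a hedged sketch of the purely mechanical code that a small two-field value class demands in Java today (the class and field names are invented for this example):

import java.util.Objects;

// Hypothetical value class: every member below is boilerplate that a
// case-class-like feature could generate.
public final class ThreadLimit implements Comparable<ThreadLimit> {

    private final String message;
    private final int limit;

    public ThreadLimit(String message, int limit) {
        this.message = message;
        this.limit = limit;
    }

    public String message() { return message; }

    public int limit() { return limit; }

    // Copy helper that replaces a single field with a new value.
    public ThreadLimit withLimit(int newLimit) {
        return new ThreadLimit(message, newLimit);
    }

    @Override
    public boolean equals(Object other) {
        if (this == other) return true;
        if (!(other instanceof ThreadLimit)) return false;
        ThreadLimit that = (ThreadLimit) other;
        return limit == that.limit && Objects.equals(message, that.message);
    }

    @Override
    public int hashCode() {
        return Objects.hash(message, limit);
    }

    @Override
    public String toString() {
        return "ThreadLimit(" + message + ", " + limit + ")";
    }

    @Override
    public int compareTo(ThreadLimit other) {
        return Integer.compare(limit, other.limit);
    }
}

For comparison, the equivalent Scala case class is a single line: case class ThreadLimit(message: String, limit: Int).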
Posted in Blogroll

Spring Boot with JSPs using Undertow

This is a follow-up to my previous post Spring Boot with JSPs in Executable Jars.

Undertow is another alternative for using an embedded container with Spring Boot. You can find general information in the Spring Boot reference guide chapter Use Undertow instead of Tomcat. While I was working on updating the Spring Boot documentation regarding the JSP support for Tomcat, I noticed the following line in the reference guide for Spring Boot 1.3.3:

"Undertow does not support JSPs."

Being a good citizen, I dug a little deeper and discovered the Undertow JSP sample application by Chris Grieger. It turns out that Undertow does indeed have JSP support by using jastow, which is a Jasper fork for Undertow. The key was to adapt the Undertow JSP sample application for Spring Boot. Doing so was actually fairly straightforward. The actual Undertow configuration uses Spring Boot's EmbeddedServletContainerCustomizer:


final UndertowDeploymentInfoCustomizer customizer = new UndertowDeploymentInfoCustomizer() {

    @Override
    public void customize(DeploymentInfo deploymentInfo) {
        deploymentInfo.setClassLoader(JspDemoApplication.class.getClassLoader())
            .setContextPath("/")
            .setDeploymentName("servletContext.war")
            .setResourceManager(new DefaultResourceLoader(JspDemoApplication.class))
            .addServlet(JspServletBuilder.createServlet("Default Jsp Servlet", "*.jsp"));

        final HashMap<String, TagLibraryInfo> tagLibraryInfo = TldLocator.createTldInfos();

        JspServletBuilder.setupDeployment(deploymentInfo, new HashMap<String, JspPropertyGroup>(), tagLibraryInfo, new HackInstanceManager());
    }
};
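The customizer above still needs to be registered with the embedded container. A rough sketch of that wiring, assuming Spring Boot 1.3's UndertowEmbeddedServletContainerFactory API, might look like the following; the authoritative version is in the JspDemoApplication class mentioned just below:

import org.springframework.boot.context.embedded.ConfigurableEmbeddedServletContainer;
import org.springframework.boot.context.embedded.EmbeddedServletContainerCustomizer;
import org.springframework.boot.context.embedded.undertow.UndertowDeploymentInfoCustomizer;
import org.springframework.boot.context.embedded.undertow.UndertowEmbeddedServletContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch only: register the UndertowDeploymentInfoCustomizer shown above.
@Configuration
public class UndertowJspConfiguration {

    // Assumed to be the customizer built in the previous snippet.
    private UndertowDeploymentInfoCustomizer customizer;

    @Bean
    public EmbeddedServletContainerCustomizer embeddedServletContainerCustomizer() {
        return new EmbeddedServletContainerCustomizer() {
            @Override
            public void customize(ConfigurableEmbeddedServletContainer container) {
                if (container instanceof UndertowEmbeddedServletContainerFactory) {
                    ((UndertowEmbeddedServletContainerFactory) container)
                            .addDeploymentInfoCustomizers(customizer);
                }
            }
        };
    }
}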

The full source is available in the JspDemoApplication class. The main issue is more or less the retrieval and configuration of the tag libraries being used. The Undertow JSP sample provides the TldLocator class, which does the heavy lifting. For our example, I adapted that class so that it works in the context of Spring Boot. In Spring Boot we are dealing with über-Jars, meaning the resulting executable jar file will contain other jar files representing its dependencies.

Spring provides some nifty helpers to retrieve the needed Tag Library Descriptor (TLD) files. In TldLocator#createTldInfos I use a ResourcePatternResolver, specifically a PathMatchingResourcePatternResolver with a location pattern of classpath*:**/*.tld.


final URLClassLoader loader = (URLClassLoader) Thread.currentThread().getContextClassLoader();

final ResourcePatternResolver resolver = new PathMatchingResourcePatternResolver(loader);
final Resource[] resources;
final String locationPattern = "classpath*:**/*.tld";

try {
    resources = resolver.getResources(locationPattern);
}
catch (IOException e) {
    throw new IllegalStateException(String.format("Error while retrieving resources "
        + "for location pattern '%s'.", locationPattern), e);
}


Important

Don’t forget the asterisk right after classpath. The classpath*: prefix allows you to retrieve multiple class path resources with the same name. It will also retrieve resources across multiple jar files. This is an extremely useful feature. For more information please see the relevant JavaDocs for PathMatchingResourcePatternResolver.

Once we have the TLD resources, they will be parsed and ultimately used to create a collection of org.apache.jasper.deploy.TagLibraryInfo. With those at hand, we create a JSP deployment for Undertow using the DeploymentInfo and the TagLibraryInfo collection.


final HashMap<String, TagLibraryInfo> tagLibraryInfo = TldLocator.createTldInfos();
JspServletBuilder.setupDeployment(deploymentInfo, new HashMap<String, JspPropertyGroup>(), tagLibraryInfo, new HackInstanceManager());

And that’s it. Simply build and run the application and you should have a working JSP-based application.


$ mvn clean package
$ java -jar jsp-demo-undertow/target/jsp-demo-undertow-1.0.0-BUILD-SNAPSHOT.jar

In your console you should see the application start up.



Once started, open your browser and go to the following URL: http://localhost:8080/.


You can find the full source-code for this sample at https://github.com/ghillert/spring-boot-jsp-demo
Posted in Blogroll

When you inflate the balloon, can you do it in the form of a kitten?


Posted in Blogroll

Switching maven settings.xml by name

I thought I’d throw this out there for anyone that might find this convenient:

As Maven users know, local maven settings reside in $HOME/.m2/settings.xml

However, sometimes I use some settings/config for when I’m working on open source projects that assume Maven central + Sonatype signature defaults, and I use different ones when working on work (closed source) projects that assume our company Artifactory server and other permissions.

Ordinarily you’d modify the settings.xml file every time you wanted to switch settings – comment out one profile, enable another, comment out (or uncomment) the <mirror> setting, whatever.  This is a big pain for me during the day when I switch back and forth, so I wrote a bash script that allows me to easily switch between configs using symbolic links:

https://gist.github.com/lhazlewood/0ffbbb6d3d043c147710

To use this you would need to:

  1. Download the above script and make it executable:
    chmod u+x m2
    
  2. Ensure this script lives in a location in your $PATH. I keep mine in $HOME/bin, and that directory is in my $PATH.

  3. In your $HOME/.m2 directory, set up one or more maven settings.xml files, with the following naming convention:

    name1.settings.xml
    name2.settings.xml
    ...
    nameN.settings.xml
    

    where nameN is the name of the environment you wish to reflect.

    For example, in my $HOME/.m2 directory, I currently have 2 files:

    opensource.settings.xml
    work.settings.xml
    
  4. Create a symbolic link named settings.xml that points to the one you want to be in effect:
    cd $HOME/.m2
    ln -s work.settings.xml settings.xml
    

After this setup is done and the executable m2 file is in your path, you can do the following:

  1. See which maven settings are in effect:
    m2
    
  2. Change your settings.xml to point to opensource.settings.xml:
    m2 opensource
    
  3. Change settings.xml to point to work.settings.xml:
    m2 work
    
Posted in Blogroll

RHMAP and Google Accounts in Android

The Red Hat Mobile Application Platform (RHMAP) has strong authentication and authorization mechanisms baked into its Auth Policy system. Android has deep integration with Google’s ecosystem, which provides many easy mechanisms for authorizing services to act on a user’s behalf. Out of the box RHMAP allows for connecting to a Google account using OAuth and a web view, but a better user experience is using Google’s Android account picker. To enable this integration in RHMAP we have to use an MBaaS Auth Policy.

Prerequisites

This post should be informative to anybody who wishes to learn more about RHMAP; however, you will get the most benefit if you have access to a RHMAP instance and have read through the Getting Started documentation. If you do not have access to an instance of RHMAP, you may sign up for a free one at openshift.feedhenry.com.

Additionally you will need a Google account and Android emulator or device with Google’s APIs set up.

Demo

You can view an example of this integration in my FehBot video. The Android portion of this post will refer to the code in the application.

Creating an MBaaS Auth Policy

Create a blank MBaaS Service

Select “Services & APIs” from the top navigation. Click “Provision MBaaS Services/API”

Select “Choose” next to the item “New mBaaS Service”.

Name the service, click “Next”, ensure you are using the “Development” environment, and finally click “Deploy”. The service should deploy and you should have a green bar.

You are now ready to set up the Auth Policy.

Setup the Auth Policy

Select “Admin” from the top navigation and then “Auth Policies” from the 6 boxes which appear.

Click “Create” on the next screen to begin setting up an Auth Policy.

Name the Policy and select “MBaaS Service” as the “Type” under “Authentication”. From the “Service” drop-down select the service you created in the previous step. For “Endpoint” our MBaaS service will use “/auth/init”. Finally, select the value “Development” for your “Default Environment”.

Scroll down to the bottom of the page and click “Create Auth Policy”.

Implementing the MBaaS

I have created an MBaaS Service for us to use. It implements the server-side token validation that Google recommends in its documentation. You should be able to copy this project into your MBaaS’s source and redeploy it.

You may wish to limit which Cloud applications can access your MBaaS services in the “Service Settings” section of the MBaaS “Details” page.

/auth/init

The /auth/init route will consume tokens from the Android device and set up user accounts in RHMAP. The code should be easy-ish to follow. The most important part is that we return a userId value in the JSON, which we can use to look up the user’s session information.

/list/:session

The route /list/:session can be used by Cloud applications to fetch a user’s account information which is created and saved after a call to “/auth/init”.

Android Integration

In order to integrate with Android, please follow Google’s Guide for instructions on how to set up an Android account and get an IdToken from a sign-in. The FehBot Android client contains a working example.

Once you have an IdToken you can use FH.buildAuthRequest to perform the sign-in with RHMAP. For the three parameters, use the Auth Policy name you assigned during “Setup the Auth Policy”, the IdToken you retrieved from Google, and an empty string for the final parameter.
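A rough, hypothetical sketch of that call is shown below. It is not the actual FehBot code, and the exact signatures should be double-checked against the FeedHenry Android SDK documentation; the policy name is an assumed placeholder.

// Hypothetical sketch; verify signatures against the FeedHenry Android SDK docs.
FHAuthRequest authRequest = FH.buildAuthRequest(
        "GoogleAuthPolicy",  // the Auth Policy name you created above (placeholder)
        idToken,             // the IdToken retrieved from Google Sign-In
        "");                 // unused third parameter

authRequest.executeAsync(new FHActCallback() {
    @Override
    public void success(FHResponse response) {
        // The JSON response should contain the userId returned by /auth/init.
    }

    @Override
    public void fail(FHResponse response) {
        // Handle the failed sign-in.
    }
});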

Caveats

As per the RHMAP Authentication API, if you use this approach you will have to verify your sessions in your application yourself. The built-in verification methods will not work.

Conclusion

As you can see, it is easy to add a third party authentication mechanism to RHMAP. The principles in this post can be applied to many other authentication providers and client platforms.

Posted in Blogroll

Spring Boot with JSPs in Executable Jars


This is part one of a multi-part series of blog posts on using JSPs with Spring Boot. Please find the second part here: Spring Boot with JSPs using Undertow.

Introduction

As you may know, I am a co-organizer for the DevNexus conference, the second-largest enterprise Java conference in North America, held in Atlanta, GA. Together with Summers Pittman, I also maintain the Spring-based web application that drives the website, schedule, call-for-papers (CFP) process, and nowadays ticket registrations as well.


Goal

When we started planning for DevNexus 2016, I wanted to modernize the DevNexus application. Specifically, I planned to improve the underlying infrastructure of the app.
The goal was to move away from a stand-alone Tomcat-based servlet-container, which we had been using for the past couple of years. We endured several minutes of down-time whenever a new version of the app was deployed. Sometimes, the Tomcat instance or the server itself gave us grief. Furthermore, I longed for the ability to make blue/green deployments.
Therefore, the goal emerged to move the application over to a Platform as a Service (PaaS) offering, specifically Pivotal Web Services (PWS). I did not want to worry any longer about infrastructure issues, and blue/green deployments would be a breeze to accomplish using PWS.
In order to make this all happen, it became apparent that migrating the application to Spring Boot would help in that endeavor. Luckily the application was generally architected in a way that made the migration to Spring Boot fairly straightforward. The migration would also simplify things greatly overall, as we could take full advantage of Spring Boot’s defaults and remove some duplicate functionality that was already baked into Spring Boot.
One main sticking point, though, was the view technology being used. The DevNexus application has been using JavaServer Pages (JSP) for several years, and we have accumulated a non-trivial amount of them. Ultimately, the plan is to migrate the user interface (UI) to a Single Page Application (SPA), but for the 2016 conference (February) that intent was unattainable due to time constraints.
Therefore, the whole migration was initially a bit in peril. As of the current version of Spring Boot at the time of this blog post (1.3.3), the reference guide states:
JSPs should be avoided if possible, there are several known limitations when using them with embedded servlet containers.
The reference guide then goes on to provide a list of JSP limitations in chapter 27.3.5. Specifically it states that:
An executable jar will not work because of a hard coded file pattern in Tomcat.
What a bummer…


Solution

Just to recap my requirement: I want to serve JSPs out of my classpath so that I can create executable Jar files. Basically, eliminate the webapps folder.

Note

An interesting aspect of this is that one can compose web applications out of multiple JARs, each possibly containing JSPs that are automatically served.
Unfortunately, in my Maven-based project, putting the JSPs into e.g. src/main/resources/public or src/main/resources/static does not work. While reading the JSR-245 JavaServer™ Pages 2.1 Specification as well as an interesting blog post titled Serving Static Content with Servlet 3.0, it became apparent that I should also be able to store static resources in the META-INF/resources directory. Eureka, it worked!
So the simple thing to remember is to store your JSPs in a folder like src/main/resources/META-INF/resources/WEB-INF/jsp and you’re good to go (plus some minor configuration, covered below). To make things easy, let’s go over a little example project.


Sample Project


Spring Initializr

The best way to start a Spring Boot project is to head over to http://start.spring.io/. Using Spring Initializr underneath, the website lets you customize and create Spring Boot starter projects. For our requirement, we want to create a simple web project.

Create starter project using spring initializr

Selecting web enables Full-stack web development with Tomcat and Spring MVC. Now you can press the Generate Project button, which will start the download of a Zip file containing your customized project.

Note

Instead of following the individual steps, you can also download the fully configured sample project from GitHub. Just clone the Demo Project using:


$ git clone https://github.com/ghillert/spring-boot-jsp-demo.git
$ cd spring-boot-jsp-demo


Unzip the project to a directory of your choosing.


Add Maven Dependencies

In order to enable JSP support we need to add a few dependencies to our new project in pom.xml.


<dependency>
    <groupId>org.apache.tomcat.embed</groupId>
    <artifactId>tomcat-embed-jasper</artifactId>
</dependency>
<dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>jstl</artifactId>
</dependency>

Define the location of your JSP templates

Next we need to define the template prefix and suffix for our JSP files in application.properties. Thus add:


spring.mvc.view.prefix=/WEB-INF/jsp/
spring.mvc.view.suffix=.jsp




Important

Keep in mind that we will ultimately place the JSP templates under src/main/resources/META-INF/resources/WEB-INF/jsp/


Create a Spring Web Controller

Create a simple web controller:


package com.hillert.controller;

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class HelloWorldController {

    @RequestMapping("/")
    public String helloWorld(Model model) {
        model.addAttribute("russian", "Добрый день");
        return "hello-world";
    }

}

Create the JSP Template

Next, create the corresponding JSP file hello-world.jsp in the directory src/main/resources/META-INF/resources/WEB-INF/jsp/:


<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" %><%
response.setHeader("Cache-Control","no-cache");
response.setHeader("Pragma","no-cache");
response.setHeader("Expires","0");

%><%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>
<%@ taglib uri="http://java.sun.com/jsp/jstl/fmt" prefix="fmt" %>
<%@ taglib uri="http://java.sun.com/jsp/jstl/functions" prefix="fn" %>

<%@ taglib prefix="spring" uri="http://www.springframework.org/tags"%>
<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>

<c:set var="ctx" value="${pageContext['request'].contextPath}"/>
<html>
<body>
<h1>Hello World - ${russian}</h1>
</body>
</html>

Run the Sample Application

Now it is time to run the application - execute:

$ mvn clean package
$ java -jar target/jsp-demo-0.0.1-SNAPSHOT.jar


Conclusion

In this blog post I have shown how easy it is to use JSP templates with Spring Boot in executable Jars by simply putting your templates into src/main/resources/META-INF/resources/WEB-INF/jsp/.
While JSPs are often dismissed as legacy technology, I see several reasons why they stay relevant today (2016):
  • You need to migrate an application to Spring Boot but have an existing sizable investment in JSP templates that can’t be migrated immediately (my use-case)
  • While Single Page Applications (SPA) are all the rage, you may have use-cases where the traditional Spring Web MVC approach is still relevant
  • Even for SPA scenarios, you may still use dynamically-created wrapper pages (e.g. to inject data into the zero-payload HTML file)
  • JSPs are battle-tested in large-scale environments, e.g. at eBay
  • Even with alternative frameworks, you may run into issues
In any event, I hope this expands your toolbox when using Spring Boot. There is simply no reason why you shouldn’t enjoy the benefits of Spring Boot to the fullest extent permissible by law. Remember, Make JAR, not WAR.
Posted in Blogroll