Blog

The long race

Finishing a side project takes a tremendous amount of persistence, even if the tooling you use provides a lot for you. Keep going, just keep going, is what I have to routinely say to myself. It's hard though, and requires a lot of deep breaths on my part.

Posted in Blogroll


Goodbye, Spark Plug

Don't take me wrong. I know a dog is just a dog, and a pet is just a pet. There are people reading this who have cancer, and there are some who have outlived their human children. On the scale of life challenges, I've just had maybe a 3/10.

Still, I would like to write a few words. It's a way to organize my thoughts, and a way to say goodbye. I promise the next post will be about programming or law or identity or the web, but that all seems rather dry to me today.

As all you pet owners know, you get a little Pavlovian jolt each time you help out your little ward and they reward you for it. For example, when they greet you at the door and run in circles. Or, when they learn your gait well enough to jump on each leg in time and then jump to the other one before you pick that foot up. When they're cold, and you blanket them up, and they let out a deep sigh of contentment. When there's a burr in their foot, and they plaintively hold it out so that someone with thumbs can do something about it.

Over time it really adds up. You become attuned to where you expect them to be when you walk into a room. You gain a second sense about which ruffles in a couch blanket have a dog under them. You expect if you plink through a few measures of a waltz, that you'll see two brown eyes peek around the corner to see what you're doing. After 18 years of that and then nothing, you are left with a lot of little voids that add up to one great big void.

Some animals go and hide when they become deathly sick, but this one did not. In his last hours he came to me to fix it. Dog or no, it was crushing to see such hope and confusion, yet so little ability to do anything about it.

To anyone out there facing this kind of question, let me tell you that I feel no unease at all about the decision to forgo blood samples, IV fluids, antibiotics, and whatever else I didn't even ask about, to try to give him a little more time. I keep thinking: he was 18, kidneys don't get better, and he had multiple other problems anyway. Indeed, what I really wish I could go back in time on is delaying euthanasia too long. I even had my mind made up, and I went to a 24-hour vet to do it, but I backed down when confronted with a thorough vet who wanted to rehash the dog's entire medical history. I thought I could just take him to our regular vet the next day, but the sun never rose for him again. Yes, hindsight is 20/20, but I wish I had insisted.

Goodbye, Spark Plug. I hope we did well for you.


P.S. -- Mom, you are very optimistic to think we can get this plant to bloom every December. We'll give it a try!

Posted in Blogroll


Npm, Bower, CI and the Network

OK, I've accepted the reality that building single-page applications (SPAs) is better done using the tooling embraced by the JavaScript community, rather than building those apps purely with Gradle or Maven (and their respective plugins). The reason is that building a complete web UI is rather involved, and tools like Yeoman, Grunt (or Gulp), and Bower provide quite a bit of useful tooling while also having a much larger user base than the (less complete) options in the Java world.

So life was good. Everything builds. Of course, we still need to integrate the app with our backend, which provides the REST endpoints. Personally, I prefer that developers can build the entire stack at once. Also, can we assume that every (Java) developer has Node/npm installed?

Luckily, there are some plugins available for Maven and Gradle that provide useful wrappers around npm and Node. Thus, with some trial and error, you get a fairly portable build (Linux, Mac, and Windows) that not only executes the Grunt build but also downloads and installs Node and its dependencies: Grunt, Bower, etc.

You think you've finally arrived... things run mostly okay on the continuous integration (CI) server... hmm, wait. "Mostly" is causing some headaches. This is actually an area where I've had some frustrations lately. It looks like the Maven Central of the Node world is a tad more volatile than Maven Central itself.

In the Java world you have two layers of protection that make the CI server build process fairly resilient to internet hiccups. Heck, it would even build offline (assuming no library dependencies were changed). First, you have your local repository, of course, which in the case of Maven is typically ${user.home}/.m2/.

Second, any serious CI environment would also use a dedicated repository manager that serves as a proxy to the outside world, so that already-retrieved dependencies don't require hitting Maven Central or other third-party repositories.
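For Maven, routing all remote requests through such a proxy is a small settings.xml change. A minimal sketch (the proxy URL below is a placeholder for your in-house repository manager):

```xml
<!-- ~/.m2/settings.xml: send every remote request through the in-house proxy -->
<settings>
  <mirrors>
    <mirror>
      <id>internal-proxy</id>
      <!-- mirror all repositories, including Maven Central -->
      <mirrorOf>*</mirrorOf>
      <url>https://repo.example.com/maven-proxy</url>
    </mirror>
  </mirrors>
</settings>
```

With that in place, a dependency is fetched from the outside world at most once; every later build, on any CI agent, is served from the proxy's cache.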

With npm you suddenly realize you are back in the Wild West. Not only do you have to consider npm (managing tooling dependencies) but also Bower (managing JS/CSS dependencies).

npm does provide a cache, and you can see dependencies being cached in your home directory under ~/.npm/. But try disabling your network card... the eternal spinner is yours. It does not even seem to time out.

You can go offline - kind of - using "npm install --cache-min 9999999 --no-registry", but it does not seem to behave the way Maven/Gradle do: check the cache first, and only fetch remotely if the dependency does not exist locally. See also: https://github.com/npm/npm/issues/2568

Another issue I encountered is with using Protractor for E2E testing of my AngularJS application. You will usually use webdriver-manager to retrieve the necessary Selenium files/driver for your targeted browser, e.g. Chrome. Since I like to make the build as portable as possible, I run webdriver-manager as a post-install step in package.json, which requires direct network access:

"scripts": {
  "postinstall": "node_modules/protractor/bin/webdriver-manager update"
}

Bower, unfortunately, has its own caching approach, which is configured in .bowerrc, e.g.:

"storage": {
  "packages": ".bower_cache"
}

So what about using dedicated repository managers as proxy for NPM and Bower?

Artifactory provides support for NPM but not Bower. There is a feature request to support Bower in Artifactory, though. Nexus also has support for NPM. For Bower an open ticket exists.

Maybe people should start rallying behind WebJars more broadly ;-(



Posted in Blogroll


Entering the Tiny House Movement

The tiny house movement is real, and picking up quite a bit of steam.  And it’s awesome.

I, like most others, bought into the ‘American Dream’ of buying my own house when I was 26: a 2,500-square-foot home on a third of an acre – 4 bedrooms, 2.5 baths, master on main, fenced-in back yard, blah blah.  It was a great house, but I was single.  I didn’t need that much space.  I traveled a lot for work.  Also, some years later, I ended up moving across the country twice for two different career paths: once from Atlanta to New York City, and then, 4 years ago, from Atlanta to California.  Because of the uncertainty of my career path (and because I have no family of my own at the moment), owning a big house was just foolish in retrospect.  I ended up wasting a lot of money on mortgage payments and house upkeep that I’ll just never get back.

Living in Manhattan and now Silicon Valley, trying to buy a home in these environments is just ridiculous.  Homes that I wouldn’t pay $80k for anywhere else in the country regularly sell here for $650-$800k.  It is insane.

“But Les,” you might say, “just move! Anyone foolish enough to pay that much for a mediocre home is just stupid.  I’m living in Wyoming and my 3,000-square-foot house only cost me $180k!”

Unfortunately, such a reductionist view is not realistic for many like me.  I founded a Silicon Valley tech startup and there is just no way I can live away from my company (and note I said ‘startup’: I have to budget myself like almost everyone else).  If you’re a banker, the center of your world is Manhattan – the benefits, and sometimes the requirements, just demand that you be there.  For a software architect/founder like myself, the center of my universe is Silicon Valley.  And in my case, I must be here; I don’t have much of a choice.  So those who dismiss expensive locations as foolish should take a look at the world and understand that not everyone wants to live the way they are living, and many don’t have a realistic choice.  (Yes, I know, we still have the choice to run away and live cheaply in the wilderness, but that’s not a choice many of us would ever make.  I just like Indian delivery too much, among other conveniences!  OK, moving on…)

I sold my house last year (December 2013), and thankfully got out from underneath it after the housing economy started to recover.  But it was close – if I chose to sell two years earlier, I could have been hurt by a negative value on the house due to the steep market hit in 2008.  That whole experience left a bad taste in my mouth.

Because of all of this, I wanted to see if I could build my next house myself and not have a mortgage.  Basically build it as I go, and pay as I go, and not be a slave to the bank again.  Being a fairly handy guy (with a former cabinet shop in my old 2-car garage), I thought this was something I could research and realistically execute.  I was thinking that I’d just buy some land, and build a ‘normal’ house as I go.  And that still may be a reality some day.

However, in doing this research, I stumbled on this concept of tiny homes, because they are inexpensive to build, energy efficient, and easy to do yourself.  Then I started seeing a lot of websites dedicated to tiny house living: everything from DIY solar power setups to interior design, to construction techniques, tiny houses on wheels (to avoid building code restrictions), etc.  Also interesting, I was really impressed at how much it felt like a community – people helping other people learn and offering help, advice, and positive open discussion forums.  This added an appeal all its own.

Because of this, I decided that I would like to build my own sometime (hopefully soon).  I’ve spent a lot of time researching the latest construction codes and techniques, as it has been about 10 years since I’ve done any significant general construction projects myself.  All in all, I feel like I have plenty of information to get started.

So the first step: I started to design my own tiny house on wheels on a 5th wheel trailer frame with the following dimensions:

  • 8 1/2 feet wide (102 inches – max allowed by state/interstate roads without a Wide Load permit and commercial drivers license)
  • 13 feet, 6 inches tall (max allowed by interstate roads across the whole country, although most Western states allow 14 feet)
  • 45 feet long (36 foot long base deck + 9 foot long upper deck/platform above the 5th wheel hitch)

As of January 1st, 2014, the state of California allows 5th wheel trailers to be up to 48 feet long (as long as the vehicle pulling the trailer + the trailer do not exceed 65 feet).

Ok, those are the size restrictions to work within, without requiring a wide-load permit or commercial driver help to move it.  But what about weight?

The reality is that, while the above dimensions are within state RV size limits, traditional stick frame construction on a heavy duty equipment trailer that length will most certainly push the total weight above 10,000 pounds.  And that poses a challenge.

In California, a standard Class C driver’s license (what almost everyone gets after their 16th birthday) only permits the driver to pull a travel trailer with a maximum 10,000 pound GVWR (Gross Vehicle Weight Rating).  Note that this is the weight rating of the trailer – how much weight the trailer is capable of supporting – and not the actual weight of the trailer!  This means that you could have a 10,000 pound GVWR trailer and pull 10,000 pounds of cargo and you’re ok.  But if you have a 10,001 (or greater) GVWR and pull only 1 pound of cargo, you’ll be in violation.  So the GVWR is what matters for California and driver’s licenses and not weight (until, I think, you get to 26,000 pounds or more, but we don’t have to worry about that).  For 5th wheel trailers, a Class C driver’s license in California permits up to 15,000 pound GVWR (again, rating here, not the actual weight is what is important).

So, to be safe, I’m planning on getting a non-commercial Class A driver’s license, which can allow me to tow up to – I believe – 26,000 pounds.  Once constructed, I expect the entire house to be anywhere from 13,000 to 16,000 pounds, well within that upper threshold (and towable by a Ford F-450).  Then I won’t need to pay anyone to move it any time I get the desire to do so.
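Just for fun, the license thresholds described above can be written down as a little rule table (values as I understand them from my own reading of the California rules; not legal advice):

```python
# Toy summary of California (non-commercial) license classes by trailer
# GVWR - the trailer's weight *rating*, not its actual loaded weight.
def license_needed(trailer_type: str, gvwr_lbs: int) -> str:
    """Return which license class covers towing a given trailer."""
    if trailer_type == "travel" and gvwr_lbs <= 10_000:
        return "Class C"
    if trailer_type == "fifth_wheel" and gvwr_lbs <= 15_000:
        return "Class C"
    if gvwr_lbs <= 26_000:
        return "non-commercial Class A"
    return "commercial"
```

So a 21,000 or 24,000 pound GVWR fifth-wheel trailer lands squarely in non-commercial Class A territory, which is why that's the license I'm planning on.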

Anyway, over the coming weeks and months, if I can make the time at all (most likely on a weekend day), I’ll be posting my trailer and house designs (all made using the very awesome – and free – SketchUp Make software program).

Some design decisions to pique your interest:

  • 5th wheel trailer with three 7,000 or 8,000 pound drop-axles, allowing for a 21,000 or 24,000 pound GVWR respectively.  (drop-axle = lower trailer deck for maximum vertical space utilization).
  • Each axle will (hopefully) have Timbren STI Air Ride air suspension.  This will increase the trailer cost, but I want crack-free seamless walls (MgO board) and a bathroom with real tile and glass!  Air suspension gives about the softest ride possible, allowing one to install things in the tiny house that most others could not have, like full tile setups, glass panels, and other semi-fragile items.
  • MgO board interior and exterior walls (maybe in the form of SIPs, maybe not – If I can get a higher R-value w/ spray foam, I’ll likely use MgO board + spray foam instead of MgO SIP panels)
  • Tiled 3′ x 5′ shower with frameless tempered glass panel and door (tempered glass = no fear of cracking during road travel)
  • 4′ x 30″ x 30″ deep Japanese soaking tub (aka ‘ofuro’).
  • 3 ‘bedrooms’ (2 lofts + one downstairs)
  • 8′ wide galley kitchen
  • LOTS OF WINDOWS!!!  (natural light really opens up small spaces, and this is a must for me).
  • Living room w/ reclining leather sofa + movie theater wall
  • Plenty of closet space
  • Full-sized stacked washer and dryer set
  • Radiant floor heating
  • Air conditioning
Posted in Blogroll

AeroGear diffsync preview


Posted in Blogroll

Is this the right server?

It's nice to see someone else reach the following conclusion:

"For those familiar with SSH, you should realize that public key pinning is nearly identical to SSH's StrictHostKeyChecking option. SSH had it right the entire time, and the rest of the world is beginning to realize the virtues of directly identifying a host or service by its public key."

Verifying a TLS certificate via the CA hierarchy is better than nothing, but it's not really all that reassuring. Approximately, what it tells you is that there is a chain of certification leading back to one or more root authorities, which, for some reason we all try not to think about too much, are granted ultimate authority on the legitimacy of web sites. I say "approximately" because fancier TLS verifiers can and do incorporate additional information.

The root authorities are too numerous to really have faith in, and they have been compromised in the past. In general, they and their delegates have little incentive to be careful about what they are certifying, because the entities they certify are also their source of income.

You can get better reliability in key verification if you use information that is based on the interactions of the actual participants, rather than on any form of third-party security databases. Let me describe three examples of that.


Pin the key

For many applications, a remotely installed client needs to communicate only with a handful of servers back at a central site you control. In such a case, it works well to pin the public keys of those servers.

The page advocates embedding the public key directly in the application. This is an extremely reliable way of obtaining the correct key. You can embed the key in the app's binary as part of your build system, and then ship the whole bundle over the web, the app store, or however else you are transmitting it to the platform it will run on. Given such a high level of reliability, there is little benefit from pulling in the CA hierarchy.
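The comparison at the heart of pinning is tiny. A minimal sketch, assuming you already have the server's DER-encoded certificate in hand (hashing the whole certificate here stands in for hashing the SubjectPublicKeyInfo, which is what a production pinning scheme would more likely do):

```python
import hashlib
import hmac

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint (hex) of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def matches_pin(der_bytes: bytes, pinned_hex: str) -> bool:
    """Compare the presented certificate against the pin embedded
    in the application at build time."""
    return hmac.compare_digest(cert_fingerprint(der_bytes), pinned_hex)
```

In practice you would obtain the DER bytes from the live TLS handshake (for instance via Python's ssl module) and refuse to proceed when matches_pin returns False, regardless of what the CA chain says.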

As linked above, you can implement pinning today. It appears to be tricky manual work, though, rather than something that is built into the framework. As well, you don't get to ignore the CA hierarchy by doing this sort of thing. So long as you use standard SSL libraries, you still have to make sure that your key validates in the standard ways required by SSL.


Associate keys with links

The Y property deserves wider recognition, given how important hyperlinks are in today's world. Put simply, if someone gives you a hyperlink, and you follow that hyperlink, you want to reliably arrive at the same destination that the sender wanted you to get to. That is not what today's URLs give you.

The key to achieving this property is that whenever you transmit a URL, you also transmit a hash of the expected host key. There are many ways to do this, including the ones described at the above hyperlink (assuming you see the same site I am looking at as I write this!). Just to give a very simple example, it could be as simple as using URLs of the following form:


https://hash-ABC123.foo.bar/sub/dir/foo.html

This particular example is interesting for being backward compatible with software that doesn't know what the hash means.
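To make the idea concrete, here is a toy verifier for this hypothetical URL form. The "hash-" label convention and the prefix-match rule are my own illustration of the scheme sketched above, not any standard:

```python
import hashlib
from urllib.parse import urlparse

def pin_from_url(url: str):
    """Extract the expected key-hash label from a URL like
    https://hash-ABC123.foo.bar/... Returns None for a plain URL."""
    host = urlparse(url).hostname or ""
    label = host.split(".")[0]
    if label.startswith("hash-"):
        return label[len("hash-"):].lower()
    return None

def key_matches_link(url: str, server_key_der: bytes) -> bool:
    """True if the server's key hash starts with the pin in the link."""
    expected = pin_from_url(url)
    if expected is None:
        return False  # no pin embedded; fall back to other checks
    actual = hashlib.sha256(server_key_der).hexdigest()
    return actual.startswith(expected)
```

Because the pin lives in an ordinary subdomain label, legacy software resolves and loads the URL normally, while pin-aware software gains the extra check for free.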

I don't fully know why this problem is left languishing. Part of it is probably that people are resting too easy on the bad assumption that the CA hierarchy has us covered. There's a funny mental bias where if we know nothing about a subject, and we see smart people working on it, the more optimistic of us just assume that it works well. Another part of the answer is that the core protocols of the world-wide web are implemented in many disparate code bases; SSH benefits from having an authoritative version of both the client and the server, especially in its early days.

As things stand, you can implement "YURLs" for your own software, but they won't work as desired in standard web browsers. Even with custom software, they will only work among organizations that use the same YURL scheme. This approach looks workable to me, but it requires growing the protocols and adopting them in the major browsers.


Repeat visits

One last source of useful information is the user's own previous interactions with a given site. Whenever you visit a site, it's worth caching the key for future reference. If you visit the "same" site again but the key has changed, then you should be extremely suspicious. Either the previous site was wrong, or the new one is. You don't know which one is which, but you know something is wrong.
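A minimal sketch of such key memory, in the spirit of SSH's known_hosts file (the storage format and return values here are made up for illustration):

```python
import json

class KeyMemory:
    """Remember the key fingerprint previously seen for each host
    (trust on first use)."""

    def __init__(self, path):
        self.path = path
        try:
            with open(path) as f:
                self.known = json.load(f)
        except (OSError, ValueError):
            self.known = {}  # no history yet

    def check(self, host: str, fingerprint: str) -> str:
        """'new' on first sight, 'ok' on a match, 'MISMATCH' if the
        key has changed since the last visit."""
        seen = self.known.get(host)
        if seen is None:
            self.known[host] = fingerprint
            self._save()
            return "new"
        return "ok" if seen == fingerprint else "MISMATCH"

    def _save(self):
        with open(self.path, "w") as f:
            json.dump(self.known, f)
```

A "MISMATCH" result is exactly the something-is-wrong signal described above: you don't know whether the old key or the new one is the impostor, but you know not to proceed silently.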

Think how nice it would be if you try to log into your bank account, and the browser said, "This is a site you've never seen before. Proceed?"

You can get that already if you use pet names, which have been implemented as an experimental browser extension. It would be great if web browsers incorporated functionality like this, for example turning the URL bar and browser frame yellow if a site is new based on its certificate. Each browser can add this sort of functionality independently, as a matter of implementation quality.

In your own software, you can implement key memory using the same techniques as for key pinning, as described above.


Key rotation

Any real cryptography system needs to deal with key revocation and with upgrading to new keys. I have intentionally left those topics out to keep the discussion simple, but I do believe they can be worked into the above systems. It's important to have a way to sign an official certificate upgrade, so that browsers can correlate new certificates with old ones during a graceful phase-in period. It's also important to have some kind of channel for revoking a certificate, in case one has been compromised.

For web applications and for mobile phone applications, you can implement key rotation by forcing the application to upgrade itself. Include the new keys in the newly upgraded version.
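As a toy sketch of the phase-in idea: the old key endorses the new one, so clients that already trust the old key can accept the rotation. HMAC stands in here for what would be an asymmetric signature in a real system:

```python
import hashlib
import hmac

def endorse_new_key(old_key_secret: bytes, new_key: bytes) -> bytes:
    """The old key 'signs' the new key. Clients that trust the old
    key can verify this endorsement and adopt the new key."""
    return hmac.new(old_key_secret, new_key, hashlib.sha256).digest()

def accept_rotation(old_key_secret: bytes, new_key: bytes,
                    endorsement: bytes) -> bool:
    """Verify the endorsement before replacing the stored key."""
    return hmac.compare_digest(
        endorse_new_key(old_key_secret, new_key), endorsement)
```

During the phase-in window a server would present both keys plus the endorsement; once clients have updated their stored pins, the old key can be retired.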

Posted in Blogroll


AJUG Tweets

Follow @atlantajug on twitter.

AJUG Blog

AJUG Meetup

Introduction to developing for Android Wear

Dec 16 2014

Android Wear is Google’s entry into the consumer wearable electronics market. Wear is based on Android KitKat and is designed for displaying timely, interactive information from apps running on Android, as well as for running dedicated Android Wear apps. This talk will cover the development ecosystem, development and UI design patterns, and application development.

Location:


Holiday Inn Atlanta-Perimeter/Dunwoody

4386 Chamblee Dunwoody Road,
Atlanta, GA