Unexpected Changes Are Common For Most IT Teams

Nginx Raided

When is it time to consider your next infrastructure change? Sometimes, you have a chance to plan for such changes. Other times, the imperative of such a change may be thrust upon you. While never welcome, unexpected changes are a normal part of the IT experience. But how many of us have had to deal with intellectual property issues that resulted in a police raid?

Last month, we learned that Russian authorities raided the Moscow offices of Nginx. Apparently, another Russian company (the Rambler Group) claims that the intellectual property behind nginx belongs to it. So while hundreds of thousands of sites have been using nginx under the apparent misconception that it was unencumbered open source, the Russian police have played a trump card on all of us. Whether the code belongs to Rambler or to Nginx/F5 is unclear. But one thing is altogether clear and inescapable: the long-term future of nginx is now in jeopardy.

The Search For Alternatives

Whenever I’m confronted with any kind of unexpected change, I start to perform my normal analysis: identification of the problem, validation of the inherency / causality of the problem, and an assessment of all of the alternatives. In this case, the political instability of another country has resulted in a dramatically increased risk to the core infrastructure of many of our clients. The long-term consequences of these risks could be dramatic. Fortunately, a number of alternatives are available.

First, you could always stand pat and wait to see what happens. After all, it costs little (or nothing) to keep the code running. And this raid may just be a tempest in a teacup. The code is still available. And people could certainly fork the code and extend it from an established baseline. Unfortunately, this alternative probably won’t address the core problem: IP litigation can take a long time to reach a final outcome. And international litigation can take even longer. So the probability that the current code base (and any derivatives) will remain in sustained jeopardy is relatively high. And while people are fighting over ownership, very few new developers will stake their reputation on a shaky foundation. So the safe bet is that the nginx code base will remain static for the foreseeable future.

Second, you could build your own solution. You and your team could take it upon yourselves to construct a purpose-built reverse proxy. Such an effort would result in a solution that meets all of your organization’s needs. Of course, it can be an unnecessarily expensive venture that might (or might not) deliver a solution when you need one. And if you need a solution right now, then building your own solution is probably out of the question.

Nevertheless, you could always speed up the process by hiring an “expert” to code a solution to your specific needs. Again, this will take time. Realistically, building a custom solution is only necessary if you want to maintain a competitive advantage over other people and organizations. So if you need something that is generic and that already exists in the market, then building it makes little (or no) sense.

Third, you could survey the field and determine if an alternative already exists. In the case of reverse proxies, there are several alternatives that you could consider. And the most notable of these alternatives is the Traefik (pronounced “traffic”) reverse proxy.

Like nginx, traefik can be (and usually is) implemented as a micro-service. It is available on GitHub, and it can be downloaded (and run) from Docker Hub (https://hub.docker.com). We’ve been eyeing traefik for quite some time now. It has been gaining serious traction for both personal and commercial use. Consequently, traefik has been on our roadmap as a possible future path.
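To give a sense of how lightweight that micro-service style is, here is a minimal docker-compose sketch of Traefik (v2 syntax) routing to a single demo container. The image tags, hostname, and service names are illustrative assumptions, not details from our prototypes:

```yaml
version: "3"
services:
  traefik:
    image: traefik:v2.1
    command:
      - "--providers.docker=true"        # discover services via container labels
      - "--entrypoints.web.address=:80"  # listen for plain HTTP on port 80
    ports:
      - "80:80"
    volumes:
      # Traefik watches the Docker socket to find routable containers
      - /var/run/docker.sock:/var/run/docker.sock:ro
  whoami:
    image: containous/whoami             # tiny demo service that echoes request info
    labels:
      # hypothetical hostname; Traefik routes matching requests to this container
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
```

The key design point is that the proxy configures itself from labels on the containers it fronts, so adding a new service is just a matter of starting a labeled container.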

What We’re Building

Once the news broke concerning the raid on the Moscow offices of Nginx, we decided to build some prototypes using traefik. Like many other groups, we were using nginx. And like many other groups, we wanted to get ahead of the wave and start our own migration to traefik. So over the past few days, we’ve worked with the code and put together a few prototypes for its use.

Our first prototype is a home entertainment system. We mashed together Plex, Let’s Encrypt, MariaDB, and a few other containers to build a nifty little home entertainment complex. We also built a variation using Jellyfin, for anyone who doesn’t want to support the closed-source Plex code base.

While that prototype was fun, we only learned so much from its assembly. So we decided to build a Nextcloud-based document repository. This document server uses Traefik, Nextcloud, PostgreSQL, Redis, and a bitwarden_rs instance. The result is something that we are labeling the LoboStrategies Document repository (i.e., the LSD repo). And yes, we do think that it is trippy.
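For the curious, a stripped-down compose sketch shows how the core of such a stack might be wired together. The passwords, database names, and hostname are placeholders, and the real repository also includes Traefik, TLS, and bitwarden_rs:

```yaml
version: "3"
services:
  db:
    image: postgres:12
    environment:
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: changeme        # placeholder; use a secret in practice
  redis:
    image: redis:5                       # caching and file locking for Nextcloud
  nextcloud:
    image: nextcloud
    environment:
      POSTGRES_HOST: db                  # service name doubles as hostname
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: changeme
      REDIS_HOST: redis
    labels:
      # hypothetical hostname for the Traefik router in front of this stack
      - "traefik.http.routers.docs.rule=Host(`docs.example.com`)"
```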

Bottom Line

Change is fun. And when you are planning the changes, you can mix vision and fun into something marvelous. But sometimes, you are forced to respond to unexpected changes. In our case, we knew that a micro-services application router was needed. And we always knew that traefik could very well be the future code base for some of our products/designs. But when Rambler Group (and the Moscow police) threatened nginx, we were already a few steps ahead. So we simply accelerated some of our plans.

The key takeaway for us was that we had already put together a strategy. So we simply needed to build the tactical plan that implemented our vision. Because we had a long-range strategy, we were able to be far more nimble when the storm clouds came upon us.

Wire-to-Wire Technology Adoption Isn’t The Only Option

The Winner Surges At The Right Time

The annual “Run for the Roses” horse race has been held since 1875. In that time, there have been only 22 wire-to-wire winners. Indeed, simple statistics favor the jockey and horse who can seize the lead at the right moment. For the strongest horses, that moment may be the start. But for most horses, the jockey will launch his steed forward when it best suits the horse and its racing profile. The same simple (yet complicated) approach applies to technology adoption.

Docker And Early Technology Adoption

Five years ago, Docker exploded onto the IT scene. Originally, Docker was adopted almost exclusively by tech-savvy companies. And some of these early adopters have taken keen advantage of their foresight. But like horses that leap too soon, many companies have flashed into existence – and then been extinguished by an inability to deliver on their promised value.

Docker adoption has moved from large enterprises to the boutique service industry.

Now that Docker is over five years old, how many companies are adopting it? According to some surveys, Docker use in the marketplace is substantially over 25%. And I would be willing to bet that if you include businesses merely experimenting with Docker, then the number is probably more than 40%. But when you consider that five years is a normal budget/planning horizon, you should expect this number to do nothing but increase in the next few years. One thing is certain: the majority of applications in use within businesses are not yet containerized.

So there is still time to take advantage of Docker (and containers). If you haven’t yet jumped on board, then it’s time to get into the water. And if you are already invested in containers, then it’s time to double-down and accelerate your investment. [By the way, this is the “stay competitive” strategy. The only way to truly leap ahead is to use Docker (and other containers) in unique and innovative ways.]

Technology Adoption At Home

Adoption of containers at home is still nascent. Yes, there have been a few notable exceptions. Specifically, one of the very best uses of containers is the Hass.io infrastructure that hosts Home Assistant on a Raspberry Pi. Now that the new Raspberry Pi 4 is generally available, it is incredibly simple (and affordably cheap) to learn about – and economically exploit – containers at home.
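If you want to try this without the full Hass.io infrastructure, a plain compose sketch for running Home Assistant in a container looks something like the following. The config path is a placeholder you would point at your own directory:

```yaml
version: "3"
services:
  homeassistant:
    image: homeassistant/home-assistant:stable
    volumes:
      - ./config:/config                  # Home Assistant keeps its configuration here
      - /etc/localtime:/etc/localtime:ro  # keep the container clock in sync with the host
    network_mode: host                    # host networking helps with device discovery
    restart: unless-stopped
```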

My Personal Experiences
Containers can and should be used at home.

I’ve been using Docker at home for over a year. And now that I’ve switched all of my traditional computing platforms from Windows to Linux, I’m using Docker (and other containers) for nearly all of my personal and professional platforms. This past week, I finally cut over to Docker for all of my web-based systems. My new Docker infrastructure includes numerous images (and containers). I have a few containers for data collection (e.g., Glances, Portainer). I have moved my personal entertainment systems to Docker containers (e.g., Plex, Tautulli). And I’ve put some infrastructure in place for future containers (e.g., Traefik, Watchtower). In the upcoming months, I’ll be moving my entire TICK stack into Docker containers.
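As an example of what that “plumbing” layer can look like, here is a compose sketch of two of the management containers mentioned above: Portainer for a web UI over the Docker host, and Watchtower for automatic image updates. The port mapping and image tags are illustrative:

```yaml
version: "3"
services:
  portainer:
    image: portainer/portainer
    ports:
      - "9000:9000"                      # Portainer web UI
    volumes:
      # Portainer manages the local Docker engine through its socket
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
  watchtower:
    image: containrrr/watchtower         # polls for new images and restarts containers
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
```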

Bottom Line

If you haven’t started to exploit containers, then it’s not too late. Your employer is probably using containers already. But they will be moving even more swiftly towards widespread adoption. Therefore, it’s not too late to jump onto the commercial bandwagon. And if you want to do something really new and innovative, I’d suggest using containers at home. You’ll learn a lot. And you may find a niche where you could build a highly scalable appliance that exploits highly containerized building blocks.

Chick-Fil-A…Runs Kubernetes…At Every Store


How many of you thought that Chick-fil-A would have a tech blog? And how many of you thought that they would be clustering edge nodes at every store? When I read this article, I was surprised – and quite excited.

The basic use case is that every Chick-fil-A store needs to run certain basic management apps. These apps run at the edge but are connected to the central office. These apps include network and IT management stuff. But they also include some of the “mundane” back-office apps that keep a company going.

Routine stuff, right? But in the Chick-fil-A case, these apps/systems need to be remote and resilient. The hardware must be installed and maintained by non-technical (or semi-technical) employees (and/or contractors). If a node fails, the recovery must be as simple as unplugging the failed device and plugging in a replacement device. Similarly, the node enrollment, software distribution, and system recovery capabilities have to be automated – and flawless.

Here is where containers and Kubernetes enter the picture.

The secret to Chick-fil-A’s success is the recipe that they use to assemble all of the parts into a yummy solution. The servers (i.e., Intel NUC devices) power up, download the relevant software, and join the local cluster. The most exciting part of this solution is its dependence upon commodity components and open source software to build a resilient platform for the company’s “secret sauce” (i.e., their proprietary apps).

The next time you go into a Chick-fil-A, remember that they are using leading-edge tech to ensure that you get the sandwich that you so desperately want to eat.
