Unexpected Changes Are Common For Most IT Teams

Nginx Raided

When is it time to consider your next infrastructure change? Sometimes, you have a chance to plan for such changes. Other times, the imperative of such a change may be thrust upon you. While never welcome, unexpected changes are a normal part of the IT experience. But how many of us have had to deal with intellectual property issues that resulted in a police raid?

Last month, we learned that Russian authorities raided the Moscow offices of Nginx. Apparently, another Russian company (the Rambler Group) claims that the intellectual property behind nginx belongs to it. So while hundreds of thousands of sites have been using nginx under the apparent assumption that the code was unencumbered open source, the Russian police have played a trump card on all of us. Whether the code belongs to Rambler or to Nginx/F5 is unclear. But one thing is clear and inescapable: the long-term future of nginx is now in jeopardy.

The Search For Alternatives

Whenever I’m confronted with any kind of unexpected change, I start to perform my normal analysis: identification of the problem, validation of the inherency / causality of the problem, and an assessment of all of the alternatives. In this case, the political instability of another country has resulted in a dramatically increased risk to the core infrastructure of many of our clients. The long-term consequences of these risks could be dramatic. Fortunately, a number of alternatives are available.

First, you could always stand pat and wait to see what happens. After all, it costs little (or nothing) to keep the code running. And this raid may just be a tempest in a teacup. The code is still available. And people could certainly fork the code and extend it from an established baseline. Unfortunately, this alternative probably won’t address the core problem: IP litigation can take a long time to reach a final outcome. And international litigation can take even longer. So the probability that the current code base (and any derivatives) could be in sustained jeopardy is relatively high. And while people are fighting over ownership, very few new developers will stake their reputation on a shaky foundation. So the safe bet is that the nginx code base will remain static for the foreseeable future.

Second, you could build your own solution. You and your team could take it upon yourselves to construct a purpose-built reverse proxy. Such an effort would result in a solution that meets all of your organization’s needs. Of course, it can be an unnecessarily expensive venture that might (or might not) deliver a solution when you need one. And if you need a solution right now, then building your own solution is probably out of the question.

Nevertheless, you could always speed up the process by hiring an “expert” to code a solution to your specific needs. Again, this will take time. Realistically, building a custom solution is only necessary if you want to maintain a competitive advantage over other people and organizations. So if you need something that is generic and that already exists in the market, then building it makes little (or no) sense.

Third, you could assay the field and determine if an alternative already exists. And in the case of reverse proxies, there are several alternatives that you could consider. The most notable of these alternatives is the traefik (pronounced “traffic”) reverse proxy.

Like nginx, traefik can be (and usually is) implemented as a micro-service. It is available on GitHub and it can be downloaded (and run) from Docker Hub (https://hub.docker.com). We’ve been eyeing traefik for quite some time now. It has been gaining some serious traction for both personal and commercial use. Consequently, traefik has been on our roadmap as a possible future path.
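
To make that concrete, here is a minimal sketch of what standing up traefik next to an existing container workload looks like. Treat the image tag, network name, and demo hostname as illustrative assumptions rather than a description of any real deployment:

```sh
# Minimal sketch: Traefik v2 watches the Docker socket and routes
# traffic to any container that opts in via labels.
docker network create proxy

docker run -d --name traefik \
  --network proxy \
  -p 80:80 -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  traefik:v2.1 \
    --api.insecure=true \
    --providers.docker=true \
    --providers.docker.exposedbydefault=false \
    --entrypoints.web.address=:80

# A demo service registers itself with the proxy through labels.
docker run -d --name whoami \
  --network proxy \
  --label 'traefik.enable=true' \
  --label 'traefik.http.routers.whoami.rule=Host(`whoami.example.com`)' \
  --label 'traefik.http.routers.whoami.entrypoints=web' \
  traefik/whoami
```

The appealing design choice is that routing rules live as labels on the application containers themselves, so the proxy reconfigures itself as containers come and go.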

What We’re Building

Once the news broke concerning the raid on the Moscow offices of Nginx, we decided to build some prototypes using traefik. Like many other groups, we were using nginx. And like many other groups, we wanted to get ahead of the wave and start our own migration to traefik. So over the past few days, we’ve worked with the code and put together a few prototypes for its use.

Our first prototype is a home entertainment complex. We mashed together Plex, LetsEncrypt, MariaDB, and a few other containers into a nifty little home entertainment stack. We also built a variation using Jellyfin – for anyone who doesn’t want to support the closed-source Plex code base.

While that prototype was fun, we only learned so much from its assembly. So we decided to build a Nextcloud-based document repository. This document server uses traefik, nextcloud, postgresql, redis, and a bitwarden_rs instance. The result is something that we are labeling the LoboStrategies Document repository (i.e., the LSD repo). And yes, we do think that it is trippy.
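
For flavor, here is a rough sketch of how the core of such a repository hangs together behind traefik. The image versions, credentials, and hostname are placeholders, and the LetsEncrypt, redis, and bitwarden_rs pieces are omitted for brevity:

```sh
# Rough sketch only: Nextcloud plus PostgreSQL on the same "proxy"
# network that traefik watches. Credentials, tags, and the hostname
# are placeholders, not our production values.
docker run -d --name nextcloud-db \
  --network proxy \
  -e POSTGRES_DB=nextcloud \
  -e POSTGRES_USER=nextcloud \
  -e POSTGRES_PASSWORD=changeme \
  -v nextcloud-db:/var/lib/postgresql/data \
  postgres:11

docker run -d --name nextcloud \
  --network proxy \
  -e POSTGRES_HOST=nextcloud-db \
  -e POSTGRES_DB=nextcloud \
  -e POSTGRES_USER=nextcloud \
  -e POSTGRES_PASSWORD=changeme \
  -v nextcloud-data:/var/www/html \
  --label 'traefik.enable=true' \
  --label 'traefik.http.routers.docs.rule=Host(`docs.example.com`)' \
  --label 'traefik.http.routers.docs.entrypoints=web' \
  nextcloud:17
```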

Bottom Line

Change is fun. And when you are planning the changes, you can mix vision and fun into something marvelous. But sometimes, you are forced to respond to unexpected changes. In our case, we knew that a micro-services application router was needed. And we always knew that traefik could very well be the future code base for some of our products/designs. But when Rambler Group (and the Moscow police) threatened nginx, we were already a few steps ahead. So we simply accelerated some of our plans.

The key takeaway for us was that we had already put together a strategy. So we simply needed to build the tactical plan that implemented our vision. Because we had a long-range strategy, we were able to be far more nimble when the storm clouds came upon us.

Wire-to-Wire Technology Adoption Isn’t The Only Option

The Winner Surges At The Right Time

The annual “Run For The Roses” horse race has been held since 1875. In that time, there have been only 22 wire-to-wire race leaders/winners. Indeed, simple statistics favor the jockey and horse who can seize the lead at the right moment. For the strongest horses, that may be the start. But for most horses, the jockey will launch his steed forward when it best suits the horse and its racing profile. This simple (and complicated) approach is also true for technology adoption.

Docker And Early Technology Adoption

Five years ago, Docker exploded onto the IT scene. Originally, Docker was being adopted exclusively by tech savvy companies. And some of these early adopters have taken keen advantage of their foresight. But like horses that leap too soon, many companies have already flashed into existence – and then been extinguished by an inability to deliver on their promised value.

Docker adoption has moved from large enterprises to the boutique service industry.

Now that Docker is over five years old, how many companies are adopting it? According to some surveys, Docker use in the marketplace is substantially over 25%. And I would be willing to bet that if you include businesses playing with Docker, then the number is probably more than 40%. But when you consider that five years is a normal budget/planning horizon, then you must expect that this number will do nothing but increase in the next few years. One thing is certain: the majority of applications in use within businesses are not yet containerized.

So there is still time to take advantage of Docker (and containers). If you haven’t yet jumped on board, then it’s time to get into the water. And if you are already invested in containers, then it’s time to double-down and accelerate your investment. [By the way, this is the “stay competitive” strategy. The only way to truly leap ahead is to use Docker (and other containers) in unique and innovative ways.]

Technology Adoption At Home

Adoption of containers at home is still nascent. Yes, there have been a few notable exceptions. Specifically, one of the very best uses of containers is the Hass.io infrastructure that can be used to host Home Assistant on a Raspberry Pi. Now that the new Raspberry Pi 4 is generally available, it is incredibly simple (and adorably cheap) to learn about – and economically exploit – containers at home.
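
If you want a sense of how low the barrier really is, here is a hedged sketch of bootstrapping plain Docker on a Raspberry Pi. (Hass.io itself uses an appliance-style installer; the snippet below shows the generic container route, and the paths and image tags are assumptions.)

```sh
# On a fresh Raspbian / Raspberry Pi OS install:
curl -fsSL https://get.docker.com | sh      # Docker's convenience installer
sudo usermod -aG docker pi                  # let the default user run docker

# Sanity check, then try a real workload such as Home Assistant.
docker run --rm hello-world
docker run -d --name homeassistant \
  --restart unless-stopped \
  --network host \
  -v /opt/homeassistant:/config \
  homeassistant/home-assistant:stable
```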

My Personal Experiences
Containers can and should be used at home.

I’ve been using Docker at home for over a year. And now that I’ve switched all of my traditional computing platforms from Windows to Linux, I’m using Docker (and other containers) for nearly all of my personal and professional platforms. And this past week, I finally cut over to Docker for all of my web-based systems. My new Docker infrastructure includes numerous images (and containers). I have a few containers for data collection (e.g., glances, portainer). I have moved my personal entertainment systems to Docker containers (e.g., plex, tautulli). And I’ve put some infrastructure in place for future containers (e.g., traefik, watchtower). And in the upcoming months, I’ll be moving my entire TICK stack into Docker containers.
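
As an illustration of how small these moving parts are, here is roughly what two of the infrastructure containers look like when launched by hand. The image tags and ports are assumptions, and a compose file would normally manage them:

```sh
# Portainer: a web UI for managing the local Docker engine.
docker run -d --name portainer \
  --restart unless-stopped \
  -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer-data:/data \
  portainer/portainer

# Watchtower: watches running containers and pulls updated images.
docker run -d --name watchtower \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```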

Bottom Line

If you haven’t started to exploit containers, then it’s not too late. Your employer is probably using containers already. But they will be moving even more swiftly towards widespread adoption. Therefore, it’s not too late to jump onto the commercial bandwagon. And if you want to do something really new and innovative, I’d suggest using containers at home. You’ll learn a lot. And you may find a niche where you could build a highly scalable appliance that exploits highly containerized building blocks.

Privacy 0.8 – My Never-ending Privacy Story

This Is The Song That Never Ends

Privacy protection is not a state of being; it is not a quantum state that needs to be achieved. It is a mindset. It is a process. And that process is never-ending. Like the movie from the eighties, the never-ending privacy story features an inquisitive yet fearful child. [Yes, I’m casting each of us in that role.] This child must assemble the forces of goodness to fight the forces of evil. [Yes, in this example, I’m casting the government and corporations in the role of evildoers. But bear with me. This is just story-telling.] The story will come to an end when the forces of evil and darkness are finally vanquished by the forces of goodness and light.

It’s too bad that life is not so simple.

My Never-ending Privacy Battle Begins

There is a tremendous battle going on. Selfish forces are seeking to strip us of our privacy while they sell us useless trinkets that we don’t need. There are a few people who truly know what is going on. But most folks only laugh whenever someone talks about “the great Nothing”. And then they see the clouds rolling in. Is it too late for them? Let’s hope not – because ‘they’ are us.

My privacy emphasis began a very long time ago. In fact, I’ve always been part of the security (and privacy) business. But my professional focus began with my first post-collegiate job. After graduation, I worked for the USAF on the Joint Cruise Missile program. My role was meager. In fact, I was doing budget spreadsheets using both Lotus 1-2-3 and the SAS FS-Calc program. A few years later, I remember when the first MIT PGP key server went online. But my current skirmishes with the forces of darkness started a few years ago. And last year, I got extremely serious about improving my privacy posture.

My gaze returned to privacy matters when I realized that my involvement on social media had invalidated any claims I could make about my privacy. So I decided to confront the 800-pound gorilla in the room.

My Never-ending Privacy Battle Restarts

Since then, I’ve deleted almost all of my social media accounts. Gone are Facebook, Twitter, Instagram, Foursquare, and a laundry list of other platforms. I’ve deleted (or disabled) as many Google apps as I can from my Android phone (including Google Maps). I’ve started my new email service – though the long process of deleting my GMail accounts will not end for a few months.

At the same time, I am routinely using a VPN. And as I’ve noted before, I decided to use NordVPN. I have switched away from Chrome and I’m using Firefox exclusively. I’ve also settled upon the key extensions that I am using. And at this moment, I am using the Tor browser about half of the time that I’m online. Finally, I’ve begun the process of compartmentalizing my online activities. My first efforts were to use containers within Firefox. I then started to use application containers (like Docker) for a few of my key infrastructure elements. And recently I’ve started to use virtual guests as a means of limiting my online exposure.

Never-ending Progress

But none of this should be considered news. I’ve written about this in the past. Nevertheless, I’ve made some significant progress towards my annual privacy goals. In particular, I am continuing my move away from Windows and towards open source tools/platforms. In fact, this post is the first that I am publishing to my site from a virtual client. Specifically, I am using a Linux guest to write this post.

For some folks, this will be nothing terribly new. But for me, it sets a new high-water mark in my march towards Windows elimination. As of yesterday, I access my email from Linux – not Windows. And I’m blogging on Linux – not Windows. I’ve hosted my Plex server on Linux – not Windows. So I think that I can be off of Windows by the end of 2Q19. And I will couple this with being off GMail by 4Q19.

Bottom Line

I see my goal on the visible horizon. I will meet my 2019 objectives. And if I’m lucky, I may even exceed them by finishing earlier than I originally expected. So what is the reward at the end of these goals? That’s simple. I get to set a new series of goals regarding my privacy.

At the beginning of this article, I said, “The story will come to an end when the forces of evil and darkness are finally vanquished by the forces of goodness and light.” But the truth is that the story will never end. There will always be individuals and groups who want to invade your privacy to advance their own personal (or collective) advantage. And the only way to combat this will be a never-ending privacy battle.

Reducing Threat Surface – Windows Minimization

Breaking the Cycle of Addiction
Let Go of the Past

Last year, my household quit cable TV. The transition wasn’t without its hiccups. But leaving cable has had some great benefits. First, we are paying less money per month. Second, we are watching less TV per month. Third, I have learned a whole lot of things about streaming technologies and about over-the-air (OTA) TV options. Last year was also the year that I put a home automation program into effect. But both of these initiatives were done in 2018. Now I’ve decided that security and Windows minimization will be the key household technology initiatives for 2019.

How Big Is Your Threat Surface?

What is “Windows minimization”? That is simple. “Windows minimization” is the intentional reduction of Windows instances within your organization. Microsoft Windows used to be the platform for innovation and commercialization. Now it is the platform for running legacy systems. Like mainframes and mini-computers before it, Windows is no longer the “go to” platform for new development. C# and .Net are no longer the environment of choice for new applications. And SQL Server never was the “go to” platform for most databases. And if you look at the total number of operating systems shipped, Android and iOS have clearly become the only significant players on the mobile platform.

Nevertheless, Microsoft products remain the most vulnerable operating system products (based upon the total number of published CVE alerts). Adobe remains the most vulnerable “application” product family. But these numbers only reflect the total number of “announced” vulnerabilities. They don’t take the total number of deployed or exploited systems into account. Based upon deployed instances, Android and iOS remain the most exploited platforms.

Microsoft’s vulnerable status isn’t because their products are inherently less safe. To be candid, all networked computing platforms are unsafe. But given the previous predominance of Windows, Microsoft technologies were the obvious target for most malware developers.

Of course, Windows dominance is no longer the case. Most people do the majority of their casual computing on their phones – which use either Linux (Android) or Unix (iOS). And while Microsoft’s Azure platform is a fine web/cloud platform, most cloud workloads run on Linux and/or on platforms like OpenStack or AWS. So the demand for Windows is declining while the security of all other platforms is rapidly improving.

The Real Reason For Migrating

It is possible to harden your Windows systems. And it is possible to fail to harden your Linux systems. However, it is not possible to easily port a product from one OS to another – unless the software vendor did that for you already. In most cases, if the product you want isn’t on the platform that you use, then you either need to switch your operating platform or you need to convince your software supplier to support your platform.

Heading To The Tipping Point

It is for this reason that I have undertaken this Windows minimization project. New products are emerging every day. Most of them are not on Windows. They are on alternative platforms. Every day, I find a new widget that won’t run on Windows. Of course, I can always run a different operating system on a Windows host. But once the majority of my applications run on Linux, then it will make more sense to run a Linux-hosted virtualization platform and host a Windows guest system for the legacy apps.

And I am rapidly nearing that point. My Home Assistant runs on a Raspberry Pi. It has eleven application containers running within Docker (on HassOS). My DNS system runs on a Raspberry Pi. My OpenVPN system is hosted on a Pi.

Legacy Anchors

But a large number of legacy components remain on Windows. Cindy and I use Microsoft Office for general documents – though PDF documents from LibreOffice are starting to increase their share of total documents created. My podcasting platform (for my as yet unlaunched podcast) runs on Windows. And my Plex Media Server (PMS) runs on Windows.

Fortunately, PMS runs on Linux. So I built an Ubuntu 18.10 system to run on VirtualBox. And just as expected, it works flawlessly. Yes, I had to figure a few things out along the way – like using the right CIFS mount options to access my NAS. But once I figured these minor tweaks out, I loaded all of my movies onto the new Plex server. I fully expect that once I transition my remaining apps, I’ll turn my Windows Server into an Ubuntu 18.04 LTS server.
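
For anyone following along, the CIFS wrinkle amounted to passing the right options at mount time. A hedged example follows; the NAS address, share name, credentials file, and SMB version are placeholders rather than my actual settings:

```sh
# Requires the cifs-utils package on Ubuntu/Debian.
sudo apt-get install -y cifs-utils
sudo mkdir -p /mnt/media

# Placeholders: adjust the NAS address, share, credentials file,
# and SMB protocol version (vers=) to match your environment.
sudo mount -t cifs //192.168.1.50/media /mnt/media \
  -o credentials=/root/.nas-credentials,vers=3.0,iocharset=utf8,ro
```

The equivalent entry in /etc/fstab makes the mount survive reboots.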

Final Takeaways

I have taken my first steps. I’ve proven that Plex will run on Linux. I know that I can convert mobile print services from Windows to Linux. And I can certainly run miscellaneous apps (like TurboTax) on a Windows guest running on Linux. But I want to be sure before I convert my Windows server to Linux. So I will need to complete a software usage survey and build my data migration plan.

I wonder how long it will be before I flip the switch – once and for all.

2019 Resolution #2: Blocking Online Trackers

The Myth of Online Privacy
Background

Welcome to the New Year. This year could be a banner year in the fight to ensure our online privacy. Before now, the tools of surveillance have overwhelmed the tools of privacy. And the perceived need for new Internet content has outweighed the real difficulty of protecting your online privacy. For years, privacy advocates (including myself) have chanted the mantra of exploiting public key encryption. We have told people to use Tor or a commercial VPN. And we have told people to start using two-factor authentication. But we have downplayed the importance of blocking online trackers. Yes, security and privacy advocates did this for themselves. But most did not routinely recommend this as a first step in protecting the privacy of our clients.

But the times are changing.

Last year (2018) was a pivotal time in the struggle between surveillance and privacy. The constant reporting of online hacks has risen to a deafening roar. And worse still, we saw the shepherds of our ‘trusted platforms’ go under the microscope. Whether it was Sundar Pichai of Google or Mark Zuckerberg of Facebook, we have seen tech leaders (and their technologies) revealed as base – and ultimately self-serving. Until last year, few of us realized that if we don’t pay for a service, then we are the product that the service owners are selling. But our eyes have now been pried open.

Encryption Is Necessary

Security professionals were right to trumpet the need for encryption. Whether you are sending an email to your grandmother or inquiring about the financial assets that you’ve placed into a banker’s hands, it is not safe to send anything in clear text. Think of it this way. Would you put your tax filing on a postcard so that the mailman – and every person and camera between you and the IRS – could see your financial details? Of course you wouldn’t. You’d seal it in an envelope. You might even hand deliver it to an IRS office. Or more recently, you might send your return electronically – with security protections in place to protect the key details of your finances.

But these kinds of protections are only partial steps. Yes, your information is secure from when it leaves your hands to when it enters the hands of the intended recipient. But what happens when the recipient gets your package of information?

Encryption Is Not Enough

Do the recipients just have your ‘package’ of data or do they have more? As all of us have learned, they most certainly have far more information. Yes, our ISP (i.e., the mailman) has no idea about the message. But what happens when the recipient at the other end of the pipe gets your envelope? They see the postmarks. They see the address. But they could also lift fingerprints from the envelope. And they can use this data. At the same time, by revealing your identity, you have provided the recipient with critical data that could be used to profile you, your friends and family, and even your purchasing habits.

So your safety hinges upon whether you trust the recipients to not disclose key personal information. But here’s the rub. You’ve made a contract with the recipient whereby they can use any and all of your personally identifiable information (PII) for any purpose that they choose. And as we have learned, many companies use this information in hideous ways.

Resist Becoming The Product

This will be hard for many people to hear: If you’re not paying for a service, then you shouldn’t be surprised when the service provider monetizes any and all information that you have willingly shared with them. GMail is a great service – paid for with you, your metadata, and every bit of content that you put into your messages. Facebook is phenomenal. But don’t be surprised when MarkeyZ sells you out.

Because of the lessons that I’ve learned in 2018, I’m starting a renewed push towards improving my privacy. Up until now, I’ve focused on security. I’ve used a commercial VPN and/or Tor to protect myself from ISP eavesdropping. I’ve built VPN servers for all of my clients. I’ve implemented two-factor authentication for as many of my logons as my service providers will support.

Crank It Up To Eleven

And now I have to step up my game.

  1. I must delete all of my social media accounts. That will be fairly simple as I’ve already gotten rid of Facebook/Instagram, Pinterest, and Twitter. Just a few more to go. I’m still debating about LinkedIn. I do pay for premium services. But I also know that Microsoft is selling my identity. For the moment, I will keep LinkedIn as it is my best vehicle for professional interactions.
  2. I may add a Facebook account for the business. Since many customers are on Facebook, I don’t want to abandon potential customers. But I will strictly separate my public business identity/presence from my personal identity/presence.
  3. I need to get off of Gmail. This one will be tougher than the first item. Most of my contacts know me from my GMail address (which I’ve used for over fifteen years). But I’ve already created two new email addresses (one for the business and one on ProtonMail). My current plan is to move completely off of GMail by the end of 1Q19.
  4. I am going to use secure browsing for almost everything. I’ve used ad-blockers both in the browser and at the DNS level. And I’ve used specific Firefox extensions for almost all of my other browsing activities. I will now try to use the Tor Browser on a virtual machine (i.e., Whonix) exclusively, and implement NoScript wherever I use that browser. Let’s hope that these things will really reduce my vulnerability on the Internet. I suspect that I will find some sites that just won’t work with Tor (or with NoScript). When I find such sites, I’ll have to intentionally choose whether to use the site unprotected or set up a sandbox (and virtual identities) whenever I use these sites. Either way, I will run such sites from a VM – just to limit my exposure.
  5. I will block online trackers by default. Firefox helps. NoScript also helps. But I will start routinely using Privacy Badger and uMatrix as well.
Bottom Line

In the final analysis, I am sure that there are some compromises that I will need to make. Changing my posture from trust to distrust and blocking all online trackers will be the hardest – and most rewarding – step that I can make towards protecting my privacy.

The Ascension of the Ethical Hacker

Hacker: The New Security Professional

Over the past year, I have seen thousands of Internet ads about obtaining an ‘ethical hacker’ certification. These ads (and the associated certifications) have been around for years. But I think that the notoriety of “Mr. Robot” has added sexiness (and legitimacy) to the title “Certified Ethical Hacker”. But what is an ‘ethical hacker’?

According to Dictionary.com, an ethical hacker is, “…a person who hacks into a computer network in order to test or evaluate its security, rather than with malicious or criminal intent.” Wikipedia has a much more comprehensive definition. But every definition revolves around taking an illegitimate activity (i.e., computer hacking) and making it honorable.

The History of Hacking

This tendency to lionize hacking began when Matthew Broderick fought against the WOPR in “WarGames”.  And the trend continued in the early nineties with the Robert Redford classic, “Sneakers”. In the late nineties, we saw Keanu Reeves as Neo (in “The Matrix”) and Gene Hackman as Edward Lyle (in “Enemy of the State”). But the hacker hero worship has been around for as long as there have been computers to hate (e.g., “Colossus: The Forbin Project”).

But as computer hacking has become routine (e.g., see “The Greatest Computer Hacks” on Lifewire), everyday Americans are now aware of their status as “targets” of attacks.  Consequently, most corporations are accelerating their investment in security – and in vulnerability assessments conducted by “Certified Ethical Hackers”.

So You Wanna Be A White Hat? Start Small

Increased corporate attacks result in increased corporate spending. And increased spending means that there is an ‘opportunity’ for industrious technicians. For most individuals, the cost of getting ‘certified’ (for CISSP and/or CEH) is out of reach. At a corporate scale, ~$15K for classes and a test is not very much to pay. But for gig workers, it is quite an investment. So can you start learning on your own?

Yes, you can start learning on your own. In fact, there are lots of ways to start learning. You could buy books. Or you could start learning by doing. This past weekend, I decided to up my game. I’ve done security architecture, design, and development for a number of years. But my focus has always been on intrusion detection and threat mitigation. It was obvious that I needed to learn a whole lot more about vulnerability assessment. But where would I start?

My starting point was to spin up a number of new virtual systems where I could test attacks and defenses. In the past, I would just walk into the lab and fire up some virtual machines on some of the lab systems. But now that I am flying solo, I’ve decided to do this the same way that hackers might do it: by using whatever I had at hand.

The first step was to set up VirtualBox on one of my systems/servers. Since I’ve done that before, it was no problem setting things up again. My only problem was that I did not have VT-x enabled on my motherboard. Once I did that, things started to move rather quickly.

Then I had to start downloading (and building) appropriate OS images. My first test platform was Tails. Tails is a privacy-centered system that can be booted from a USB stick. My second platform was a Kali Linux instance. Kali is a fantastic pen testing platform – principally because it includes the Metasploit framework. I even decided to start building some attack targets. Right now, I have a VM for Raspbian (Linux on the Raspberry Pi), a VM for Debian Linux, one for Red Hat Linux, and a few for Windows targets. Now that the infrastructure is built, I can begin the learning process.
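
Most of that lab assembly can be scripted rather than clicked through. Here is a rough VBoxManage sketch for carving out one of the target VMs; the VM name, sizes, and ISO path are placeholders:

```sh
# Placeholders throughout: VM name, memory/disk sizes, and ISO path.
VBoxManage createvm --name kali-lab --ostype Debian_64 --register
VBoxManage modifyvm kali-lab --memory 2048 --cpus 2 --nic1 nat
VBoxManage createmedium disk \
  --filename "$HOME/VirtualBox VMs/kali-lab/kali-lab.vdi" --size 20480
VBoxManage storagectl kali-lab --name SATA --add sata --controller IntelAhci
VBoxManage storageattach kali-lab --storagectl SATA --port 0 --device 0 \
  --type hdd --medium "$HOME/VirtualBox VMs/kali-lab/kali-lab.vdi"
VBoxManage storageattach kali-lab --storagectl SATA --port 1 --device 0 \
  --type dvddrive --medium "$HOME/Downloads/kali-linux.iso"
VBoxManage startvm kali-lab --type gui
```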

Bottom Line

If you want to be an ethical hacker (or understand the methods of any hacker), then you can start without going to a class. Yes, it will be more difficult to learn by yourself. But it will be far less expensive – and far more memorable. Remember, you can always take the class later.

Time Series Data: A Recurring Theme

When I graduated from college (over three-and-a-half decades ago), I had an eclectic mix of skills. I obtained degrees in economics and political science. But I spent a lot of my personal time building computers and writing computer programs. I also spent a lot of my class time learning about econometrics – that is, the representation of economic systems in mathematical/statistical models. While studying, I began using SPSS to analyze time series data.

Commercial Tools (and IP) Ruled

When I started my first job, I used SPSS for all sorts of statistical studies. In particular, I built financial models for the United States Air Force so that they could forecast future spending on the Joint Cruise Missile program. But within a few years, the SPSS tool was superseded by a new program out of Cary, NC. That program was the Statistical Analysis System (a.k.a., SAS). And I have used SAS ever since.

At first, I used the tool as a very fancy summation engine and report generator. It even served as the linchpin of a test-bed generation system that I built for a major telecommunications company. In the nineties, I began using SAS for time series data analysis. In particular, we piped CPU statistics (in the form of RMF and SMF data) into SAS-based performance tools.

Open Source Tools Enter The Fray

As the years progressed, my roles changed and my use of SAS (and time series data) began to wane. But in the past decade, I started using time series data analysis tools to once again conduct capacity and performance studies. At a major financial institution, we collected system data from both Windows and Unix systems throughout the company. And we used this data to build forecasts for future infrastructure acquisitions.

Yes, we continued to use SAS. But we also began to use tools like R. R became a pivotal tool in most universities. But many businesses still used SAS for their “big iron” systems. At the same time, many companies moved from SAS to Microsoft-based tools (including MS Excel and its pivot tables).

TICK Seizes Time Series Data Crown

Over the past few years, “stack-oriented” tools have emerged as the next “new thing” in data centers. [Note: Stacks are like clouds; they are everywhere and they are impossible to define simply.] Most corporations have someone’s “stack” running their business – whether it be Amazon AWS, Microsoft Azure, Docker, Kubernetes, or a plethora of other tools.  And most commercial ventures are choosing hybrid stacks (with commercial and open source components).

And the migration towards “stacks” for execution is encouraging the migration to “stacks” for analysis. Indeed, the entire shift towards NoSQL databases is being paired with a shift towards time series databases.  Today, one of the hottest “stacks” for analysis is TICK (i.e., Telegraf, InfluxDB, Chronograf, and Kapacitor).

TICK Stack @ Home

Like most of my projects, this one started when I stumbled onto the TICK stack. I use Home Assistant to manage a plethora of IoT devices. And as the device portfolio has grown, my need for monitoring these devices has also increased. A few months ago, I noticed that an InfluxDB add-on was available for HassIO. So I installed the add-on and started collecting information about my Home Assistant installation.

Unfortunately, the data that I collected began to exceed my capacity to store the data on the SD card that I had in my Raspberry Pi. So after running the system for a few weeks, I decided to turn the data collection off – at least until I solved some architectural problems. And so the TICK stack went on the back burner.

I had solved a bunch of other IoT issues last week. So this week, I decided to focus on getting the TICK stack operational within the office. After careful consideration, I concluded that the test cases for monitoring would be a Windows/Intel server, a Windows laptop, my Pi-hole server, and my Home Assistant instance.

Since I was working with my existing asset inventory, I decided to host the key services (or daemons) on my Windows server. So I installed Chronograf, InfluxDB, and Kapacitor onto that system. Since there was no native support for a Windows service install, I used the Non-Sucking Service Manager (NSSM) to create the relevant Windows services. At the same time, I installed Telegraf onto a variety of desktops, laptops, and Linux systems. After only a few hiccups, I finally got everything deployed and functioning automatically. Phew!
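
The Windows side boiled down to NSSM’s install and start commands for each daemon. The Linux side was a routine package install; here is a hedged sketch for the Debian-based hosts (the repository codename and the InfluxDB server hostname are assumptions):

```sh
# Add InfluxData's apt repository and install the Telegraf agent.
curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
echo "deb https://repos.influxdata.com/debian buster stable" | \
  sudo tee /etc/apt/sources.list.d/influxdata.list
sudo apt-get update && sudo apt-get install -y telegraf

# Point the [[outputs.influxdb]] urls setting in /etc/telegraf/telegraf.conf
# at the central InfluxDB server (hostname is a placeholder), then enable:
sudo systemctl enable --now telegraf
```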

Bottom Line

I implemented the TICK components on a large number of systems. And I am now collecting all sorts of time series data from across the network. As I think about what I’ve done in the past few days, I realize just how important it is to stand on the shoulders of others. A few decades ago, I would have paid thousands of dollars to collect and analyze this data. Today, I can do it with only a minimal investment of time and materials. And given these minimal costs, it will be possible to use these findings for almost every DevOps engagement that arises.

Consolidating Micro Data Centers

Cloud-based Microservices

“Cloud computing is an information technology (IT) paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet.”

Using this definition, the key elements of cloud computing are as follows:

  • Network access
  • Shared groups of systems and services
  • Rapid (and dynamic) provisioning
  • Minimal  management

Nothing in this definition speaks to the size of the “data center” which houses these systems and services. Most of us probably think of Amazon, or Google, or Microsoft when we think of cloud services. But it need not be a multi-million dollar investment for it to be a part of cloud computing.

Data Center Consolidation

This past weekend, we closed one of our data centers. Specifically, we shut down the facility in Waldo, Missouri. This “data center” was a collection of systems and services. It hosted the web site, the file servers, and one of our DNS servers. But these weren’t housed in a vast data center. The services were located in a room within a  residential property. For the past four months, we ran this site remotely. And this past weekend, we consolidated all the Waldo services at our Elgin facility.

Like most moves, there was a plan. And the plan was fluid enough to deal with the challenges that arose. And as happens with most consolidations, some spare gear became available. We reclaimed the DNS server (a Raspberry Pi). And we re-purposed the premise router as a test platform at our Elgin site.

Since this site was both business and residential, we had to re-architect the storage infrastructure to accommodate multiple (and dissimilar) use cases. We also moved key data from local storage on the servers to the consolidated storage farm. 

Once cleared out, we returned the property back to the landlord.

Service Consolidation

As noted, we consolidated all of the file servers into a single storage farm. But we did need to migrate some of the data from the servers and onto the new storage. Once we migrated the data, we consolidated the streaming servers. The overall experience for our streaming customers will become much simpler.

Hardware Re-use

With one of our routers freed up, we are now able to put a test bed together. That test bed will run DD-WRT software. The process of converting the Netgear hardware to DD-WRT was quite tedious. It took four (4) attempts to reset the old hardware before we could load the new software. This wasn’t anticipated, and it took us beyond the planned change window. Fortunately, we kept our customers informed and we were able to reset their expectations.

Once deployed, the new network build will provide VPN services to all clients. At the same time, we will be turning up DNSSEC across the company. Finally, we will be enabling network-wide QoS and multicasting. In short, the spare gear has given us the chance to improve our network and our ability to deliver new services.

The Rest of the Story

All of this sounds like a well-oiled plan. And it did go without any real incidents. But the scale of the effort was much smaller than you might expect. The site in Waldo was a room in a rental. The servers were a desktop, a couple of laptops, a NAS box, a cable modem, a Netgear R8000 X6 router, a Raspberry Pi, and a variety of streaming devices (like a TV, a few Chromecast devices, and the mobile phones associated with the users – i.e., members of my family).

So why would I represent this as a “data center” move? That is easy: when you move connected devices across a network (or across the country), you still have to plan for the move. More importantly, cloud services (either at the edge or within the confines of a traditional data center) must be managed as if the customer depends upon the services. And to be fair, sometimes our families are even more stringent about loss-of-service issues than are our customers.

Chick-Fil-A…Runs Kubernetes…At Every Store


How many of you thought that Chick-fil-A would have a tech blog? And how many of you thought that they would be clustering edge nodes at every store? When I read this article, I was surprised – and quite excited.

The basic use case is that every Chick-fil-A store needs to run certain basic management apps. These apps run at the edge but are connected to the central office. These apps include network and IT management stuff. But they also include some of the “mundane” back-office apps that keep a company going.

Routine stuff, right? But in the Chick-fil-A case, these apps/systems need to be remote and resilient. The hardware must be installed and maintained by non-technical (or semi-technical) employees (and/or contractors). If a node fails, the recovery must be as simple as unplugging the failed device and plugging in a replacement device. Similarly, the node enrollment, software distribution, and system recovery capabilities have to be automated – and flawless.

Here is where containers and Kubernetes enter the picture.

The secret to Chick-Fil-A’s success is the recipe that they use to assemble all of the parts into a yummy solution. The servers (i.e., Intel NUC devices) power up, download the relevant software, and join the local cluster. The most exciting part of this solution is its dependence upon commodity components and open source software to build a resilient platform for the company’s “secret sauce” (i.e., their proprietary apps).
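
Chick-fil-A’s own bootstrapping tooling is part of their secret sauce, but you can approximate the “plug it in and it joins the cluster” behavior with off-the-shelf pieces. Here is a hedged sketch using k3s – my illustration, not necessarily what Chick-fil-A runs – with the server address and join token as placeholders:

```sh
# On the first node in the store: run a lightweight Kubernetes server.
curl -sfL https://get.k3s.io | sh -

# Grab the join token that the server generates.
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional node: join the existing cluster automatically.
# K3S_URL and K3S_TOKEN values below are placeholders.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://10.0.0.10:6443 K3S_TOKEN=<token-from-server> sh -
```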

The next time you go into a Chick-fil-A, remember that they are using leading-edge tech to ensure that you get the sandwich that you so desperately want to eat.
