Wire-to-Wire Technology Adoption Isn’t The Only Option

The Winner Surges At The Right Time

The annual “Run For The Roses” horse race has been held since 1875. In all that time, there have been only 22 wire-to-wire winners (horses that led from the starting gate to the finish line). Simple statistics favor the jockey and horse who can seize the lead at the right moment. For the strongest horses, that moment may be the start. But for most horses, the jockey will launch his steed forward when it best suits the horse and its racing profile. This simple (yet complicated) approach is also true for technology adoption.

Docker And Early Technology Adoption

Five years ago, Docker exploded onto the IT scene. Originally, Docker was adopted almost exclusively by tech-savvy companies. And some of these early adopters have taken keen advantage of their foresight. But like horses that leap too soon, many companies have already flashed into existence – and then been extinguished by an inability to deliver on their promised value.

Docker adoption has moved from large enterprises to the boutique service industry.

Now that Docker is over five years old, how many companies are adopting it? According to some surveys, Docker use in the marketplace is substantially over 25%. And I would be willing to bet that if you include businesses merely experimenting with Docker, the number is probably more than 40%. When you consider that five years is a normal budget/planning horizon, you should expect that number to do nothing but increase over the next few years. One thing is certain: the majority of applications in use within businesses are not yet containerized.

So there is still time to take advantage of Docker (and containers). If you haven’t yet jumped on board, then it’s time to get into the water. And if you are already invested in containers, then it’s time to double down and accelerate your investment. [By the way, this is the “stay competitive” strategy. The only way to truly leap ahead is to use Docker (and other containers) in unique and innovative ways.]

Technology Adoption At Home

Adoption of containers at home is still nascent. Yes, there have been a few notable exceptions. Specifically, one of the very best uses of containers is the Hass.io infrastructure that hosts Home Assistant on a Raspberry Pi. Now that the new Raspberry Pi 4 is generally available, it is incredibly simple (and remarkably cheap) to learn about – and economically exploit – containers at home.

My Personal Experiences
Containers can and should be used at home.

I’ve been using Docker at home for over a year. Now that I’ve switched all of my traditional computing platforms from Windows to Linux, I’m using Docker (and other containers) for nearly all of my personal and professional platforms. And this past week, I finally cut over to Docker for all of my web-based systems. My new Docker infrastructure includes numerous images (and containers). I have a few containers for data collection and management (e.g., Glances, Portainer). I have moved my personal entertainment systems to Docker containers (e.g., Plex, Tautulli). And I’ve put some infrastructure in place for future containers (e.g., Traefik, Watchtower). In the upcoming months, I’ll be moving my entire TICK stack into Docker containers.
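
For readers who want to see what that looks like in code, here is a rough sketch that launches two of those containers with the Docker SDK for Python. The image names are the public ones for Portainer and Watchtower, but the ports, volume paths, and restart policies are illustrative placeholders rather than my exact configuration.

    # A minimal sketch (not my exact setup): starting two of the containers
    # mentioned above with the Docker SDK for Python (pip install docker).
    # The ports, volume paths, and restart policy are illustrative assumptions.
    import docker

    client = docker.from_env()  # connect to the local Docker daemon

    # Portainer: a web UI for managing the Docker host itself.
    client.containers.run(
        "portainer/portainer-ce",
        name="portainer",
        detach=True,
        ports={"9000/tcp": 9000},
        volumes={
            "/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"},
            "portainer_data": {"bind": "/data", "mode": "rw"},
        },
        restart_policy={"Name": "unless-stopped"},
    )

    # Watchtower: watches running containers and pulls updated images.
    client.containers.run(
        "containrrr/watchtower",
        name="watchtower",
        detach=True,
        volumes={"/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"}},
        restart_policy={"Name": "unless-stopped"},
    )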

Bottom Line

If you haven’t started to exploit containers, then it’s not too late. Your employer is probably using containers already. And they will be moving even more swiftly towards widespread adoption. So it’s not too late to jump onto the commercial bandwagon. And if you want to do something really new and innovative, I’d suggest using containers at home. You’ll learn a lot. And you may find a niche where you could build a highly scalable appliance out of containerized building blocks.

Maintaining Technology Currency (and Relevance)

Grasp the Future

A few months ago, I wrote an article about mobile privacy. In that article, I wrote about how every “off-the-shelf” mobile platform MUST be modified in order to ensure some modicum of privacy. I expanded upon this thought when I recently presented to the Fox Valley Computer Professionals. [A version of that presentation can be found over at SlideShare.] One of the most important themes from the presentation actually arose during the obligatory Q&A session. [By the way, the Q&A time is always the most important part of any presentation.] From this Q&A time, I realized that the single most important takeaway was the necessity of maintaining technology currency.

From a security perspective, it is essential to remain current on all elements of your infrastructure. One of the most exploited vectors in any organization is rampant inattention to software maintenance. It only takes one zero-day exploit to compromise even a meticulously maintained system. And organizations that do not remain current on their software are opening up their systems (and their customers) to external exploitation. A decade ago, PC World highlighted the risks of operating with unpatched systems. While the numbers may have changed since that article, the fundamental lesson is still the same: technology currency is one of the most under-recognized means of hardening your systems.

The Human Factor

But technology currency is not just a matter of ensuring the continuing usability of our technology investments. It is also a matter of sustaining the value of the people within our teams. I have been involved in IT for several decades. In that time, I’ve seen many waves of change. Mainframes became Unix systems. Windows desktops became Windows servers. Application servers (regardless of their OS) became web servers. And now these same “n-tier” servers have become virtual systems running on “cloud” platforms.

But each wave of technology that emerged, crested, and then subsided left behind a whole group of displaced technology specialists. Fortunately, most technologists are flexible. So if they didn’t stay working on legacy systems, then they have willingly (or unwillingly) embraced the next technology wave.

Redrawing the Boundaries of Trust

Like many technologists, I have been forced into career acrobatics with each new wave of technology. And I have complicated these transitions by switching between a variety of IT disciplines (e.g., application development, information security, capacity and performance management, configuration and change management, and IT operations). So it was not a surprise when I realized that information privacy changes were driving similar changes – for the industry and for myself.

For almost two decades, I’ve been telling people that they needed to shift to hosted (cloud) platforms. Of course, this shift meant entering into trust relationships with external service providers. But for the last four or five years, my recommendations have begun to change. I still advocate using managed service platforms. But when privacy and competitive advantages are at stake, it may be necessary to redraw the trust boundaries.

A decade ago, everyone trusted Google and Facebook to be good partners. Today, we view both of them (and many others) as self-interested members of an overly complex supply chain. So today, I am recommending that every company (and even most individuals) revisit the trust boundaries that they have with every part of their supply chain.

Moving Personal Fences

We have decided to redraw trust boundaries in dramatic ways. First, we have decided to forego the advantages of partnering with both Facebook and Google. This was relatively simple when it came to Facebook. Yes, not being on Facebook is hard. But it is eminently achievable. To that end, I am celebrating my one-year divorce from Mark & Co. But redrawing the boundaries with Google has been much harder.

Getting rid of Google has meant moving to new email services. It has also meant abandoning the built-in contact address books and calendaring, as well as discontinuing the use of Google Apps. And on a personal level, it has meant some dramatic changes to my mobile computing platform.

Bottom Line: Moving off of the Google cloud has required the construction of an entirely new cloud platform to replace the capabilities of Google Drive/Cloud.

Nextcloud Replaces Google Cloud

We needed a platform to provide the following functions:

  1. Accessible and extensible cloud storage for both local and remote/mobile users.
  2. An integrated Contact database.
  3. An integrated Calendar database.
  4. An integrated Task database.
  5. A means of supporting WebDAV, CalDAV, and CardDAV access to the aforementioned items.

Of course, there is also a whole group of “nice-to-have” features, including:

  • Phone/location tracking
  • Mobile document scanning (and OCR)
  • Two-factor authentication

After considerable review, we decided to use Nextcloud. It provided all of the mandatory features that we required as well as all of the “nice-to-have” features. We further decided to minimize our security exposure by running this service from within a VPS running onsite (though offsite would have worked as well).
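
To give a feel for the WebDAV requirement above, here is a rough sketch that lists a Nextcloud folder over WebDAV using Python’s requests library. The server URL, account name, app password, and folder are placeholders; the /remote.php/dav/files/ path is Nextcloud’s standard WebDAV endpoint.

    # A rough sketch: listing a Nextcloud folder over WebDAV with the requests
    # library. The URL, account name, app password, and folder are placeholders.
    import requests
    import xml.etree.ElementTree as ET

    NEXTCLOUD_URL = "https://cloud.example.com"  # placeholder server address
    USER = "alice"                               # placeholder account
    APP_PASSWORD = "app-password-goes-here"      # generated in Nextcloud settings

    # Nextcloud exposes each user's files at /remote.php/dav/files/<user>/
    resp = requests.request(
        "PROPFIND",
        f"{NEXTCLOUD_URL}/remote.php/dav/files/{USER}/Documents/",
        auth=(USER, APP_PASSWORD),
        headers={"Depth": "1"},  # list only the folder's immediate children
        timeout=30,
    )
    resp.raise_for_status()

    # The response is WebDAV (DAV: namespace) XML; print each item's href.
    tree = ET.fromstring(resp.content)
    for href in tree.iter("{DAV:}href"):
        print(href.text)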

Outcomes

It took several days to secure the hardware, set up the virtual infrastructure, install Nextcloud, and configure it for local and mobile access. Currently, we’re using a Nextcloud virtual “appliance” as the base for our office cloud. From this foundation, we extended the basic appliance to meet our capacity and security needs. We also installed ONLYOFFICE as an alternative to both local and cloud-based Microsoft Office products.

We are now decoupling our phones and our systems from the Google cloud infrastructure. And as noted before, we’ve already changed our DNS infrastructure from ISP/Google to our own systems. So we are well on our way to minimizing the threat surface associated with Google services.

Of course, there is more work to do. We need to further ruggedize our services to ensure higher availability. But our dependence upon Google has been drastically reduced. And the data that Google collects from us is also reduced. Now we just have to get rid of all of the data that Google has collected from us over the past fifteen (15) years.

Time Series Data: A Recurring Theme

When I graduated from college (over three-and-a-half decades ago), I had an eclectic mix of skills. I obtained degrees in economics and political science. But I spent a lot of my personal time building computers and writing computer programs. I also spent a lot of my class time learning about econometrics – that is, the representation of economic systems in mathematical/statistical models. While studying, I began using SPSS to analyze time series data.

Commercial Tools (and IP) Ruled

When I started my first job, I used SPSS for all sorts of statistical studies. In particular, I built financial models for the United States Air Force so that they could forecast future spending on the Joint Cruise Missile program. But within a few years, the SPSS tool was superseded by a new program out of Cary, NC. That program was the Statistical Analysis System (a.k.a., SAS). And I have used SAS ever since.

At first, I used the tool as a very fancy summation engine and report generator. It even served as the linchpin of a test-bed generation system that I built for a major telecommunications company. In the nineties, I began using SAS for time series data analysis. In particular, we piped CPU statistics (in the form of RMF and SMF data) into SAS-based performance tools.

Open Source Tools Enter The Fray

As the years progressed, my roles changed and my use of SAS (and time series data) began to wane. But in the past decade, I started using time series data analysis tools to once again conduct capacity and performance studies. At a major financial institution, we collected system data from both Windows and Unix systems throughout the company. And we used this data to build forecasts for future infrastructure acquisitions.

Yes, we continued to use SAS. But we also began to use tools like R. R became a pivotal tool in most universities. But many businesses still used SAS for their “big iron” systems. At the same time, many companies moved from SAS to Microsoft-based tools (including MS Excel and its pivot tables).

TICK Seizes Time Series Data Crown

Over the past few years, “stack-oriented” tools have emerged as the next “new thing” in data centers. [Note: Stacks are like clouds; they are everywhere and they are impossible to define simply.] Most corporations have someone’s “stack” running their business – whether it be Amazon AWS, Microsoft Azure, Docker, Kubernetes, or a plethora of other tools.  And most commercial ventures are choosing hybrid stacks (with commercial and open source components).

And the migration towards “stacks” for execution is encouraging the migration to “stacks” for analysis. Indeed, the entire shift towards NoSQL databases is being paired with a shift towards time series databases.  Today, one of the hottest “stacks” for analysis is TICK (i.e., Telegraf, InfluxDB, Chronograf, and Kapacitor).
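
To make that concrete, here is a small sketch of how data moves in and out of the “I” in TICK, using the influxdb Python client against an InfluxDB 1.x server. The host, database name, measurement, and values are assumptions for illustration only.

    # A small sketch: writing and reading one time series point with the
    # influxdb Python client (pip install influxdb) against InfluxDB 1.x.
    # The host, database, measurement, and values are assumptions.
    from influxdb import InfluxDBClient

    client = InfluxDBClient(host="localhost", port=8086, database="telegraf")
    client.create_database("telegraf")  # harmless if the database already exists

    # Write one CPU load measurement, tagged with the host it came from.
    client.write_points([
        {
            "measurement": "cpu_load",
            "tags": {"host": "office-server"},
            "fields": {"load1": 0.42},
        }
    ])

    # Read back the last hour of points for that host.
    result = client.query(
        "SELECT load1 FROM cpu_load WHERE host = 'office-server' AND time > now() - 1h"
    )
    for point in result.get_points():
        print(point["time"], point["load1"])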

TICK Stack @ Home

Like most of my projects, this one began when I stumbled onto the TICK stack. I use Home Assistant to manage a plethora of IoT devices. And as the device portfolio has grown, my need for monitoring those devices has also grown. A few months ago, I noted that an InfluxDB add-on was available for Hass.io. So I installed the add-on and started collecting information about my Home Assistant installation.

Unfortunately, the data that I collected soon began to exceed the capacity of the SD card in my Raspberry Pi. So after running the system for a few weeks, I decided to turn data collection off – at least until I solved some architectural problems. And so the TICK stack went on the back burner.
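
In hindsight, one way to keep that growth bounded is an InfluxDB retention policy, which expires old points automatically. Here is a sketch using the same influxdb Python client; the database name, policy name, and 14-day duration are arbitrary choices for illustration, not the settings I eventually used.

    # A sketch: capping on-disk growth with an InfluxDB 1.x retention policy.
    # The database name, policy name, and 14-day duration are arbitrary choices
    # for illustration, not the settings I actually ran with.
    from influxdb import InfluxDBClient

    client = InfluxDBClient(host="localhost", port=8086, database="homeassistant")

    # Keep only two weeks of data and make that the default policy, so older
    # points expire automatically instead of filling the SD card.
    client.create_retention_policy(
        "two_weeks",              # policy name
        "14d",                    # how long to keep data
        1,                        # replication factor (single node)
        database="homeassistant",
        default=True,
    )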

I had solved a bunch of other IoT issues last week. So this week, I decided to focus on getting the TICK stack operational within the office. After careful consideration, I concluded that the test cases for monitoring would be a Windows/Intel server, a Windows laptop, my Pi-hole server, and my Home Assistant instance.

Since I was working with my existing asset inventory, I decided to host the key services (or daemons) on my Windows server. So I installed Chronograf, InfluxDB, and Kapacitor onto that system. Since there was no native support for a Windows service install, I used the Non-Sucking Service Manager (NSSM) to create the relevant Windows services. At the same time, I installed Telegraf onto a variety of desktops, laptops, and Linux systems. After only a few hiccups, I finally got everything deployed and functioning automatically. Phew!
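
For the curious, the NSSM piece boils down to a couple of commands per daemon. Here is a sketch that registers the InfluxDB daemon as a Windows service by driving nssm from Python; the paths to nssm.exe, influxd.exe, and the config file are placeholders, and the same pattern repeats for Chronograf and Kapacitor.

    # A sketch: registering the InfluxDB daemon as a Windows service with NSSM,
    # driven from Python. The paths below are placeholders; point them at
    # wherever nssm.exe and influxd.exe actually live. The same pattern
    # repeats for Chronograf and Kapacitor.
    import subprocess

    NSSM = r"C:\tools\nssm\nssm.exe"      # placeholder path to nssm.exe
    INFLUXD = r"C:\influxdb\influxd.exe"  # placeholder path to the daemon

    # "nssm install <service> <program> [arguments]" creates the service entry.
    subprocess.run(
        [NSSM, "install", "influxdb", INFLUXD, "-config", r"C:\influxdb\influxdb.conf"],
        check=True,
    )

    # Start the newly registered service.
    subprocess.run([NSSM, "start", "influxdb"], check=True)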

Bottom Line

I implemented the TICK components onto a large number of systems. And I am now collecting all sorts of time series data from across the network. As I think about what I’ve done in the past few days, I realize just how important it is to stand on the shoulders of others. A few decades ago, I would have paid thousands of dollars to collect and analyze this data. Today, I can do it with only a minimal investment of time and materials. And given these minimal costs, it will be possible to use these findings for almost every DevOps engagement that arises.

Consolidating Micro Data Centers

Cloud-based Microservices

“Cloud computing is an information technology (IT) paradigm that enables ubiquitous access to shared pools of configurable system resources and higher-level services that can be rapidly provisioned with minimal management effort, often over the Internet.”

Using this definition, the key elements of cloud computing are as follows:

  • Network access
  • Shared groups of systems and services
  • Rapid (and dynamic) provisioning
  • Minimal management

Nothing in this definition speaks to the size of the “data center” which houses these systems and services. Most of us probably think of Amazon, or Google, or Microsoft when we think of cloud services. But it need not be a multi-million dollar investment for it to be a part of cloud computing.

Data Center Consolidation

This past weekend, we closed one of our data centers. Specifically, we shut down the facility in Waldo, Missouri. This “data center” was a collection of systems and services. It hosted the web site, the file servers, and one of our DNS servers. But these weren’t housed in a vast data center. The services were located in a room within a residential property. For the past four months, we ran this site remotely. And this past weekend, we consolidated all the Waldo services at our Elgin facility.

Like most moves, there was a plan. And the plan was fluid enough to deal with the challenges that arose. And as happens with most consolidations, some spare gear became available. We reclaimed the DNS server (a Raspberry Pi). And we re-purposed the premises router as a test platform at our Elgin site.

Since this site was both business and residential, we had to re-architect the storage infrastructure to accommodate multiple (and dissimilar) use cases. We also moved key data from local storage on the servers to the consolidated storage farm. 

Once the room was cleared out, we returned the property to the landlord.

Service Consolidation

As noted, we consolidated all of the file servers into a single storage farm. But we did need to migrate some of the data from the servers onto the new storage. Once we migrated the data, we consolidated the streaming servers. The overall experience for our streaming customers will become much simpler.

Hardware Re-use

With one of our routers freed up by the move, we are now able to put a test bed together. That test bed will run DD-WRT software. The process of converting the Netgear hardware to DD-WRT was quite tedious. It took four (4) different attempts to reset the old hardware before we could load the new software. We hadn’t anticipated that, and it took us beyond the planned change window. Fortunately, we kept our customers informed and we were able to reset customer expectations.

Once deployed, the new network build will provide VPN services to all clients. At the same time, we will be turning up DNSSEC across the company. Finally, we will be enabling network-wide QoS and multicasting. In short, the spare gear has given us the chance to improve our network and our ability to deliver new services.

The Rest of the Story

All of this sounds like a well-oiled plan. And it did go off without any real incidents. But the scale of the effort was much smaller than you might expect. The site in Waldo was a room in a rental. The “servers” were a desktop, a couple of laptops, a NAS box, a cable modem, a Netgear R8000 X6 router, a Raspberry Pi, and a variety of streaming devices (like a TV, a few Chromecast devices, and the mobile phones associated with the users, i.e., members of my family).

So why would I represent this as a “data center” move? That is easy: when you move connected devices across a network (or across the country), you still have to plan for the move. More importantly, cloud services (whether at the edge or within the confines of a traditional data center) must be managed as if the customer depends upon them. And to be fair, sometimes our families are even more stringent about loss-of-service issues than our customers are.