The Ascension of the Ethical Hacker

Hacker: The New Security Professional

Over the past year, I have seen thousands of Internet ads about obtaining an ‘ethical hacker’ certification. These ads (and the associated certifications) have been around for years. But I think that the notoriety of “Mr. Robot” has added sexiness (and legitimacy) to the title “Certified Ethical Hacker”. But what is an ‘ethical hacker’?

According to Dictionary.com, an ethical hacker is, “…a person who hacks into a computer network in order to test or evaluate its security, rather than with malicious or criminal intent.” Wikipedia has a much more comprehensive definition. But every definition revolves around taking an illegitimate activity (i.e., computer hacking) and making it honorable.

The History of Hacking

This tendency to lionize hacking began when Matthew Broderick fought against the WOPR in “WarGames”.  And the trend continued in the early nineties with the Robert Redford classic, “Sneakers”. In the late nineties, we saw Keanu Reeves as Neo (in “The Matrix”) and Gene Hackman as Edward Lyle (in “Enemy of the State”). But the hacker hero worship has been around for as long as there have been computers to hate (e.g., “Colossus: The Forbin Project”).

But as computer hacking has become routine (e.g., see “The Greatest Computer Hacks” on Lifewire), everyday Americans are now aware of their status as “targets” of attacks.  Consequently, most corporations are accelerating their investment in security – and in vulnerability assessments conducted by “Certified Ethical Hackers”.

So You Wanna Be A White Hat? Start Small

Increased corporate attacks result in increased corporate spending. And increased spending means that there is an ‘opportunity’ for industrious technicians. For most individuals, the cost of getting ‘certified’ (for CISSP and/or CEH) is out of reach. At a corporate scale, ~$15K for classes and a test is not very much to pay. But for gig workers, it is quite an investment. So can you start learning on your own?

Yes, you can start learning on your own. In fact, there are lots of ways to do it. You could buy books. Or you could learn by doing. This past weekend, I decided to up my game. I’ve done security architecture, design, and development for a number of years. But my focus has always been on intrusion detection and threat mitigation. It was obvious that I needed to learn a whole lot more about vulnerability assessment. But where would I start?

My starting point was to spin up a number of new virtual systems where I could test attacks and defenses. In the past, I would just walk into the lab and fire up some virtual machines on some of the lab systems. But now that I am flying solo, I’ve decided to do this the same way that hackers might do it: by using whatever I had at hand.

The first step was to set up VirtualBox on one of my systems/servers. Since I’ve done that before, it was no problem setting things up again. My only hiccup was that VT-x was not enabled on my motherboard. Once I enabled it in the BIOS, things started to move rather quickly.

Then I had to start downloading (and building) appropriate OS images. My first test platform was Tails. Tails is a privacy-centered system that can be booted from a USB stick. My second platform was a Kali Linux instance. Kali is a fantastic pen testing platform – principally because it includes the Metasploit framework. I even decided to start building some attack targets. Right now, I have a VM for Raspbian (Linux on the Raspberry Pi), a VM for Debian Linux, one for Red Hat Linux, and a few for Windows targets. Now that the infrastructure is built, I can begin the learning process.
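
If you would rather script the lab build than click through the VirtualBox GUI, VBoxManage can do the heavy lifting. The sketch below creates a single Kali guest; the VM name, memory/CPU sizing, and the ISO path are placeholder assumptions, so adjust them to your own downloads and hardware.

    # Create and register a new VM for the Kali guest (names and paths are placeholders)
    VBoxManage createvm --name "kali-lab" --ostype Debian_64 --register
    VBoxManage modifyvm "kali-lab" --memory 2048 --cpus 2 --nic1 nat

    # Give it a 20 GB virtual disk and attach the Kali installer ISO
    VBoxManage createmedium disk --filename "$HOME/VirtualBox VMs/kali-lab/kali-lab.vdi" --size 20480
    VBoxManage storagectl "kali-lab" --name "SATA" --add sata --controller IntelAhci
    VBoxManage storageattach "kali-lab" --storagectl "SATA" --port 0 --device 0 --type hdd \
      --medium "$HOME/VirtualBox VMs/kali-lab/kali-lab.vdi"
    VBoxManage storageattach "kali-lab" --storagectl "SATA" --port 1 --device 0 --type dvddrive \
      --medium "$HOME/Downloads/kali-linux-installer.iso"

    # Boot the VM and walk through the installer
    VBoxManage startvm "kali-lab" --type gui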

Bottom Line

If you want to be an ethical hacker (or understand the methods of any hacker), then you can start without going to a class. Yes, it will be more difficult to learn by yourself. But it will be far less expensive – and far more memorable. Remember, you can always take the class later.

Do You Need A Residential Data Hub?

Data is essential for effective decision-making - even at home.
Residential Data Hubs: A Necessary Element @ Home

With more and more devices running in every home, it is becoming increasingly important to collect and manage all of the data that is available. Most people have no idea just how much data is currently being collected in their homes. But as the future arrives, almost every home will need to aggregate and assess data in order to make informed decisions and take informed actions. When that time arrives for you, you will need a “plug and play” residential data hub. Such devices will become an instrumental part of transforming your household into an efficient information processing system.

Currently, data is collected on your utility usage (e.g., electricity, water, Internet data usage, thermostat settings, etc.). But few people realize that future homes will be collecting enormous amounts of data. We (at the Olsen residence and at Lobo Strategies) have exploited many of the new technologies that are part of the Internet of Things (IoT). Through this experience, it is apparent just how much data is now available. We are collecting data about where our family and team members are located. We are collecting data on the physical environment throughout our buildings – including temperature and occupancy. We are collecting information on the internal and external network resources being used by “the team.” And the amount of data being collected today will be dwarfed by the amount of data that will be collected in the next few years.

The Necessity Of Residential Data Hubs

Over the past six months, we have been assembling a huge portfolio of data sources.

  • We use our DNS server logs and firewall logs to collect access-related data.
  • The Home Assistant platform collects data about all of our IoT devices.  [Note: In the past month, we’ve begun consolidating all of our IoT data into a TICK platform.]
  • Starting this week, we are now using router data to optimize bandwidth consumption.

While it is possible to manage each of these sources, it takes quite a bit of “integration” (measured in many labor hours) to assemble and analyze this data. But we are now taking steps to assemble all of this data for easy analysis and decision-making.

Consolidating Router Data

Our ISP put us in a box: they offered us an Internet “data only” package at a seriously reduced price. But buried within the contract were express limits on bandwidth. [Note: Our recent experience has taught us that our current ISP is not a partner; they are simply a service provider. Indeed, we have learned to treat them as such in the future.] Due to their onerous actions, we are now on a needed content diet. And as of the beginning of the week, we have taken the needed steps to stay within the “hidden” limits that our ISP imposed.

Fortunately, our network architect (i.e., our beloved CTO) found the root cause of our excessive usage. He noted the recent changes approved by the premise CAB (i.e., our CTO’s beloved wife). And then he correlated this with the DNS log data that identified a likely source of our excess usage. This solved the immediate problem. But what about the irreversible corrective action?

And as of yesterday, we’ve also taken the steps needed for ongoing traffic analysis.

  1. We’ve taken advantage of our earlier premise network decisions. We normally use residential-grade equipment in our remote locations. In candor, the hardware is comparable to its pricier, enterprise brethren. But the software has always suffered. Fortunately, we’ve used DD-WRT in every premise location. By doing this, we had a platform that we could build upon.
  2. The network team deployed remote access tools (i.e., ssh and samba) to all of our premise routers.
  3. A solid-state disk drive was formatted and then attached to the router’s USB 3.0 port (see the sketch after this list). [Note: We decided to use a non-journaled filesystem to limit excessive read/writes of the journal itself.]
  4. Once the hardware was installed, we deployed YAMon on the premise router.
  5. After configuring the router and YAMon software, we began long-term data collection.
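
For anyone who wants to replicate step 3, here is a minimal sketch. It assumes the drive is partitioned and formatted on a Linux workstation first and then mounted on the DD-WRT router over ssh; the device names, router address, and mount point are placeholders, so double-check them against your own hardware.

    # On a Linux workstation: partition the SSD and format it ext2 (non-journaled).
    # WARNING: /dev/sdX is a placeholder; verify the device name before running.
    sudo parted /dev/sdX --script mklabel gpt mkpart primary ext2 0% 100%
    sudo mkfs.ext2 -L routerdata /dev/sdX1

    # On the DD-WRT router (with USB storage support enabled in the web UI):
    ssh root@192.168.1.1
    mkdir -p /mnt/routerdata
    mount -t ext2 /dev/sda1 /mnt/routerdata   # the device name may differ on your router
    df -h /mnt/routerdata                     # confirm the drive is mounted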

Next Steps

While the new network data collection is very necessary, it is not a solution to the larger problem. Specifically, it is adding yet another data source (i.e., YADS). So what is now needed is a real nexus for all of the disparate data sources. We truly need a residential data hub. I need to stitch together the DNS data, the router data, and the IoT data into a single, consolidated system with robust out-of-the-box analysis tools.  

I wonder if it is time to build just such a tool – as well as launch the services that go along with the product.

Broadband Haircut: Economics Meets Technology

Cutting the cord is a dramatic step - and a complicated one.
Cord Cutting Can Be Dangerous

I love it when I can blend my passion (for technology) and my training (in economics). Over the past six weeks, I’ve been doing just that – as I’ve tried to constrain household Internet usage. Six weeks ago, we began a voyage that has been years in the making: we’ve finally given ourselves a ‘broadband haircut’. And the keys to our (hopeful) success have been research, data collection, and data analysis.

Background

We have been paying far too much for broadband data services. And we’ve been doing this for far too many years. For us, our broadband voyage started with unlimited plans. Unlike most people, I’ve spent many years in the telecom business. And so I’ve been very fortunate to pay little (or nothing) for my wireless usage. At the same time, most household broadband was priced based upon bandwidth and not total usage. So we have always made our decisions based upon how much peak data we required at any given point in time.

But things are changing – for myself and for the industry.

First, I no longer work for a telecom. Instead, I work for myself as an independent consultant. So I must buy wireless usage in the “open” marketplace. [Note: The wireless market is only “open” because it is run by an oligopoly and not by a monopoly.]

Second, things have changed in the fixed broadband marketplace. Specifically, sanctioned, local access “monopolies” are losing market share – and revenue. There is ample evidence to state unequivocally that cable companies charge too much for their services. For many years, they could charge whatever they wanted as long as they kept the local franchise in a particular municipality. But as competition has grown – mostly due to new technologies – so has the downward pressure on cable revenues.

Starting a few years ago, cable companies started to treat their fixed broadband customers just as wireless operators have treated their mobile customers. Specifically, they started to impose data caps. But many long-term customers just kept paying the old (and outrageously high) prices for “unlimited” services.

“But the times, they are a changin’.”

Cord Cutting Has Increased Pressure

As more and more content delivery channels are opening up, more customers are starting to see that they are paying far too much for things that they don’t really want or need. How many times have you wondered what each of the ESPN channels is costing you? Or have you ever wondered if the H&G DIY shows are worth the price that you pay for them?

Many people have been feeling the way that you must feel. And for some, the feelings of abuse are intolerable. Bundling and price duress have infuriated many customers. Some of those customers have been fortunate to switch operators – if others are available in their area. Some customers have just cut the cord to bundled TV altogether.

And this consumer dissatisfaction has led to dissatisfaction in the board rooms of most telecom companies. But instead of reaching out to under-served customers and developing new products and new markets (both domestic and overseas), most telecom executives are looking for increases in “wallet share”; they are trying to bundle more services to increase their revenue. Unfortunately, the domestic markets are pretty much tapped out. “Peak cable” is upon most operators.

Nevertheless, some boards think that punishing their customers is the best means of revenue retention. Rather than switching to new products and new services, some operators have put debilitating caps on their customers in the hopes that they can squeeze a few more dollars from people that are already sick and tired of being squeezed. The result will be an even further erosion of confidence and trust in these corporations.

Making It Personal

Six weeks ago, we decided that it was time to cut the cord. We’ve been planning this for eighteen months. However, we had a contract that we needed to honor. But the instant that we dropped off our set top devices at Comcast, they brought out their real deals. In a matter of moments, we had gone from $125 per month (w/o fees) to $50 per month (w/o fees). So we took that deal – for one year. After all, we would be getting almost the same bandwidth for a tremendously reduced price. Ain’t competition grand?

But like most people, we didn’t know how much data we used while we were on an ‘unlimited’ plan. And in fairness, we didn’t care – until we started to see just how much data we were using. Bottom line: Once we had to pay for total consumption (and not just for peak consumption), we started to look at everything that would spin the consumption ‘meter’. And when we got the first email from Comcast indicating that we had exceeded their artificial, one terabyte (per month) cap [that was buried somewhere deep within the new contract], we began a frantic search for ‘heavy hitters’.

Make Decisions Based Upon Data
[Image: Pi-hole data points the way.]
DNS Data

Our hunt for high-bandwidth consumers began in earnest. And I had a pretty good idea about where to start. First, I upped my bet on ad blocking. Most ad blockers block content after it has arrived at your device. Fortunately, my Pi-hole was blocking ads before they were downloaded. At the same time, I was collecting information on DNS queries and blocked requests. So I could at least find some evidence of who was using our bandwidth.
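
If you want to dig deeper than the web reports, the Pi-hole’s long-term query database can be interrogated directly. The sketch below assumes the default FTL database location and schema (a ‘queries’ table with ‘client’ and ‘timestamp’ columns); it lists the chattiest DNS clients over the last day.

    # List the ten busiest DNS clients over the last 24 hours
    # (assumes the default Pi-hole FTL database location and schema)
    sudo sqlite3 /etc/pihole/pihole-FTL.db \
      "SELECT client, COUNT(*) AS requests
         FROM queries
        WHERE timestamp > CAST(strftime('%s','now','-1 day') AS INTEGER)
        GROUP BY client
        ORDER BY requests DESC
        LIMIT 10;"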

[Image: Pi-hole data showing the biggest ad conveyors (i.e., the largest DNS consumers).]

After a few minutes of viewing reports, I noted that our new content streaming service might be the culprit. When we cut the cord on cable TV, we had switched to YouTube TV (YTTV) on a new Roku device. And when I saw that device on the ‘big hitter’ list, I knew to dive deeper. I spent a few too many hours ensuring that my new Roku would not be downloading ad content. And after a few attempts, I’ve finally gotten the Pi-hole to block most of the new advertising sources. After all, why would I want to pay traffic fees for something that I didn’t even want?
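
For anyone fighting the same battle, the Pi-hole CLI makes it easy to add new blocks as they show up in the query log. The domains below are placeholders rather than the actual hosts my Roku was calling; substitute whatever your own query log reveals.

    # Blacklist a specific ad host seen in the query log (placeholder domain)
    pihole -b ads.example-streamer.com

    # Block an entire ad-serving domain and its subdomains (wildcard)
    pihole --wild example-adnetwork.com

    # Tail the query log to confirm the Roku's requests are now being blocked
    pihole -t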

The Price Of Freedom Is Eternal Vigilance

As is often the case, the first solution did not solve the real problem. Like President G.W. Bush in Gulf War II, I had prematurely declared success.  So I started to look deeper. It would have helped if I had detailed data on just which devices (and clients) were using what amounts of bandwidth.  But I didn’t have that data. At least, not then. Nevertheless, I had a sneaking suspicion that the real culprit was still the new content streamer.

[Image: DD-WRT daily usage data showing a dramatic reduction after solving the Roku shutdown problem.]

After a whole lot of digging through Reddit, I learned that my new Roku remote did not actually shut off the Roku. Rather, their ‘power’ button only turned off the television set. And in the case of YouTube TV, the app just kept running. Fundamentally, we were using the Roku remote to turn the TV off at night – while the Roku device itself kept merrily consuming our data on a 7×24 basis.

The solution was simple: we had to turn off YouTube TV when we turned off the TV. It isn’t hard to do. But remembering to do it would be a challenge. After all, old habits do die hard. So I took a piece of tech from the electrical monopoly (ConEd) to solve a problem with the rapacious Internet provider.  A few months ago, we had an energy audit done. And as part of that audit, we got a couple of TrickleStar power strips. I re-purposed one of those strips so that when the TV was turned off, the Roku would be turned off as well.

What’s Next?

Now that we have solved that problem, I really do need to have better visibility on those things that can affect our monthly bill. Indeed, the self-imposed ‘broadband haircut’ is something that I must do all of the time. Consequently, I need to know which devices and applications are using just how much data. The stock firmware from Netgear provides no such information. Fortunately, I’m not running stock firmware. By using DD-WRT, I do have the ability to collect and save usage data.

To do this, I first need to attach an external USB  drive to the router. Then I need to collect this data and store it on the external drive. Finally, I need to routinely analyze the data so that I can keep on top of new, high-bandwidth consumers as they emerge.
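
Even before a full YAMon-style report is in place, a tiny script can implement the collect-and-store half of that loop. The sketch below runs on the router itself (e.g., from cron) and appends WAN byte counters to a CSV on the USB drive; the interface name and mount point are assumptions, and YAMon provides far richer, per-device accounting than this.

    #!/bin/sh
    # Append a timestamped snapshot of the WAN interface byte counters to a CSV
    # on the router's USB drive. IFACE and CSV are placeholders for this sketch.
    IFACE=vlan2
    CSV=/mnt/routerdata/wan_bytes.csv

    RX=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
    TX=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
    echo "$(date +%s),$RX,$TX" >> "$CSV"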

Bottom Line

Economics kicked off this effort. Data analysis informed and directed this effort. With a modest investment (i.e., Pi-hole, DD-WRT, an SSD drive, and a little ingenuity), I hope to save over a thousand dollars every year.  And I am not alone. More and more people will demand a change from their operators – or they will abandon their operators altogether.

If you want to perform a similar ‘broadband haircut’, the steps are easier than they used to be. But they are still more difficult than they should be. But there is one clear piece of advice that I would offer: start planning your cable exit strategy.

Home Assistant Portal: TNG

Over the past few months, I have spent much of my spare time deepening my home automation proficiency.  And most of that time has been spent understanding and tailoring Home Assistant. But as of this week, I am finally at a point where I am excited to share the launch of my Home Assistant portal. 

Overview

Some of you may not be familiar with Home Assistant (HA). So let me spend one paragraph outlining the product. HA is an open source “home” automation hub. As such, it can turn your lights on and off, manage your thermostat, and open/close your garage door (and window blinds). It can also detect your presence within (and around) your home. And it works with thousands of in-home devices. It provides an extensive automation engine so that you can script countless events that occur throughout your home. It securely integrates with key cloud services (like Amazon Alexa and Google Assistant). Finally, it is highly extensible – with a huge assortment of add-ons available to manage practically anything.
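
If you want a feel for just how programmable the hub is, HA also exposes a REST API alongside its UI. The calls below are a minimal sketch: the host name, port, entity id, and the long-lived access token (created from your HA user profile) are all placeholders.

    # Turn a light on through the Home Assistant REST API (placeholders throughout)
    curl -X POST "http://homeassistant.local:8123/api/services/light/turn_on" \
      -H "Authorization: Bearer ${HA_TOKEN}" \
      -H "Content-Type: application/json" \
      -d '{"entity_id": "light.living_room"}'

    # Read back the current state of the same light
    curl -s "http://homeassistant.local:8123/api/states/light.living_room" \
      -H "Authorization: Bearer ${HA_TOKEN}"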

Meeting Project Goals

Today, I finished my conversion to the new user interface (UI). While there have been many ways to access the content within HA before now, the latest UI (code-named Lovelace) makes it possible to create a highly customized user experience. And coupled with the theme engine baked into the original UI (i.e., the ‘frontend’), it is possible to make a beautiful portal to meet your home automation needs.

In addition to controlling all of the IoT (i.e., Internet of Things) devices in our home, I have baked all sorts of goodies into the portal. In particular, I have implemented (and tailored) the data collection capabilities of the entire household. At this time, I am collecting key metrics from all of my systems as well as key state changes for every IoT device. In short, I now have a pretty satisfying operations dashboard for all of my home technology.

Bottom Line

Will my tinkering end with this iteration? If you know me, then you already know the answer. Continuous process improvement is a necessary element for the success of any project. So I expect rapid changes will be made almost all of the time – and starting almost immediately. And as a believer in ‘agile computing’ (and DevOps product practices), I intend to include my ‘customer(s)’ in every change. But with this release, I really do feel like my HA system can (finally) be labeled as v1.0! 

Time Series Data: A Recurring Theme

When I graduated from college (over three-and-a-half decades ago), I had an eclectic mix of skills. I obtained degrees in economics and political science. But I spent a lot of my personal time building computers and writing computer programs. I also spent a lot of my class time learning about econometrics – that is, the representation of economic systems in mathematical/statistical models. While studying, I began using SPSS to analyze time series data.

Commercial Tools (and IP) Ruled

When I started my first job, I used SPSS for all sorts of statistical studies. In particular, I built financial models for the United States Air Force so that they could forecast future spending on the Joint Cruise Missile program. But within a few years, the SPSS tool was superseded by a new program out of Cary, NC. That program was the Statistical Analysis System (a.k.a., SAS). And I have used SAS ever since.

At first, I used the tool as a very fancy summation engine and report generator. It even served as the linchpin of a test-bed generation system that I built for a major telecommunications company. In the nineties, I began using SAS for time series data analysis. In particular, we piped CPU statistics (in the form of RMF and SMF data) into SAS-based performance tools.

Open Source Tools Enter The Fray

As the years progressed, my roles changed and my use of SAS (and time series data) began to wane. But in the past decade, I started using time series data analysis tools to once again conduct capacity and performance studies. At a major financial institution, we collected system data from both Windows and Unix systems throughout the company. And we used this data to build forecasts for future infrastructure acquisitions.

Yes, we continued to use SAS. But we also began to use tools like R. R became a pivotal tool in most universities. But many businesses still used SAS for their “big iron” systems. At the same time, many companies moved from SAS to Microsoft-based tools (including MS Excel and its pivot tables).

TICK Seizes Time Series Data Crown

Over the past few years, “stack-oriented” tools have emerged as the next “new thing” in data centers. [Note: Stacks are like clouds; they are everywhere and they are impossible to define simply.] Most corporations have someone’s “stack” running their business – whether it be Amazon AWS, Microsoft Azure, Docker, Kubernetes, or a plethora of other tools.  And most commercial ventures are choosing hybrid stacks (with commercial and open source components).

And the migration towards “stacks” for execution is encouraging the migration to “stacks” for analysis. Indeed, the entire shift towards NoSQL databases is being paired with a shift towards time series databases.  Today, one of the hottest “stacks” for analysis is TICK (i.e., Telegraf, InfluxDB, Chronograf, and Kapacitor).

TICK Stack @ Home

Like most projects, I stumbled onto the TICK stack. I use Home Assistant to manage a plethora of IoT devices. And as the device portfolio has grown, my need for monitoring these devices has also increased. A few months ago, I noted that an InfluxDB add-on could be found for HassIO.  So I installed the add-on and started collecting information about my Home Assistant installation.
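
Once the add-on is running, it is easy to peek at what Home Assistant is actually writing. The sketch below uses the InfluxDB 1.x CLI and assumes the integration’s default database name (“homeassistant”); adjust the host and database to match your own setup.

    # See which measurements Home Assistant is writing (InfluxDB 1.x CLI)
    influx -host 127.0.0.1 -port 8086 -database homeassistant -execute 'SHOW MEASUREMENTS'

    # Peek at a handful of the underlying series
    influx -database homeassistant -execute 'SHOW SERIES LIMIT 20'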

Unfortunately, the data that I collected began to exceed my capacity to store the data on the SD card that I had in my Raspberry Pi. So after running the system for a few weeks, I decided to turn the data collection off – at least until I solved some architectural problems. And so the TICK stack went on the back burner.

I had solved a bunch of other IoT issues last week. So this week, I decided to focus on getting the TICK stack operational within the office. After careful consideration, I concluded that the test cases for monitoring would be a Windows/Intel server, a Windows laptop, my Pi-hole server, and my Home Assistant instance.

Since I was working with my existing asset inventory, I decided to host the key services (or daemons) on my Windows server. So I installed Chronograf, InfluxDB, and Kapacitor onto that system. Since there was no native support for a Windows service install, I used the Non-Sucking Service Manager (NSSM) to create the relevant Windows services. At the same time, I installed Telegraf onto a variety of desktops, laptops, and Linux systems. After only a few hiccups, I finally got everything deployed and functioning automatically. Phew!
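
For anyone repeating this on Windows, the NSSM wrapping takes only a few commands from an elevated command prompt. The install paths below are placeholders; point them at wherever you unpacked the TICK binaries.

    :: Register the TICK daemons as Windows services (paths are placeholders)
    nssm install InfluxDB "C:\TICK\influxdb\influxd.exe" -config "C:\TICK\influxdb\influxdb.conf"
    nssm install Kapacitor "C:\TICK\kapacitor\kapacitord.exe" -config "C:\TICK\kapacitor\kapacitor.conf"
    nssm install Chronograf "C:\TICK\chronograf\chronograf.exe"

    :: Start the services
    nssm start InfluxDB
    nssm start Kapacitor
    nssm start Chronograf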

Bottom Line

I implemented the TICK components onto a large number of systems. And I am now collecting all sorts of time series data from across the network. As I think about what I’ve done in the past few days, I realize just how important it is to stand on the shoulders of others. A few decades ago, I would have paid thousands of dollars to collect and analyze this data. Today, I can do it with only a minimal investment of time and materials. And given these minimal costs, it will be possible to use these findings for almost every DevOps engagement that arises.

Continuous Privacy Improvement

In its latest release, Firefox extends its privacy advantage over other browsers. Their efforts at continuous privacy improvement may keep you ahead of those who wish to exploit you.
Firefox 63 Extends Privacy Lead

In the era of Deming, the mantra was continuous process improvement. The imperative to remain current and always improve continues even to this day. And as of this morning, the Mozilla team has demonstrated its commitment to continuous privacy improvement; the release of Firefox 63 continues the commitment of the entire open source community to the principle that Internet access is universal and should be unencumbered.

Nothing New…But Now Universally Available

I’ve been using the new browsing engine (in the form of Firefox Quantum) for quite some time. This new engine is an incremental improvement upon previous rendering engines. In particular, with the older engines, those who enabled tracking protection often had to deal with web sites that would not render properly. It became a trade-off between privacy and functionality.

But now that the main code branch has incorporated the new engine, there is more control over tracker protection. And this control will allow those who are concerned about privacy to still use some core sites on the web. This new capability is not fully matured. But in its current form, many new users can start to implement protection from trackers.

Beyond Rendering

But my efforts at continuous privacy improvement also include enhanced filtering on my Pi-hole DNS platforms. The Pi-hole has faithfully blocked ads for several years. But I’ve decided to up the ante a bit.

  1. I decided to add regular expressions to increase the coverage of ad blocking. I added the following regex filters:
         
         ^(.+[-_.])??ad[sxv]?[0-9]*[-_.]
         ^adim(age|g)s?[0-9]*[-_.]
         ^adse?rv(e(rs?)?|ices?)?[0-9]*[-.]
         ^adtrack(er|ing)?[0-9]*[-.]
         ^advert(s|is(ing|ements?))?[0-9]*[-_.]
         ^aff(iliat(es?|ion))?[-.]
         ^analytics?[-.]
         ^banners?[-.]
         ^beacons?[0-9]*[-.]
         ^clicks?[-.]
         ^count(ers?)?[0-9]*[-.]
         ^pixels?[-.]
         ^stat(s|istics)?[0-9]*[-.]
         ^telemetry[-.]
         ^track(ers?|ing)?[0-9]*[-.]
         ^traff(ic)?[-.]
  2. My wife really desires to access some sites that are more “relaxed” in their attitude. Consequently, I set her devices to use the Cloudflare DNS servers (i.e., 1.1.1.1 and 1.0.0.1). I then added firewall rules to block all Google DNS access. This should allow me to block the ads served to Google devices that hard-code Google’s DNS (e.g., Chromecast, Google Home, etc.), since those devices will now fall back to the Pi-hole. I then added these rules to my router.

         iptables -I FORWARD --destination 8.8.8.8 -j REJECT
         iptables -I FORWARD --destination 8.8.4.4 -j REJECT

These updates now block ads on my Roku devices and on my Chromecast devices.
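
A quick way to confirm the new rules are doing their job is to test from a client on the LAN. In the sketch below, the Pi-hole’s address (192.168.1.2) is a placeholder; the first query should fail while the second should still resolve.

    # Queries sent straight to Google DNS should now be rejected by the router...
    dig @8.8.8.8 example.com +time=2 +tries=1

    # ...while queries through the Pi-hole (placeholder address) still resolve
    dig @192.168.1.2 example.com +time=2 +tries=1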

Bottom Line

In the fight to ensure your privacy, it is not enough to “fire and forget” with a fixed set of tools. Instead, you must always be prepared to improve your situation. After all, advertisers and identity thieves are always trying to improve their reach into your wallet. Show them who the real boss is. It should be (and can be) you!

Youtube Outage Weakens Trust

[Image: The Youtube outage damages trust.]
Youtube Outage

Why do we trust cloud services? That’s simple: We trust cloud service providers because we don’t trust ourselves to build and manage computer services – and we desperately want the new and innovative services that cloud providers are offering. But trust is a fleeting thing. Steve Wozniak may have said it best when he said, “Never trust a computer you can’t throw out a window.” Yet how much of our lives is now based upon trusting key services to distant providers? Last night confirmed this reality for many people; the great Youtube outage of October 16 may have diminished the trust that many people had in cloud services.

A Quiet Evening…

It was chilly last evening. After all, it is October and we do live in Chicago. So neither Cindy nor I were surprised. Because it is becoming cold, we are starting to put on our more sedentary habits. Specifically, we have been having soups and chili. And last night, we had brats in marinara sauce. After dinner, we settled down to watch a little television. Cindy was going to watch “This Is Us” while I wanted to catch up on “Arrow”.

Everything was going serenely.

It had not been so the previous evening. We were having some trouble with one of the new Roku enhanced remotes. These devices use WiFi Direct rather than IR. And my specialized WiFi configuration was causing trouble for the remote. It was nothing serious. But I like things solved. So I spent  six (6) hours working on a new RF implementation for my router. [Note: At 0130CST, I abandoned that effort and went back to my ‘last known good’ state on the router.]

…gone terribly wrong!

Yesterday morning brought a new day. I had solved the problems that I had created on Monday evening. Now, everything was working well – until the television stopped working. While I was watching “Arrow” and Cindy was watching “This Is Us”, I started getting errors in the YoutubeTV stream. Then I heard my wife ask the dreaded question: “Is there something wrong with the television?”  And my simple response was, “I’ll check.”

At first, I thought that it might have been the new ISP hookup. It wasn’t. Then I wondered if it was something inside the house. Therefore, I started a Plex session on the Roku so that Cindy could watch “Ant-man and the Wasp” while I dug deeper. Of course, that worked well. So I knew that there must have been a different problem occurring. I wondered if YoutubeTV itself was the problem. So I tried it while disconnected from our network (i.e., on my phone, which is on the T-Mobile network). When that didn’t work, I knew that we were part of a larger problem. My disappointment grew because we had just switched from cable TV to streaming YoutubeTV. But it was Google. So I figured it would be solved quickly.

I decided to catch up on a few Youtube channels that I follow. And I couldn’t reach them either. My disappointment grew into astonishment: could Google be having such a widespread problem? Since I had network  connectivity, I searched DuckDuckGo and found many links to the outage. And we just happened to use all of the affected services (i.e., Youtube and YoutubeTV). My wife was happy to watch the movie. And I was happy to move onto something else – like Home Assistant.

And Then The Youtube Outage Occurred

As I started to think about this outage, I wondered what might have caused it. And I mentally recited operations protocols that I would use to find the root cause and to implement irreversible corrective actions. But those steps were currently being taken by Google staff. So I focused on what this might mean to end users (like myself). What will I do with this info? First, I can no longer assume that “Google couldn’t be the problem.” In one stroke, years of trust were wiped away. And with the same stroke, days of trust in the YoutubeTV platform were discarded. Unfortunately, Google will be the first thing I check when I go through my problem-solving protocols. 

Eventually, I will rebuild that lost trust – if Google is transparent in their communications concerning the Youtube outage. Once I learn what really happened, I can let time heal the trust divide. But if Google is not transparent, then distrust will become mistrust. Here’s hoping that Google doesn’t hide its troubles. In the meantime, their customers should demand that Google fully explain what happened.

I Am Not A Product!

I have been a technology “early adopter” all of my life. And I have been a “social media” adopter since its inception. Indeed, I joined Twitter in the fall of 2006 (shortly after its launch in July 2006). I was also an early adopter of Facebook. And in the early days, I (and many others) thought of these platforms as the eventual successors to email. But as of this moment, I am now one of the large stream of people abandoning these platforms.

Why am I abandoning these platforms? They do have some value, right? As a technologist, they do “connect” me to other technologists. But it seems that even as I become more connected to many of these platforms, I am becoming even more disconnected from the community in which I live. 

At the same time, these platforms are becoming more of a personal threat. This week, we learned of yet another data breach at Facebook. I am sure that there are millions of people that have been compromised – again. After the first breach, I could make a case that Facebook would improve their system. But after the numerous and unrelenting breaches, I can no longer make a case that I am “safe” when I use these platforms.

Finally, these platforms are no longer fostering unity. Instead, they are making it easy to be lax communicators. We can abandon the civility of face-to-face dialog. And we can dismiss those with whom we disagree because we do not directly interact with them. Consequently, we do not visualize them as people but as “opponents”.

Social media was supposed to be about community. It was also supposed to be a means of engaging in disagreement without resorting to disunity. Instead, most social media platforms have degenerated into tribalism. And for my part in facilitating this devolution, I am exceedingly sorry.

I will miss a lot of things by making this stand. Indeed, my “tribe” (which includes my family) has come to rely upon social media. But I can no longer be part of such a disreputable and inharmonious ecosystem. 

Hopefully, I won’t miss it too much.

By the way, one of the most important benefits of disconnecting from the Matrix is that my personal life, my preferences, and my intentions will no longer be items that can be sold to the highest bidder. It is well said that “if you are not paying for the product, then you probably are the product.” So I’m done with being someone else’s product.

As for me, I am taking the red pill. Tata, mes amis

#FarewellFacebook

VPNFilter Scope: Talos Tells A Tangled Tale

[Image: IoT threats: hackers want to take over your home.]

Several months ago, the team at Talos (a research group within Cisco) announced the existence of VPNFilter – now dubbed the “Swiss Army knife” of malware. At that time, VPNFilter was impressive in its design. And it had already infected hundreds of thousands of home routers. Since the announcement, Talos has continued to study the malware. Last week, Talos released its “final” report on VPNFilter. In that report, Talos highlighted that the VPNFilter scope is far larger than first reported.

“Improved” VPNFilter Capabilities

In addition to the first stage of the malware, the threat actors included the following “plugins”:

  • ‘htpx’ – a module that redirects and inspects the contents of unencrypted Web traffic passing through compromised devices.
  • ‘ndbr’ – a multifunctional secure shell (SSH) utility that allows remote access to the device. It can act as an SSH client or server and transfer files using the SCP protocol. A “dropbear” command turns the device into an SSH server. The module can also run the nmap network port scanning utility.
  • ‘nm’ – a network mapping module used to perform reconnaissance from the compromised devices. It performs a port scan and then uses the Mikrotik Network Discovery Protocol to search for other Mikrotik devices that could be compromised.
  • ‘netfilter’ – a firewall management utility that can be used to block sets of network addresses.
  • ‘portforwarding’ – a module that allows network traffic from the device to be redirected to a network specified by the attacker.
  • ‘socks5proxy’ – a module that turns the compromised device into a SOCKS5 virtual private network proxy server, allowing the attacker to use it as a front for network activity. It uses no authentication and is hardcoded to listen on TCP port 5380. There were several bugs in the implementation of this module.
  • ‘tcpvpn’ – a module that allows the attacker to create a Reverse-TCP VPN on compromised devices, connecting them back to the attacker over a virtual private network for export of data and remote command and control.

Disaster Averted?

Fortunately, the impact of VPNFilter was blunted by the Federal Bureau of Investigation (FBI). The FBI recommended that every home user reboot their router. The FBI hoped that this would slow down infection and exploitation. It did. But it did not eliminate the threat.

In order to be reasonably safe, you must also ensure that you are on a version of router firmware that protects against VPNFilter. While many people heeded this advice, many did not. Consequently, there are thousands of routers that remain compromised. And threat actors are now using these springboards to compromise all sorts of devices within the home. This includes hubs, switches, servers, video players, lights, sensors, cameras, etc.

Long-Term Implications

Given the ubiquity of devices within the home, the need for ubiquitous (and standardized) software update mechanisms is escalating. You should absolutely protect your router as the first line of defense. But you also need to routinely update every type of device in your home.

Bottom Line

  1. Update your router! And update it whenever there are new security patches. Period.
  2. Only buy devices that have automatic updating capabilities. The only exception to this rule should be if/when you are an accomplished technician and you have established a plan for performing the updates manually.
  3. Schedule periodic audits of device firmware. Years ago, I did annual battery maintenance on smoke detectors. Today, I check every device at least once a month. 
  4. Retain software backups so that you can “roll back” updates if they fail. Again, this is a good reason to spend additional money on devices that support backup/restore capabilities. The very last thing you want is a black box that you cannot control.

As the VPNFilter scope and capabilities have expanded, the importance of remediation has also increased. Don’t wait. Don’t be the slowest antelope on the savanna.

Social Media Schisms Erupt

A funny thing happened on the way to the Internet: social media schisms are once again starting to emerge. When I first used the Internet, there was no such thing as “social  media”. If you were a defense contractor, a researcher at a university, or part of the telecommunications industry, then you might have been invited to participate in the early versions of the Internet. Since then, we have all seen early email systems give way to bulletin boards, Usenet newsgroups, and early commercial offerings (like CompuServe, Prodigy, and AOL). These systems  then gave way to web servers in the mid-nineties.  And by the late nineties, web-based interactions began to flourish – and predominate.

History Repeats Itself

Twenty years ago, people began to switch from AOL to services like MySpace. And just after the turning of the millennium, services like Twitter began to emerge. At the same time, Facebook nudged its way from a collegiate dating site to a full-fledged friendship engine and social media platform. With each new turning of the wheel of innovation, the old has been vanquished by the “new and shiny” stuff.  It has always taken a lot of time for everyone to hop onto the new and shiny from the old and rusty. But each iteration brought something special.

And so the current social media title holders are entrenched. And the problem with their interaction model has been revealed. In the case of Facebook and Twitter, their centralized model may very well be their downfall. By having one central system, there is only one drawbridge for vandals to breach. And while there are walls that ostensibly protect you, there is also a royal guard that watches everything that you do while within the walls. Indeed, the castle/fortress model is a tempting target for enemies (and “friends”) to exploit.

Facebook (and Twitter) Are Overdue

The real question that we must all face is not if Facebook and Twitter will be replaced, but when will it happen. As frustration has grown with these insecure and exposed platforms, many people are looking for an altogether new collaboration model. And since centralized systems are failing us, many are looking at decentralized systems.

A few such tools have begun to emerge. Over the past few years, tools like Slack are starting to replace the team/corporate systems of a decade ago (e.g., Atlassian Jira and Confluence). For some, Slack is now their primary collaboration engine. And for the developers and gamers among us, tools like Discord are gaining notoriety – and membership.

Social Media Schisms Are Personal

But what of Twitter and what of Facebook?  Like many, I’ve tried to live in these walled gardens. I’ve already switched to secure clients. I’ve used containers and proxies to access these tools. And I have kept ahead of the wave of insecurity – so far. But the cost (and risk) is starting to become too great. Last week, Facebook revealed that it had been breached – again. And with that last revelation, I decided to take a Facebook break.

My current break will be at least two weeks. But it will possibly be forever. That is because the cost and risk of these centralized systems is becoming higher than the convenience that these services provide.  I suspect that many of you may find yourselves in the same position.

Of course, a break does not necessarily mean withdrawal from all social media. In fairness, these platforms do provide value. But the social media schisms have to end. I can’t tolerate the politics of some of my friends. But they remain my friends (and my family) despite policy differences that we may have. And I want a way to engage in vigorous debate with some folks while maintaining collegiality and a pacific mindset with others.

So I’m moving on to a decentralized model. I’ve started a Slack community for my family. My adult kids are having difficulty engaging in even one more platform. But I’m hopeful that they will start to engage. And I’ve just set up a Mastodon account (@cyclingroo@mastodon.cloud) as a Twitter “alternative”. And I’m becoming even more active in Discord (for things like the Home Assistant community).

All of these tools are challengers to Facebook/Twitter. And their interaction model is decentralized. So they are innately more secure (and less of a target). The biggest trouble with these systems is establishing and maintaining an inter-linked directory.

A Case for Public Meta-directories

In a strange way, I am back to where I was twenty years ago. In the late nineties, my employer had many email systems and many directories. So we built a directory of directories. Our first efforts were email-based hub-and-spoke directories based upon X.500. And then we moved to Zoomit’s Via product (which was later acquired by Microsoft). [Note: After the purchase, Microsoft starved the product until no one wanted its outdated technologies.] These tools served one key purpose: they provided a means of linking all directories together.

Today, this is all done through import tools that any user can employ to build personalized contact lists. But as more people move to more and different platforms, the need for a distributed meta-directory has been revealed. We really do need a public white pages model for all users on any platform.

Bottom Line

The value of a directory of directories (i.e., a meta-directory) still exists. And when we move from centralized to decentralized social media systems, the imperative of such directory services becomes even more apparent. At this time, early adopters should already be using tools like Slack, Discord, and even Mastodon. But until interoperability technologies (like meta-directories) become more ubiquitous, either you will have to deal with the hassle of building your own directory or you will have to accept the insecurity inherent in a centralized system.