Mozilla Browser Deal – A Bewildering Compromise You Must Make

Google + Mozilla

I spent several hours this weekend on Reddit discussing Mozilla and their future. After finally de-googling my mobile life, I have been confronted by one simple truth: the organization that provides my default browser is now a larger threat to my safety than most of the other threat actors that I face. Why is that? That’s simple. The new Google and Mozilla browser deal ensures that private data – my private data and your private data – will be collected by Mozilla and then delivered to Google.

Does This Make Any Sense?

From Google’s viewpoint, this makes perfect sense. They can make sure that their search engine is still the default for a large number of Firefox users. Most folks don’t really understand how search engines work. And if you don’t understand something, you usually don’t try to change it. All of us know the adage that if it isn’t broken, then you shouldn’t fix it. For this reason, most people don’t touch the browser that they use. This deal extension plays right into Newton’s First Law: the law of inertia. It enshrines a docile Mozilla. It ensures that they will act as the “dutiful competitor” whenever state and federal regulators get too inquisitive.

From Mozilla’s viewpoint, this also makes sense. The Firefox market share is shrinking – and has been shrinking for years. And Mozilla just laid off hundreds of employees. Obviously, they are not going to innovate their way out of their death spiral. Some think that this deal simply provides sufficient financial cushion for the leaders of the Mozilla Foundation to land on their feet.

From A Personal Vantage Point

When you have a list and you move things off of that list, new things surface as “the most important” thing that you must address. So as I’ve reduced my risks from Google, something else had to take Google’s place. And at this moment, it is the browser. More specifically, it is Mozilla’s browser.

Firefox for Android has four (4) embedded trackers: Adjust, Google AdMob, Google Firebase Analytics, and LeanPlum. Half of these trackers report data directly to Google. So after recently breaking the chain that kept me in Google’s sway, I am now left with someone else taking the very same data treasure and “gifting” it right back to Google. Given their financial peril, I truly doubt that the Mozilla Foundation will be convinced to remove these trackers on my behalf.

A Historical Alliance

All of this makes some historical sense. When the Mozilla Foundation first started, they were fighting Microsoft. Today, I am sure that many of the people at Mozilla still see Google as “the enemy of my enemy” and not just “the ‘new’ enemy”. 

But the times have changed, right? Microsoft was cowed. And Google rose triumphantly – as did the Mozilla Foundation. Nevertheless, one thing remains the same. There is a ravenous competitor prowling the field. And the consumers that were threatened before are threatened once again. But this time, it is Google that needs to be cowed.

What’s A Geek To Do?

My situation is simple: my most important mobile app is my browser. And this product is now siphoning private information into Google’s coffers. I can’t tolerate this. So I’ve been struggling with this all weekend. What can I do?

  1. I could continue to use the Fennec variant available on F-Droid. This will work. But this product is end-of-life (EOL). And so there will be no new versions. So while I can keep on using this product, I am living on borrowed time.
  2. I could change my browser. There are some very good browsers that meet some very specific needs. I could use Chromium – or any one of a number of derivative works. But it is very difficult to cross this bridge. After all, Chromium is the basis for all of Google’s proprietary browser investments.
  3. I could also use any of a bunch of browsers that are descended from Firefox. IceCat is one such descendant. It is a good browser that is built upon Gecko. And it is actively maintained. Its developers are trying to keep up with Firefox. But their next update probably won’t happen until the Mozilla folks lay down their next “extended support release” (or ESR). Consequently, this browser is intentionally behind the times – and the last ESR release is quite dated.
  4. I could use another browser that is not part of either legacy. But to be fair, there are very few new options that fall into this category.
  5. I could switch and use the “new” Firefox for Android. This one stings. I am emotionally hurt by the gyrations that Mozilla is inflicting upon their users. Nevertheless, their new version is a very good browser – albeit with several Google trackers. Fortunately, I can neutralize those trackers. By using Pi-hole, I can ensure that connections made to named Google services will not be properly resolved (see the sketch below). In this way, I can have Firefox and still block Google – at least until Mozilla defeats this DNS-oriented defense.
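
For the curious, here is a minimal sketch of that Pi-hole defense. The tracker domains and the Pi-hole address below are illustrative assumptions – verify the actual hosts that your build contacts before blocking them:

```sh
# add suspected tracker domains to the Pi-hole blacklist (Pi-hole v4 CLI)
pihole -b app-measurement.com app.adjust.com

# confirm that a blocked name no longer resolves from a client
# (192.168.1.2 is a placeholder for your Pi-hole's address)
dig app-measurement.com @192.168.1.2 +short
```
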
Bottom Line

So what will I do? For now, I’m switching from Fennec F-Droid to Firefox for Android. And I’ve reviewed all of the adlists included on my Pi-hole. As a result, I can use Mozilla Firefox while still intercepting any private data being fed to Google.

Is the Mozilla browser deal good for me? It absolutely is not. Is the deal good for the industry? It probably isn’t. Will I make a temporary compromise until a better solution emerges? Yes, I will make that compromise. But I am altogether unhappy living in this compromised state.

CPI: Continuous Privacy Improvement – Part 2

Continuous Improvement

Continuous improvement is nothing new. In the early nineties, total quality management (TQM) was all the rage. And even then, TQM was a revisiting of techniques applied in preceding decades. Today, continuous improvement is embraced in nearly every development methodology. But whether from the “fifties” or the “twenties”, the message is still the same: any measurable improvement (whether in processes or in technologies) is the result of a systematic approach. This is true for software development. And it is true for continuous privacy improvement.

Privacy Is Threatened

With every wave of technology change, there have been concurrent improvements in determining what customers desire – and what they will “spend” in order to obtain something. At the same time, customers have become increasingly frustrated with corporate attempts to “anticipate” their “investment” habits. For example, the deployment of GPS and location technologies has allowed sellers to “reach” potential customers whenever those customers are physically near the point of sale. In short, when you get to the Magnificent Mile in Chicago, you’ll probably get ads for stores that are in your vicinity.

While some people find this exhilarating, many people find it frustrating. And some see these kinds of capabilities as demonstrative of something darker: the ability of those in power to monitor and manage the larger populace. For some, the “sinister” people spying on them are corporations. For many, the “malevolent” forces that they fear are shadowy “hackers” who can steal (or have already stolen) both property and identity. And for a very small group of people, the powers that they fear most are governments and / or similar authorities. For everyone, the capability to monitor and influence behavior is real.

Surveillance And Exploitation Are Not New

Governments have always tried to “watch” their citizens – whether to protect them from threats or to “manage” them into predetermined behaviors. Look at any society and you will see that there have always been areas of our lives that we wish to keep private. And balanced against those desires are the desires of other people. So with every generation (and now with every technology change), the dance of “personal privacy” and “group management” is renewed.

As the technology used for surveillance has matured, the tools for ensuring privacy have also changed. And the methods for ensuring privacy today have drastically changed from the tools used even a few years ago. And if history is a good predictor of the future, then we can and should expect that we must continually sharpen our tools for privacy – even as our “adversaries” are sharpening their tools of surveillance. Bottom Line: The process of maintaining our privacy is subject to continuous threat and must be handled in a model akin to continuous process improvement. So let’s start accepting the need for continuous privacy improvement.

Tackling Your Adversaries – One At A Time

If you look at the state of surveillance, you probably are fatigued by the constant fight to maintain your privacy. I know that I am perpetually fatigued. Every time that you harden your defenses, new threats emerge. And the process of determining your threats and your risks seems to be never-ending. And in truth, it really is never-ending. So how do you tackle such a problem? I do it systematically.

As an academic (and lifetime) debater – as well as a trained enterprise architect – I continually assess the current state. That assessment involves the following activities:

  • Specify what the situation is at the present moment.
  • Assess the upsides and downsides of the current situation.
  • Identify those things that are the root causes of the current situation.
  • Outline what kind of future state (or target state) would be preferable.
  • Determine the “gaps” between the current and future states.
  • Develop a plan to address those gaps (and their underlying problems).

And there are many ways to build plans. Some folks love the total replacement model. And while this is feasible for some projects, it is rarely practical for our personal lives. [Note: There are times when threats do require a total transformation. But they are the exception and not the general rule.] Since privacy is such a fundamental part of our lives, we must recognize that changes to our privacy posture must be made incrementally – and continuously. Consequently, we must understand the big picture and then attack in small and continuous ways. In military terms, you want to avoid multi-front campaigns at all costs. Both Napoleon and Hitler ignored this advice. And they lost accordingly.

My Current State – And My Problems

I embarked on my journey towards intentional privacy a few years ago. I’ve given dozens of talks about privacy and security to both IT teams and to personal acquaintances. And I’ve made it a point to chronicle my personal travails along my path to a more private life. But in order to improve, I needed to assess what I’ve done – and what remains to be done.

So here goes…

Over the past two years, I’ve switched my primary email provider. I’ve changed my search providers and my browsers – multiple times. And I’ve even switched from Windows to Linux. But my transformation has always been one step away from its completion.

The Next (to Last) Step: De-googling

This year, I decided to address the elephant in the room: I decided to take a radical step towards removing Google from my life. I’ve been using Google products for almost half of my professional life. Even though I knew that Google was one of the largest threat actors in my ecosystem, I still held on to a Google lifeline. Specifically, I was still using a phone based upon Google’s ecosystem. [Note: I did not say Android. Android began as a Linux-based mobile OS that Google bought and transformed into a vehicle for data collection and advertising delivery.]

I had retained my Google foothold because I had some key investments that I was unwilling to relinquish. The first of these was a Google Voice number that had been at the heart of my personal life (and my business identity). That number was coupled with my personal Google email identity. It was the anchor of hundreds of accounts. And it was in the address books of hundreds of friends, relatives, colleagues, customers, and potential customers.

Nevertheless, the advantages of keeping a personal Google account were finally outweighed by my firm realization that Google wasn’t giving me an account for free; Google was “giving” me an account to optimize their advertising delivery. Or stated differently, I was willing to sell unfettered access to myself as long as I didn’t mind relinquishing any right to privacy. And after over fifteen years with the same account, I was finally ready to reclaim my right to privacy.

Too Many Options Can Lead To Inaction

I had already taken some steps to eliminate much of the Google stranglehold on my identity. But they still had the linchpins:

  • I still had a personal Google account, and
  • Google had unfettered access to my mobile computing platform.

So I had to break the connection from myself to my phone. I carefully considered the options that were available to me.

  1. I could switch to an iPhone. Without getting too detailed, I rejected this option as it was simply trading one master for another one. Yes, I had reason to believe that Apple was “less” invasive than Google. But Google was “less” invasive at one point in time. So I rejected trading one for another.
  2. I could install a different version of Android on my current phone. While I have done this in the past, I was not able to do this with my current phone. I had bought a Samsung Galaxy S8+ three years ago. And when I left Sprint for the second time (due to the impending merger), I kept the phone. But this phone was based upon the Qualcomm Snapdragon 835. Consequently, the phone had a locked bootloader. And Qualcomm has never relented and unlocked the bootloader. So I cannot flash a new ROM (like LineageOS) onto this phone.
  3. I could install a different version of Android on a new phone. This option had some merit – at the cost of purchasing new phone hardware. I could certainly buy a new (or used) phone that would support GrapheneOS or LineageOS. But during these austere times (when consulting contracts are sparse), I will not relinquish any coin of the realm to buy back my privacy. And buying a Pixel sounds more like paying a ransomware demand than buying something of value.
  4. I could take what I had and live with it. Yes, this is the default option. And while I dithered over comparisons, this WAS what I did for over a year. After all, it fell into the adage that if it isn’t broken, then why fix it? But such defaults never last – at least, not for me.
  5. I could use the current phone and take the incremental next step in using a phone with a locked bootloader: I could eliminate the Google bits by eliminating the Google account and by uninstalling (and/or disabling) Google, Samsung, and T-Mobile apps using the Android Debug Bridge (a.k.a., adb).

I had previously decided to de-google my phone before my birthday (in July). So once Independence Day came and went, I got serious about de-googling my phone.

The Road Less Taken

Of all of the options available to me, I landed on the one that cost the least amount of my money but required the most investment of my personal time. So I researched many different lists of Google apps (and frameworks) on the Samsung Galaxy S8+. I first disabled the apps that I had identified. Then I used a tool available on the Google Play Store called Package Disabler Pro. I have used this tool before. So I used it again to identify those apps that I could readily disable. By doing this, I could determine the full impact of deleting some of these packages – before I actually deleted them. Once I had developed a good list and had validated that the phone would still operate, I made my first attempt.
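
To give a flavor of what that attempt looked like, here is a minimal adb sketch. The package names are real Google components, but the exact list varies by phone, carrier, and Android version – so treat this as an illustration rather than a recipe:

```sh
# from a PC, with USB debugging enabled on the phone:
# list the installed packages that mention 'google'
adb shell pm list packages | grep google

# disable a package for the current user (no root required)
adb shell pm disable-user --user 0 com.google.android.gms

# or remove it for the current user (survives until a factory reset)
adb shell pm uninstall -k --user 0 com.google.android.googlequicksearchbox
```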

And as expected, I ran into a few problems. Some of them were surprises. But most of them were totally expected. Specifically, Google embeds some very good technology in the Google Play Services (gms) and Google Services Framework (gsf). And when you disable / delete these tools, a lot of apps just won’t work completely. This is especially true with notifications.

I also found out that there were some key multimedia messaging services (MMS) capabilities that I was using without realizing it. So when I deleted these MMS tools, I had trouble with some of my routine multi-recipient messages. I solved this by simply re-installing those pieces of software. [Note: If that had not worked, then I was ready to re-flash to a baseline T-Mobile ROM. So I had multiple fallback plans. Fortunately, the re-installation solved the biggest problem.]

Bottom Line

After planning for the eventual elimination of my Google dependence, I finally took the necessary last step towards a more private life; I successfully de-googled my phone – and my personal life. Do I still have some interaction with Google? Of course I do. But those interactions are far less substantial, far more manageable, and far more private. At the same time, I have eliminated a large number of Samsung and T-Mobile tracking tools. So my continuous privacy improvement process (i.e., my intentional privacy improvements) has resulted in a more desirable collaboration between myself and my technology partners.

Unexpected Changes Are Common For Most IT Teams

Nginx Raided

When is it time to consider your next infrastructure change? Sometimes, you have a chance to plan for such changes. Other times, the imperative of such a change may be thrust upon you. While never welcome, unexpected changes are a normal part of the IT experience. But how many of us have had to deal with intellectual property issues that resulted in a police raid?

Last month, we learned that Russian authorities raided the Moscow offices of Nginx. Apparently, another Russian company (i.e., the Rambler Group) claimed that the intellectual property behind nginx belongs to them. So while hundreds of thousands of sites have been using nginx under the apparent misconception that nginx was open source, the Russian police have played a trump card on all of us. Whether the code belongs to Rambler or to Nginx/F5 is unclear. But what is known is altogether clear and inescapable: the long-term future of nginx is now in jeopardy.

The Search For Alternatives

Whenever I’m confronted with any kind of unexpected change, I start to perform my normal analysis: identification of the problem, validation of the inherency / causality of the problem, and an assessment of all of the alternatives. In this case, the political instability of another country has resulted in a dramatically increased risk to the core infrastructure of many of our clients. The long-term consequences of these risks could be dramatic. Fortunately, a number of alternatives are available.

First, you could always stand pat and wait to see what happens. After all, it costs little (or nothing) to keep the code running. And this raid may just be a tempest in a teapot. The code is still available. And people could certainly fork the code and extend the current code based upon an established baseline. Unfortunately, this alternative probably won’t address the core problem: IP litigation can take a long time to reach a final outcome. And international litigation can take even longer. So the probability that the current code base (and any derivatives) could be in sustained jeopardy is relatively high. And while people are fighting over ownership, very few new developers will stake their reputation on a shaky foundation. So the safe bet is that the nginx code base will remain static for the foreseeable future.

Second, you could build your own solution. You and your team could take it upon yourselves to construct a purpose-built reverse proxy. Such an effort would result in a solution that meets all of your organization’s needs. Of course, it can be an unnecessarily expensive venture that might (or might not) deliver a solution when you need one. And if you need a solution right now, then building your own solution is probably out of the question.

Nevertheless, you could always speed up the process by hiring an “expert” to code a solution to your specific needs. Again, this will take time. Realistically, building a custom solution is only necessary if you want to maintain a competitive advantage over other people and organizations. So if you need something that is generic and that already exists in the market, then building it makes little (or no) sense.

Third, you could assay the field and determine if an alternative already exists. And in the case of reverse proxies, there are several alternatives that you could consider. The most notable of these alternatives is the traefik (pronounced “traffic”) reverse proxy.

Like nginx, traefik can be (and usually is) implemented as a micro-service. It is available on GitHub and it can be downloaded (and run) from Docker Hub (https://hub.docker.com). We’ve been eyeing traefik for quite some time now. It has been gaining some serious traction for both personal and commercial uses. Consequently, traefik has been in our roadmap as a possible future path.

What We’re Building

Once the news broke concerning the raid on the Moscow offices of Nginx, we decided to build some prototypes using traefik. Like many other groups, we were using nginx. And like many other groups, we wanted to get ahead of the wave and start our own migration to traefik. So over the past few days, we’ve worked with the code and put together a few prototypes for its use.

Our first prototype is a home entertainment complex. We mashed together Plex, LetsEncrypt, MariaDB, and a few other containers to build a nifty little home entertainment stack. We also built a variation using Jellyfin – for those who don’t want to support the closed-source Plex code base.

While that prototype was fun, we only learned so much from its assembly. So we decided to build a Nextcloud-based document repository. This document server uses traefik, nextcloud, postgresql, redis, and a bitwarden_rs instance. The result is something that we are labeling the LoboStrategies Document repository (i.e., the LSD repo). And yes, we do think that it is trippy.
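
To show the shape of such a stack, here is a heavily trimmed compose sketch. It assumes traefik v2 label syntax, the hostname is a placeholder, and the real repo also wires in postgresql, redis, and bitwarden_rs:

```yaml
version: "3.7"

services:
  traefik:
    image: traefik:v2.1
    command:
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
    ports:
      - "80:80"
    volumes:
      # traefik watches the Docker socket and routes to labeled containers
      - /var/run/docker.sock:/var/run/docker.sock:ro

  nextcloud:
    image: nextcloud
    labels:
      # route requests for this (placeholder) hostname to this container
      - "traefik.http.routers.nextcloud.rule=Host(`docs.example.com`)"
    volumes:
      - nextcloud_data:/var/www/html

volumes:
  nextcloud_data:
```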

Bottom Line

Change is fun. And when you are planning the changes, you can mix vision and fun into something marvelous. But sometimes, you are forced to respond to unexpected changes. In our case, we knew that a micro-services application router was needed. And we always knew that traefik could very well be the future code base for some of our products/designs. But when Rambler Group (and the Moscow police) threatened nginx, we were already a few steps ahead. So we simply accelerated some of our plans.

The key takeaway for us was that we had already put together a strategy. So we simply needed to build the tactical plan that implemented our vision. Because we had a long-range strategy, we were able to be far more nimble when the storm clouds came upon us.

OSINT Techniques Are Often Necessary

OSINT sources are legion.

Over the past two quarters, we’ve focused upon the technologies and practices that help to establish (and maintain) an effective privacy posture. We’ve recommended ceasing almost all personal activity on social media. But the work of ensuring personal privacy cannot end there. Our adversaries are numerous – and they counter every defensive action that we take with increasingly devastating offensive tools and techniques. While the tools of data capture are proliferating, so are the tools for data analysis. Using open source intelligence (OSINT) tools, it is possible to transform vast piles of data into meaningful and actionable chunks of information. For this reason, our company has extended its security and privacy focus to include the understanding and the use of OSINT techniques.

Start At the Beginning

For countless generations, a partner was someone that you knew. You met them. You could shake their hand. You could see their smiling face. You knew what they could do. And you probably even knew how they did it. In short, you could develop a trust-based relationship founded upon mutual knowledge and relative proximity. It is no coincidence that our spouses are also known as our ‘partners’, as we can be honest and forthcoming about our goals and desires with them. We can equitably (and even happily) share the burdens that will help us to achieve our shared goals.

But that kind of relationship is no longer the norm in modern business. Most of our partners (and providers) work with us from behind a phone or within a computer screen. We may know their work product. But we have about as much of a relationship with them as we do with those civil servants who work at the DMV.

So how can we know if we should trust an unknown partner?

A good privacy policy is an essential starting point in any relationship. But before we partner with anyone, we should know exactly how they will use any data that we share with them. So our first rule is simple: before sharing anything, we must ensure the existence of (and adherence to) a good privacy policy. No policy? No partnership. Simple, huh?

That sounds all well and good. But do you realize just how much data you share without your knowledge or explicit consent? If you want to really know the truth, read the end user license agreements (EULAs) from your providers. What you will usually find is a blanket authorization for them to use any and all data that is provided to them. This certainly includes names, physical addresses, email addresses, birth dates, mothers’ maiden names, and a variety of other data points. If you don’t believe me (or you don’t read the EULA documents which you probably click past), then just use a search engine and enter your name in the search window. There will probably be hundreds of records that pertain to you.

But if you really want to open your eyes, just dig a little deeper to find that every government document pertaining to you is a public record. And all public records are publicly indexed. So every time that you pass a toll and use your electronic pass, your location (and velocity) data is collected. And every credit card purchase that you make is logged.

Know the difference between a partner and a provider!

A partner is someone that you trust. A provider is someone that provides something to/for you. Too often, we treat providers as if they were partners. If you don’t believe that, then answer this simple question: Is Facebook a partner in your online universe? Or are they just someone who seeks to use you for their click bait (and revenue)?

A partner is also someone that you know. If you don’t know them, they are not a partner. If you don’t implicitly trust them, then why are you sharing so much of your life with them?

Investigate And Evaluate Every Potential Partner!

If you really need a partner to work with and you don’t already trust someone to do the work, then how do you determine whether someone is worth trusting? I would tell you to use the words of former President Ronald Reagan as a guide: trust but verify. And how do you verify a potential partner? You learn about them. You investigate them. You speak with people that know them. In short, you let their past actions be a guide to how they will make future decisions. And for the casual investigation, you should probably start using OSINT techniques to assess your partner candidates.

What are OSINT techniques?

According to the SecurityTrails blog, “Open source intelligence (OSINT) is information collected from public sources such as those available on the Internet, although the term isn’t strictly limited to the internet, but rather means all publicly available sources.” The key is that OSINT is comprised of readily available intelligence data. So sites like SecurityTrails and Michael Bazzell’s IntelTechniques are fantastic sources for tools and techniques that can collect immense volumes of OSINT data and then reduce it into usable information.

So what is the cost of entry?

OSINT techniques can be used with little to no cost. As a security researcher, you need a reasonable laptop (with sufficient memory) in order to use tools like Maltego. And most of the OSINT tools can run either on Kali Linux or on Buscador. While some sources of data are free, some of the best sources do require an active subscription. And the software is almost always open source (and hence readily available). So for a few hundred dollars, you can start doing some pretty sophisticated OSINT investigations.
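
As a concrete starting point, some of the no-cost lookups are just standard command-line tools. The domain below is a placeholder, and theHarvester is one of the OSINT tools bundled with Kali:

```sh
# who registered a domain, and when
whois example.com

# what infrastructure sits behind it
dig +short example.com A
dig +short example.com MX

# harvest publicly indexed email addresses and hosts for a domain
theHarvester -d example.com -b google
```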

Protection Against OSINT Investigations

OSINT techniques are amazing – when you use them to conduct an investigation. But they can be positively terrifying when you are the subject of such an investigation. So how can you limit your exposure from potential OSINT investigations?

One of the simplest steps that you can take is to use an operating system designed to protect your privacy. As noted previously, we recommend the use of Linux as a foundation. Further, we recommend using Qubes OS for most of your public ‘surfing’ needs. [We also recommend TAILS on a USB key whenever you are using communal computers.]

Using OSINT To Determine Your Personal Risk

While you can minimize your future exposure to investigations, you first need to determine just how long a shadow you currently cast. The best means of assessing that shadow is to use OSINT tools and techniques to assess yourself. A simple Google search told me a lot about my career. Of course, much of this was easily culled from LinkedIn. But it was nice to see that a simple name search highlighted important (and positive) things that I’ve accomplished.

And then I started to use Maltego to find out about myself. I won’t go into too much detail. But the information that I could easily unearth was altogether startling. For example, I easily found out about past property holdings – and past legal entanglements related to a family member. There was nothing too fancy in my recorded past. But the discouraging part was that I was able to find all of these things with little or no effort.

I had hoped that discovering this stuff would be like the effort my wife put into unearthing our ancestral heritage: difficult and time-consuming. But it wasn’t. I’m sure that it would take some serious digging to find anything that is intentionally hidden. But it takes little or no effort to find some privileged information. And the keys to unlocking these doors are the simple pieces of data that we so easily share.

Clean Up Your Breadcrumbs

Like the trail left by the children in the fairy tale, a trail of breadcrumbs can be followed. So if you want to be immune from casual and superficial searches, then you need to take the information that is casually available and start to clean it up. With each catalogued disclosure, you can contact the data source and request that the data be obscured and not disclosed. With enough diligence, it is possible to clean up the info that you’ve casually strewn in your online wake. And if the task seems altogether too daunting, there are companies (and individuals) who will gladly assist you in your efforts to minimize your online footprint.

Bottom Line

As we use the internet, we invariably drop all sorts of breadcrumbs. And these breadcrumbs can be used for many things. On the innocuous end of the scale, vendors can target you with ads that you don’t want to see. But at the other end of the scale is the opportunity to leverage your past in order to redirect your future. That sounds almost harmless when stated so blandly. So let’s call a spade a spade: there is plenty of information out there that can be used to hold your data hostage and to “influence” (i.e., extort) you. But if you use OSINT techniques to your advantage, then you can identify your risks and you can limit your vulnerabilities. And the good news is that it will only cost you a few shekels – while doing nothing could cost you thousands of shekels.

How To Solve SSD Longevity Challenges

There are lots of reasons to use SSD storage devices. SSD devices are lightning fast. You don’t have to wait for the drive to spin up a thin sheet of metal (or polymer). You don’t have to wait for a drive head to get properly positioned over the right physical location on the drive. [Note: This was a real problem with legacy disk storage – until EMC proved that cache is the best friend that any spinning media could have.] Today, solid state storage is also demonstrably smaller than any spinning storage media. But in the past few years, SSD longevity has become a serious concern.

Background

Many of us carry a few of these devices with us wherever we go. They are a very durable form of storage. You can drop a 250GB thumb drive (or SSD) into your pocket and be confident that your storage will be unaffected by the motion. If you did the same with a small hard disk, then you might find that data had been lost due to platter and/or R/W head damage.

Similarly, the speed, power, and thermal properties make these devices a fantastic inclusion in any mobile platform – whether it be a mobile phone, a tablet, or even a laptop. In fact, we just added SSD devices to a number of our systems. With these devices, we have exceptionally good multi-boot options at our disposal. For my personal system, I can boot to the laptop’s main drive (to run an Ubuntu 19.04 system) or I can boot to an external, USB-attached SSD drive where I have Qubes 4.0 installed.

Whether you want fast data transfer speeds, reduced power needs, or a reduced physical footprint, SSD storage is an excellent solution. But it is not without its own drawbacks.

Disadvantages

No good solution comes without a few drawbacks. SSD is no exception. And the two real drawbacks are SSD cost and SSD longevity. The cost problems are real. But they are diminishing over time. As more phones are coming with additional storage (e.g., 128GB – 256GB of solid state storage on recent flagship phones), the chip manufacturers have responded with new fabrication facilities. But even more importantly, there is now substantial supply competition. And increased supply necessarily results in price reductions.

Even more importantly, device construction is becoming less complex. Manufacturers can stuff an enclosure with power, thermal flow control, media, rotational controls (e.g., stepper motors, servos), and an assortment of programmable circuits. Or manufacturers can just put power and circuits into a chip (or chip array). For things like laptops, this design streamlining is allowing vendors to swap spinning platters for additional antenna arrays. The result of this is inevitable. Manufacturing is less complex. Integration costs (and testing costs) are also less. This means that the unit costs of manufacturing are declining.

Taken together, increased supply and decreased costs have bent the production function. So SSD is an evolutionary technology that is rapidly displacing spinning media. But there is still one key disadvantage: SSD longevity.

SSD Longevity

In the late eighties, the floppy disk was replaced by optical media. The floppy (or rigid floppy) was replaced by the CD-ROM. In the nineties, the CD-ROM gave way to the DVD-ROM. But in both of these transitions, the successor technology had superior durability and longevity. That is not the case for SSD storage. If you were to treat an EEPROM like a CD-ROM or DVD-ROM, it would probably last for 10+ years. But the cost per write would be immense.

Due to its current costs, no one is using SSD devices for WORM (write once, read many) storage. These devices are just too costly to be written as an analog to tape storage. Instead, SSDs are being used for re-writable storage. And this is where the real issue arises. As you re-write data (via electrical erasure and new writing), the specific physical location in the chip becomes somewhat unstable. After numerous cycles, this location can become unusable. So manufacturers are now publishing the number of program / erase cycles (i.e., P/E cycles) that their devices are rated to deliver.
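
On Linux, you can watch this wear directly – assuming that smartmontools is installed. The exact attribute names vary by vendor, so read your own drive’s output accordingly:

```sh
# SATA SSD: look for wear attributes such as Wear_Leveling_Count
# or Media_Wearout_Indicator in the attribute table
sudo smartctl -A /dev/sda

# NVMe SSD: the standardized health log reports "Percentage Used"
sudo smartctl -A /dev/nvme0
```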

But is there a real risk of exhausting the re-write potential of your SSD device? Yes, there is. But with every new generation of chips, the probability of failure is declining. Nevertheless, probabilities are not your biggest concern. Most CIOs should be concerned with risk. If your data is critical, then the risk is real – regardless of the probability of failure.

Technology Is Not The Answer

Most technologists focus on technology. Most CIOs focus on cost / benefit or risk / reward. While scientific and engineering advances will decrease the probability of SSD failure, these advances won’t really affect the cost (and risks) associated with an inevitable failure. So the only real solutions are ones that mitigate a failure and minimize the cost of recovery. When a failure occurs (and it will occur), how will you recover your data?

Bypass The Problem

One of the simplest things that you can do is to limit the use of your SSD devices. That may sound strange. But consider this. When a failure occurs, your system (OS and device drivers) will mark a “sector” as bad and write the data to an alternate location. If such a location exists, then you continue ahead without incurring any real impact.

The practical upshot of this is that you should always limit how much data is written to the device in order to ensure that there is ample space for rewriting data to a known “good” sector. Personally, I’m risk averse. So I usually recommend that you limit SSD usage to ~50% of total space. Some people will settle for only ~30% of unused space. But I would only recommend that smaller cushion if your SSD device is rated for higher P/E cycles.

Data Backup and Recovery Processes

For most people and most organizations, it takes a lot to recover from a failure. And this is true because most organizations do not have a comprehensive backup and recovery program in place. In case of an SSD failure, you need to have good backups. And you should continue to perform these backups until the cost of making backups exceeds the cost of recovering from a failure.

For a homeowner who has a bunch of Raspberry Pis running control systems, the cost of doing backups is minimal. You should have good backups for every specific control system that you operate. For our customers, we recommend that routine backups be conducted for every instance of Home Assistant, openHAB, and any other control system that the customer operates.
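
For a Raspberry Pi control system, that routine backup can be as small as a single cron entry. The host and paths below are placeholders, and key-based ssh is assumed:

```sh
# /etc/cron.d/hass-backup -- nightly copy of a Home Assistant
# configuration directory to another machine
0 2 * * * pi rsync -a --delete /home/pi/.homeassistant/ backup-host:/backups/hass/
```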

For small businesses, we recommend that backup and recovery services be negotiated into any management contract that you have with technology providers. If you have no such contracts, then you must make sure that your “in-house” IT professionals take the job of backup and recovery very seriously.

Of course, we also recommend that there be appropriate asset management, change management, and configuration management protocols in place. While not necessary in a home, these are essential for any and all businesses.

Bottom Line

SSD devices will be part of your IT arsenal. In fact, they probably already are a part of your portfolio – whether you know it or not. And while SSD devices are becoming less costly and more ubiquitous, they are not the same as HDD technology. Their advantages come at a cost: SSD longevity. SSD devices have a higher probability of failure than already-established storage technologies. Therefore, make sure that you have processes in place to minimize the impact of failures and to minimize the cost of conducting a recovery.

Disintegration and Compartmentalization: Necessary Best Practices

Safety deposit boxes in a bank vault

Several months ago, I wrote about my never-ending privacy story. Since then, I’ve given numerous presentations about security and personal privacy. In one of those presentations, I talked about how using personal clouds (e.g., Nextcloud) could limit your exposure to those who offer you their “free” services in exchange for your personal data. But there has always been an elephant in the room: all of us – myself included – want a simple and easy desktop experience. And most people will trade almost anything for that experience. But those carefree times when everything is “free” and everything is “safe” are now disappearing. So to kick my privacy efforts up another notch, I’ve begun the process of online compartmentalization.

As you read that word, many of you might be thinking about the psychological consequences of compartmentalizing your life. And almost every psychologist will tell you that breaking your life down into smaller fragments separated by impenetrable walls can be unhealthy. These self-imposed walls separate your family life from your work life and your faith life. Some people keep all sorts of separate personalities locked up in secure closets. And this can be a terrible burden.

But when it comes to privacy and security, you can no longer afford to keep all of your eggs in one basket. In fact, compartmentalization is now becoming an altogether mandatory part of a “connected” life. You should not let data from your home life be accessible to actors in your work life. And it would be wise to dis-integrate your work life from your home life.

The Technologies of Disintegration

In order to protect the integrity of the various roles in your life, you need to isolate data. But that is increasingly difficult. For example, most businesses ask you to be “on call” twenty-four hours a day, seven days a week. But they don’t want to pay for a separate phone. And they want to ensure that any personal equipment does not exfiltrate company data and/or intellectual property. So most companies reserve the right to access all of your phone’s capabilities (and data) in order to protect any of their data which might be on the phone.

You can easily see the problems with this example. If you are considering alternate employment, it might be unwise to let your current employer have unfettered access to your email and instant messages with potential future employers. Fortunately, there are technologies that can help you build the walls that you might want (or need). These include: virtualization, containers, and secure cloud services.

Step One: Use Application Virtualization

We are victims of a culture that shares way too much information. For many of us, we willingly share data with companies that we shouldn’t trust. We do this so that we can share even more personal data with friends who really aren’t our friends.

And we count upon our applications to enable this kind of sharing. We unconsciously (and indiscriminately) copy and paste data between apps. Of course, this allows bad actors to exploit data sharing as a channel for data exfiltration or data corruption.

But if we want to protect ourselves, we need to erect barriers between apps. And the latest means of erecting such barriers is to exploit containers. Whether we use snap or flatpak, we are adding an execution layer that seeks to impose barriers. And the same thing is true for the other darling of micro-services: Docker. Like the app management tools provided by Linux distro teams, the folks at Docker are trying to standardize application execution and enable application isolation.

Among other activities this summer, I’ve invested quite a bit of personal time into Docker, docker-compose, and a variety of support apps. And I now use Docker for Plex, Let’s Encrypt, most web servers (and proxies), the TICK stack (i.e., Telegraf, InfluxDB, Chronograf, and Kapacitor), and a variety of home automation applications.
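
To make that concrete, here is roughly what each layer looks like in practice. The app, paths, and timezone below are placeholders; the Plex image shown is the commonly used official one:

```sh
# a flatpak app runs against a sandboxed runtime rather than bare metal
flatpak install -y flathub org.videolan.VLC

# a Dockerized service gets its own filesystem, mounts, and environment
docker run -d --name plex \
  --network=host \
  -e TZ=America/Chicago \
  -v /srv/plex/config:/config \
  -v /srv/media:/data \
  plexinc/pms-docker
```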

Step Two: Use A Secure OS

Nevertheless, sometimes you need more than just a good application manager. In order to effectively use compartmentalization as a defense, you need to get onto a more secure OS. Most security experts will tell you that there are many platforms that are intrinsically more secure than Windows. Yes, you can harden Windows. I know. I’ve done it for myself and for others. At the same time, you need to use a platform that is not built by someone who makes money off of your identity (e.g., Apple).

Earlier this summer, I finally switched to a Linux-only infrastructure. All of my Windows servers are gone. And all of my Windows desktops are now Linux desktops. I have rooted all of the phones that I can and replaced their OS with one that is no longer dependent upon Google services.

Step Three: Use System Virtualization

While you may run your apps in virtual environments and/or containers, you probably need more compartmentalization. Yes, you should isolate your apps. But you also need to isolate systems from one another. Indeed, there are times when you need more than just a secure app. You need a secure stack.

Over the past few months, I’ve started using virtual machines to isolate applications that are accessible from the Internet. I do this so that I can minimize the damage that can be done from any single app to the OS that it runs upon. By adding system isolation in addition to app isolation, I have increased the security and availability of my customer applications.

Step Four: Use The Most Secure Platform That You Can Afford

All of us can be more secure. But for some of us, the cost of maximum security must be paid – either in coin of the realm or in tokens of inconvenience. For me, my most important resource is my time. So I carefully choose each and every experiment that I undertake. And this past weekend, I finally chose to take the leap – and I finally added Qubes OS 4.0 to my core laptop.

The process of moving to Qubes was frustrating. I had just reclaimed a 500GB external SSD drive. And it took about four (4) hours to get Qubes installed. It’s really not that hard. But special partitioning and formatting were required in order to write to the drive. In the end, I had to write the boot image onto a raw partition on a thumb drive. I then had to update grub on my internal drive so that I could multi-boot. Finally, I re-partitioned the SSD and wrote Qubes to the external drive. After completing the installation, I can now boot to either my internal Ubuntu 19.04 system or to my Qubes OS 4.0 system.

Step Five: Consciously Choose Your Threshold of Inconvenience

I must now learn how to use my “reasonably secure OS” to perform my day-to-day activities. Last night, I spent a few hours setting up my entertainment / streaming apps. [Note: Yes, they are important. I really do like to listen to music as I write.] And for what it’s worth, I am now writing this post from my Qubes OS system. It took some time to set up NoScript properly. But once I did that, I’ve had few problems with this blog post.

Alright, that’s not altogether true. The simple process of sharing files between processes is a tad more complex. For example, when taking a screenshot of the entire desktop, the file is stored in the dom0 (i.e., master domain) file system. So I had to learn how to copy files to/from dom0. But once I figured that out, I realized that the process isn’t nearly as hard as it had originally seemed.
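
For the curious, the dom0 side of that copy is a one-liner. The qube name ("work") is just whatever you have called your working VM:

```sh
# run from a dom0 terminal: push the screenshot into an AppVM
# (it lands in ~/QubesIncoming/dom0/ inside the target qube)
qvm-copy-to-vm work ~/screenshot.png
```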

Takeaways

I’ve finally addressed some structural insecurities in how I use my computers – both at work and at home.

  • We moved to a Linux-based system.
  • The team migrated to containers both for casual (desktop) apps and for more service-oriented applications.
  • Our IT team moved key services onto virtual machines that could be isolated from less disciplined processes.
  • Finally, I converted my primary laptop to an even more secure OS (i.e., Qubes OS) – one that features compartmentalization and maximum isolation.

Do you need to do all of these things? I won’t answer that for you. But as for myself, I needed to become more secure. So I took those steps that I needed to take in order to become safer and to secure my private life from public scrutiny.

Is Transitive Trust A Worthwhile Gamble?

When I started to manage Windows systems, it was important to understand the definition of ‘transitive trust’. For those not familiar with the technical term, here is the ‘classic’ definition:

Transitive trust is a two-way relationship automatically created between parent and child domains in a Microsoft Active Directory forest. When a new domain is created, it shares resources with its parent domain by default, enabling an authenticated user to access resources in both the child and parent.

But this dry definition misses the real point. A transitive trust relationship (of any kind) is a relationship where you trust some ‘third-party’ because someone that you do trust also trusts that same ‘third-party’. This definition is also rather dry. But let’s look at an example. My customers (hopefully) trust me. And if they trust me enough, then they also trust my choices concerning other groups that help me to deliver my services to them. In short, they transitively trust my provider network because they trust me.
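
For admins who want to see these relationships on a real forest, the ActiveDirectory PowerShell module can enumerate them. This is just a sketch of the lookup, not a full trust audit:

```powershell
# list every trust the current domain participates in, including
# whether it is transitive and in which direction it flows
Import-Module ActiveDirectory
Get-ADTrust -Filter * |
    Select-Object Name, Direction, TrustType, ForestTransitive
```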

That all sounds fine. But what happens if your suppliers break your trust? Should your customers stop trusting you? Recently, this very situation occurred between Capital One, their customers, and some third-party technology providers (like Amazon and their AWS platform).

Trust: Hard to Earn – Easy to Lose

Unfortunately, the Amazon AWS technology platform was compromised. So Capital One should legitimately stop trusting Amazon (and its AWS platform). This should remain true until Amazon verifiably addresses the fundamental causes of this disastrous breach. But what should Capital One’s customers do? [Note: I must disclose that I am a Capital One customer. Therefore, I may be one of their disgruntled customers.]

Most people will blame Capital One. Some will blame them for a lack of technical competence. And that is reasonable as Capital One is reaping financial benefits from their customers and from their supplier network. Many other people will blame the hacker(s). It’s hard not to fume when you realize that base individuals are willing to take advantage of you solely for their own benefit. Unfortunately, only a few people will realize that the problem is far more vexing.

Fundamentally, Capital One trusted a third-party to deliver services that are intrinsic to their core business. Specifically, Capital One offered a trust relationship to their customers. And their customers accepted that offer. Then Capital One chose to use an external platform simply to cut corners and/or deliver features that they were unable to deliver on their own. And apparently that third-party was less capable than Capital One assumed.

Regaining Trust

When a friend or colleague breaks your trust, you are wounded. And in addition to this emotional response, you probably take stock of whether to continue that relationship. You undoubtedly perform an internal risk/reward calculation. And then you add the emotional element of whether this person would act in a more trustworthy fashion in the future. If our relationships with companies were less intimate, then most people would simply jettison their failed provider. But since we build relationships on a more personal footing, most people will want to give their friend (or their friendly neighborhood Bailey Building & Loan) the benefit of the doubt.

So what should Capital One do? First, they must accept responsibility for their error in judgment. Second, they must pay for the damages that they have caused. [Note: Behind the scenes, they must bring the hammer to their supplier.] Third, they must rigorously assess what really led to these problems. And fourth, they must take positive (and irreversible) steps to resolve the root cause of this matter.

Of course, the last piece is the hardest. Oftentimes, the root cause is difficult to sort out given all of the silt that was stirred up in the delta when the hurricane passed through. Some people will blame the Capital One culture. And there is merit to this charge. After all, the company did trust others to protect the assets of their customers. As a bank, the fundamental job is to protect customer assets. And only when that is done should the bank use the entrusted funds in order to generate a shared profit for their owners (i.e., shareholders) and their customers.

Trust – But Verify

At the height of the Cold War, President Ronald Reagan exhorted the nation to trust – but then to verify the claims of a long-standing adversary. In the case of Capital One, we should do the very same thing. We should trust them to act in their own selfish interests – because serving our interests is the only way that they can achieve their own.

That means that we must be part of a robust and two-way dialog with Capital One and their leadership. Will Capital One be big enough to do this? That’s hard to say. But if they don’t, they will never be able to buy back our trust.

Finally, we have to be bold enough to seek verification. As President Reagan said, “You can’t just say ‘trust me’. Trust must be earned.”

Wire-to-Wire Technology Adoption Isn’t The Only Option

The winner surges at the right time.

The annual “Run For The Roses” horse race has been held since 1875. In that time, there have been only 22 wire-to-wire race leaders/winners. Indeed, simple statistics favor the jockey and horse who can seize the lead at the right moment. For the strongest horses, that may be the start. But for most horses, the jockey will launch his steed forward when it best suits the horse and its racing profile. This simple (yet complicated) approach is also true for technology adoption.

Docker And Early Technology Adoption

Five years ago, Docker exploded onto the IT scene. Originally, Docker was adopted exclusively by tech-savvy companies. And some of these early adopters have taken keen advantage of their foresight. But like horses that leap too soon, many companies have flashed into existence – and then been extinguished by an inability to deliver on their promised value.

Docker adoption has moved from large enterprises to the boutique service industry.

Now that Docker is over five years old, how many companies are adopting it? According to some surveys, Docker use in the marketplace is substantially over 25%. And I would be willing to bet that if you include businesses playing with Docker, then the number is probably more than 40%. But when you consider that five years is a normal budget/planning horizon, you must expect that this number will do nothing but increase in the next few years. One thing is certain: the majority of applications in use within businesses are not yet containerized.

So there is still time to take advantage of Docker (and containers). If you haven’t yet jumped on board, then it’s time to get into the water. And if you are already invested in containers, then it’s time to double-down and accelerate your investment. [By the way, this is the “stay competitive” strategy. The only way to truly leap ahead is to use Docker (and other containers) in unique and innovative ways.]

Technology Adoption At Home

Adoption of containers at home is still nascent. Yes, there have been a few notable exceptions. Specifically, one of the very best uses of containers is the Hass.io infrastructure that can be used to host Home Assistant on a Raspberry Pi. Now that the new Raspberry Pi 4 is generally available, it is incredibly simple (and adorably cheap) to learn about – and economically exploit – containers at home.
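
If you want to try this at home, the containerized route is a single command once Docker is on the Pi. The config path below is a placeholder, and the image tag reflects the commonly published one:

```sh
# run Home Assistant in a container on a Raspberry Pi
docker run -d --name homeassistant \
  --network=host \
  -v /srv/hass/config:/config \
  homeassistant/home-assistant:stable
```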

My Personal Experiences
Containers can and should be used at home.

I’ve been using Docker at home for over a year. And now that I’ve switched all of my traditional computing platforms from Windows to Linux, I’m using Docker (and other containers) for nearly all of my personal and professional platforms. And this past week, I finally cut over to Docker for all of my web-based systems. My new Docker infrastructure includes numerous images (and containers). I have a few containers for data collection (e.g., glances, portainer). I have moved my personal entertainment systems to Docker containers (e.g., plex, tautulli). And I’ve put some infrastructure in place for future containers (e.g., traefik, watchtower). And in the upcoming months, I’ll be moving my entire TICK stack into Docker containers.
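
As one example from that list, watchtower is itself just another container pointed at the Docker socket. This is the standard invocation, shown here as a sketch:

```sh
# watchtower polls for new image versions and restarts updated containers
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```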

Bottom Line

If you haven’t started to exploit containers, then it’s not too late. Your employer is probably using containers already. But they will be moving even more swiftly towards widespread adoption. Therefore, it’s not too late to jump onto the commercial bandwagon. And if you want to do something really new and innovative, I’d suggest using containers at home. You’ll learn a lot. And you may find a niche where you could build a highly scalable appliance that exploits highly containerized building blocks.

Nextcloud: Methadone For Windows Addiction?

Breaking Your Windows Addiction

Last year, we cut the cord on cable TV at our household. We’ve been using streaming services since then. And when I eliminated my dependence upon a local cable provider, a whole world of networking options opened up before our eyes. This week, I cut the cord on my personal Windows addiction. I have a feeling that I will now have access to a whole new world of computing options.

A Rocky Road

This transition has been a very hard thing for me to do. I grew up with Windows. Yes, I used CP/M on my first desktop computer. But the first business OS that I used was Windows 386 (v2.10). I did try OS/2 in the early nineties. And I have sporadically used Linux since the late nineties. But I have consistently used Windows as my “daily driver” since the early nineties.

Throughout this time, I have used all sorts of supporting services. I used Unix servers for my web services infrastructure. I used Windows servers for my business file, print, and messaging services. And for the past ten years, I’ve used Android for my mobile services. But all of these services were in support of my Windows addiction. All of my documents were accessible from my desktop. My key applications were Windows applications. And my “comfort zone” was decidedly Windows-centric.

But over the past decade, more and more of my services have been migrating either to my phone, to private cloud hardware, or to the public cloud. New capabilities in my house have all been based upon Linux as their operating platform. My storage systems are all Linux-based. My lighting systems use embedded services controlled by Linux servers/services. And almost all of my applications are now web-based. I still have a suite of desktop productivity tools (e.g., LibreOffice). But there are precious few of these kinds of apps left. Indeed, all of my mission-critical functions are now web-based.

The Rise of the Cloud

The migration to a web-based architecture has been underway since the mid-to-late nineties. But the migration to public and private clouds has only been underway for a little over a decade. I have been using public cloud services since 2005 (i.e., since the invite-only days of Gmail). I also used Google Drive and Google Play Music as soon as they became available (in beta form). And I’ve used Dropbox, Spotify, and a large variety of other cloud services. Indeed, web services and cloud hosting are now ubiquitous.

But the real transformation has come with the exploitation of private cloud technologies. In our office (and at our home), we have always needed file and print services. We have run private file storage services principally hosted on private SAN devices. And we run IoT management services on a private cloud in our facility. [Note: This private cloud is actually a hybrid cloud, as it works with a public cloud for premises access.]

So with the complete migration to web services (and the nearly complete migration to cloud services), I only needed an excuse to address my Windows addiction.

Privacy Threats Have Forced Our Reconsideration

Over the past year, I have become even more keenly interested in the privacy (and security) of our computing platforms. I have implemented 2FA across the board. We have replaced stock router firmware with customized implementations. We’ve successfully installed (and configured) VPN concentrators for our premises. I’ve even implemented a comprehensive password manager.

But despite all of these changes (and a slew of other changes not highlighted here), I had not re-evaluated all of the service platforms that I use; I thought that I had too much invested in Windows. After all, I had Windows file servers. And I had some Windows apps that were mission-critical.

But now that all of these apps (or comparable substitute apps) exist on Linux platforms, the barriers have diminished. And even more importantly, the implementation of Nextcloud as a private cloud has almost erased any client-based need for Windows. Consequently, I finally pulled the trigger on eliminating my personal addiction to Windows.

Why Nextcloud?

For those unfamiliar with Nextcloud, let me provide a little background. Nextcloud is an open-source product suite that delivers services which were originally the province of Windows servers. Over the past two decades, the open-source community has provided specific “point” solutions for some of these services (e.g., Samba file services, OnlyOffice and Collabora productivity tools, etc.). But Nextcloud (which is a descendant of ownCloud) provides almost every service originally provided by a Windows server.

And what is the advantage of Nextcloud? That’s simple. Nextcloud is open source. So there is no server software licensing cost. More importantly, there are no client software licensing costs. [Note: You can buy “supported” versions of Nextcloud (like Univention UCS) if you need support options.] For this reason, we implemented Nextcloud so that we would no longer need to host file services on Windows servers.

Our Nextcloud implementation is fairly simple. We have one instance of the product running on an Ubuntu/Debian VM guest. [Note: Today, that guest is running on a Windows host. But that is the very last change we will make in order to completely eliminate our Windows server addiction.]
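For anyone who wants to kick the tires before committing, the official Nextcloud image on Docker Hub makes a throwaway evaluation instance nearly a one-liner. This is just a sketch (not necessarily how our own instance was built) – it uses the image’s embedded SQLite database, and a real deployment would add a proper database, a reverse proxy, and TLS:

    # A disposable Nextcloud instance for evaluation.
    # User data persists in the named volume; browse to http://localhost:8080.
    docker run -d \
      --name nextcloud \
      --restart=unless-stopped \
      -p 8080:80 \
      -v nextcloud_data:/var/www/html \
      nextcloud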

Client Independence

Once our Windows server dependence was cured, we began the client migration. My systems were the first to migrate. I am now writing this article on an Ubuntu 19.04 system running on my HP Pavilion laptop. While there are a couple of minor issues (like managing multiple displays), I am able to do all of my work on this system – especially when coupled with our Nextcloud instance.

Much of this could have been done a decade ago. But the real enabler for this transition has been the elimination of our dependence upon the Win32 API AND the elimination of our Windows server dependence.

Bottom Line

After nearly two-and-a-half decades of dependence, I am now clean and sober. My Windows addiction has been vanquished. I may still use Windows – as dictated by specific business requirements. But my default use of Microsoft Windows technologies is now at an end. As someone who compiled his first Linux desktop back in 1997, this transition is long overdue – and eminently satisfying.

IT Is Supposed To Work, Right?

IT Doesn't Have To Be A Horror Show

For many companies, the IT department is not the department of solutions. Rather, it is the department that misses deadlines, omits requirements, and frustrates the people who really make money within a company. Truth be told, most companies aren’t in the business of IT. Most companies make money because they design products, build products, deliver products, or support products. And these products are (hopefully) desired by their customers. IT exists to facilitate the real business of a company.

So why does IT fail so often? Or stated differently, why does IT believe it is being misrepresented in the boardroom?

These questions are important. In many companies, IT has adopted an air of superiority. IT leaders have sought to “rebuild” companies based upon what they believe is the “best” corporate strategy. Sometimes, the IT leaders are right. Most times, the IT leaders are deceiving themselves and possibly defrauding their stakeholders.

What Is The Purpose Of IT?

While IT can be a means of generating unique value within some companies, everyone must admit that successful IT teams take part in the routine operation of every business. IT is used for accounting and finance. IT is used for sales and marketing. IT is used for product design and product testing. IT is used for manufacturing and shipping. But in many ways, IT is now like real estate or office supplies. Every company has to have IT (and the tools and capabilities that IT delivers) if only to perform the uninspiring parts of routine operations.

This is very reminiscent of many other key technology waves throughout our history. In particular, I am reminded of the effects created by the introduction of the printing press and the introduction of double-entry bookkeeping. Both of these technologies were a means of enhancing (and accelerating) work that was already being done. In the case of movable type, the printing press replaced the people who were hand-writing scrolls and books. In the case of double-entry bookkeeping, accounts and ledgers augmented the role of simple storage vaults. Both of these technologies introduced remarkable transformations in society. And like these technologies, IT has been the source of remarkable changes.

Like the aforementioned technologies, the transformational capability of IT is highlighted in how it has replaced burdensome, tedious, and dispiriting office drudgery. IT has released office workers from the burdens of the mundane so that they can focus upon the creative and inspiring work before them.

When IT Works

IT works when its advantages are almost seamless. In a very real sense, you know that IT is working best when it is taken for granted the most. While IT wants to be part of the “main event”, the most successful projects are often the ones where IT operates like Adam Smith’s invisible hand.

The Commoditization of IT Services

When I began my career, IT was the place where the best and brightest minds worked insane hours in order to deliver the ‘next big thing’. This was true for the PC. It was true for operating systems (e.g., CP/M, Windows, OS/2, etc). It was also true for “custom-built” corporate services. In the nineties, corporations spent millions of dollars building customized ‘clones’ of ERP and CRM systems.

Today, all of that has changed. Hardware is a commodity. And software is now the ‘table stakes’ for hardware vendors – and service providers. Because of the “free software” movement of the eighties, the core of almost all systems now contains free/open components. At the same time, customers now believe that they should get both hardware and software for free. Most are willing to trade their birthright (i.e., privacy and independence) for a subscription fee.

Commodity Markets Are Challenging

If you accept the premise that IT products are now commodities, then there are a few economic consequences. Commodity markets usually have a low cost of entry. That means that there are (and will be) many competitors in any given market.

  • This is true for computer hardware. You can get great hardware for a very low price. You can get hardware from Chinese companies, from Korean companies, or from a host of other “offshore” suppliers. Even the United States still has some “onshore” fabs (e.g., Intel). But the majority of fabs are overseas. And they produce economically compelling components. It is fascinating that while chip/system designers can be found anywhere in the world, most fabs are in Asia. Bottom line: Fabs are expensive. So they succeed only through economies-of-scale. Until new computing technologies emerge (e.g., quantum computers), it will be very difficult to defeat offshore fabs that are funded by national governments.
  • It is also true for IT manpower needs. Today, you can get software services from India, China, Southeast Asia, Eastern Europe, and even South America. With millions of programmers worldwide, the job of writing mundane software is no longer differentiating. Bottom line: The labor winners will be the people and organizations that can capture a margin on commodity software labor. Alternatively, it is still possible to specialize within specific industries (e.g., healthcare, aerospace, etc.).

Making IT Matter – Again

Cheap hardware, cheap labor, and free software are making IT less specialized – especially at the component level. It is no longer possible to simply be a good analyst, a good programmer, or a good operator. Successful IT teams must be able to build comprehensive solutions from all of the available parts. Like residential architects, the successful IT leaders will know what is available in the market. They will know how to integrate standardized components into a working solution. They will know how to operate that solution in order to maximize the economic impact (of the solution) upon the business. In short, they will work with the owner to meet their functional desires. They will select the standardized components needed to meet the economic objectives of the owner. And they will know where to get the best labor to do the assembly, testing, and implementation of the solution.

Can someone make money as a specialist? Yes. But most specialists must be undisputed experts. Alternatively, you can make money by achieving economies-of-scale and operating with razor-thin margins. If you can do either, then you can make remarkable sums of money. If you can’t, then systems integration and solutions architecture may be your best avenue for success.