Maintaining Technology Currency (and Relevance)

Grasp the Future

A few months ago, I wrote an article about mobile privacy. In that article, I wrote about how every “off-the-shelf” mobile platform MUST be modified in order to ensure some modicum of privacy. I expanded upon this thought when I recently presented to the Fox Valley Computer Professionals. [A version of that presentation can be found over at SlideShare.] One of the most important themes from the presentation actually arose during the obligatory Q&A session. [By the way, the Q&A time is always the most important part of any presentation.] From this Q&A time, I realized that the single most important takeaway was the necessity of maintaining technology currency.

From a security perspective, it is essential to remain current on all elements of your infrastructure. One of the most exploited vectors in any organization is rampant inattention to software maintenance. It only takes one zero-day exploit to compromise even a meticulously maintained system. Organizations that do not remain current on their software are opening up their systems (and their customers) to exploits that were patched long ago. A decade ago, PC World highlighted the risks of operating with unpatched systems. While the numbers may have changed since that article, the fundamental lesson is still the same: technology currency is one of the most under-recognized means of hardening your systems.

The Human Factor

But technology currency is not just a matter of ensuring the continuing usability of our technology investments. It is also a matter of sustaining the value of the people within our teams. I have been involved in IT for several decades, and in that time I’ve seen many waves of change. Mainframes became Unix systems. Windows desktops became Windows servers. Application servers (regardless of their OS) became web servers. And now these same “n-tier” servers have become virtual systems running on “cloud” platforms.

Behind each wave of technology that emerged, crested, and then subsided, you will find a whole group of technology specialists who were displaced. Fortunately, most technologists are flexible. If they didn’t stay working on legacy systems, then they willingly (or unwillingly) embraced the next technology wave.

Redrawing the Boundaries of Trust

Like many technologists, I have been forced into career acrobatics with each new wave of technology. And I have complicated these transitions by switching between a variety of IT disciplines (e.g., application development, information security, capacity and performance management, configuration and change management, and IT operations). So it was not a surprise when I realized that information privacy changes were driving similar changes – for the industry and for myself.

For almost two decades, I’ve been telling people that they needed to shift to hosted (cloud) platforms. Of course, this shift meant entering into trust relationships with external service providers. But for the last four or five years, my recommendations have begun to change. I still advocate using managed service platforms. But when privacy and competitive advantages are at stake, it may be necessary to redraw the trust boundaries.

A decade ago, everyone trusted Google and Facebook to be good partners. Today, we view both of them (and many others) as self-interested members of an overly complex supply chain. So today, I am recommending that every company (and even most individuals) revisit the trust boundaries that they have with every part of their supply chain.

Moving Personal Fences

We have decided to redraw our trust boundaries in dramatic ways. First, we have decided to forego the advantages of partnering with both Facebook and Google. This was straightforward when it came to Facebook. Yes, not being on Facebook is hard. But it is eminently achievable. To that end, I am celebrating my one-year divorce from Mark & Co. Redrawing the boundaries with Google has been much harder.

Getting rid of Google has meant moving to new email services. [Note: This also meant abandoning built-in contact address books and calendaring.] It has also meant discontinuing the use of Google Apps. And on a personal level, it has meant some dramatic changes to my mobile computing platform.

Bottom Line: Moving off of the Google cloud has required the construction of an entirely new cloud platform to replace the capabilities of Google Drive/Cloud.

Nextcloud Replaces Google Cloud

We needed a platform to provide the following functions:

  1. Accessible and extensible cloud storage for both local and remote/mobile users.
  2. An integrated Contact database.
  3. An integrated Calendar database.
  4. An integrated Task database.
  5. A means of supporting WebDAV, CalDAV, and CardDAV access to the aforementioned items.

Of course, there is also a whole group of “nice-to-have” features, including:

  • Phone/location tracking
  • Mobile document scanning (and OCR)
  • Two-factor authentication

After considerable review, we decided to use Nextcloud. It provided all of the mandatory features that we required as well as all of the “nice-to-have” features. We further decided to minimize our security exposure by running this service from within a VPS running onsite (though offsite would have worked as well).
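
To give a feel for what that WebDAV access looks like in practice, here is a minimal sketch (in Python, using the requests library) that lists the files in a Nextcloud account. The host name, user, and app password are placeholders rather than our actual deployment; the /remote.php/dav/files/<user>/ path is Nextcloud’s standard WebDAV endpoint.

```python
# A minimal sketch: list files over Nextcloud's WebDAV endpoint.
# The host, user, and app password below are placeholders.
import requests
import xml.etree.ElementTree as ET

BASE = "https://cloud.example.com"        # hypothetical Nextcloud host
USER = "jane"                             # hypothetical account
APP_PASSWORD = "app-password-goes-here"   # generate one under Nextcloud security settings

# Nextcloud exposes each user's files at /remote.php/dav/files/<user>/
resp = requests.request(
    "PROPFIND",
    f"{BASE}/remote.php/dav/files/{USER}/",
    auth=(USER, APP_PASSWORD),
    headers={"Depth": "1"},               # immediate children only
)
resp.raise_for_status()

# The reply is a WebDAV multistatus XML document; print the href of each entry.
ns = {"d": "DAV:"}
for href in ET.fromstring(resp.content).findall(".//d:href", ns):
    print(href.text)
```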

Outcomes

It took several days to secure the hardware, set up the virtual infrastructure, install Nextcloud, and configure it for local and mobile access. Currently, we’re using a Nextcloud virtual “appliance” as the base for our office cloud. From this foundation, we extended the basic appliance to meet our capacity and security needs. We also installed ONLYOFFICE as an alternative to both local and cloud-based Microsoft Office products.

At this very moment, we are decoupling our phones and our systems from the Google cloud infrastructure. And as noted before, we’ve already changed our DNS infrastructure from ISP/Google resolvers to our own systems. So we are well on our way to minimizing the threat surface associated with Google services.

Of course, there is more work to do. We need to further ruggedize our services to ensure higher availability. But our dependence upon Google has been drastically reduced. And the data that Google collects from us is also reduced. Now we just have to get rid of all of the data that Google has collected from us over the past fifteen (15) years.

The Digital Economy Class

Economy Class…With Quality Touches

One of the most important things that you do as a consultant is the marketing of your expertise. You have to build a brand that screams, “I AM AN EXPERT”. At the same time, you need a brand that also proclaims, “I am savvy, suave, and not at all desperate for business opportunities.” I usually favor one of these two messages, but I need to embrace both of them. In the past few weeks, I have taken a number of steps that will amplify both messages. By assembling both spare and specialty parts in innovative ways, I hope to conduct my business in the “digital economy class”.

What do I mean when I say, “digital economy class”? It is exactly what it sounds like. When the first airlines offered transportation services, air flight was novel – and it was expensive. So the airlines offered “first class” accommodations: large (and comfortable) seats, ample storage, and fine dining. But as airlines re-focused upon mass transit goals (exploiting economies of scale to reach larger markets), seats became smaller and food became paltry – or sometimes non-existent. Today, you can get inexpensive transportation in a no-frills kind of way.

Digital Parallels

The same thing has happened in the digital economy. Consulting services have seen the same transformation as airlines. In the eighties and nineties, huge accounting firms provided “first class” services – at steep prices. And shareholders saw it in the IT department’s impressive bite out of corporate margins. Yes, you can still find big consultancies. But today’s service landscape now includes budget (commodity) services offered by offshore consultancies. And you can find specialty firms that offer the “first class” experience – but their services are limited to a specific technological niche. For example, there are countless consulting groups that specialize in security or networking or web site / content development. And there are just as many consulting firms that specialize in specific industries.

The final result is the same. As airlines reached out to a larger market, they needed to achieve economies of scale in order to maintain shareholder profits. In the same way, digital consulting firms must pursue digital efficiencies. But in the digital market, the cost of entry is very low. You don’t have to spend millions of dollars (or hours) to get into the business and compete. You just need to define your scope and focus on building a team that will deliver premier services to your targeted market.

Proper Scoping Is Essential

I have grand plans for my company. My eyesight may be poor – but my vision is unrestricted. Consequently, I wanted a team that could focus upon anything and everything. I may realize that dream – at some point. But for now, I have to narrow my scope to something more achievable. But where should I focus?

  • I can do application and web site development. So can millions of other people. And my poor eyesight does limit my ability to deliver stunning visuals. Why? Simple. I can’t appreciate visual distinctiveness as well as others.
  • I can sling code. But the nineties taught me that other people would willingly sling code for far less compensation. Yes, I can learn any programming language. I’d even put my adaptability ahead of most folks. But like a renaissance man, my breadth of knowledge and my aptitude at learning new things does not always serve me well. When someone is looking for a house painter, they don’t need the cool flourishes that I can develop just for them. Some people just want paint on a slab.
  • I can architect complex systems. Yes, hundreds of other people can do the same thing. But there are fewer competitors in this space. And if I can further narrow my scope to specific domains, I can stand out even further.

Our Current Scope

So what is our current scope / focus? For today, we are focusing upon strategic services – in the small and medium-sized business market. We can’t compete with the mega-consultancies or the offshore budget services. But we know what we are doing when it comes to key services:

  1. Security architectures
  2. Infrastructure architecture and design services
  3. Business architectures
  4. IT Governance
  5. ITSM / ITIL Operations Excellence
  6. IT Collaboration

Building A “Digital Economy Class”

Can we build the digital equivalent of “economy class” services? Sure. But so can others. The toughest part of this is setting the right scope so that we can maintain “first class” attributes in a commodity-oriented market segment. And the way that we deliver this is through our team. Our team wants to offer “white glove” service at an affordable price. We won’t skimp on quality. And our team will innovate wherever possible. We don’t want to deliver the most expensive service. But our team does want to deliver the most affordable “first class” experience. We will get you where you want to go. That is our first priority. But we will make sure that you have enough leg room so that we don’t cut off the circulation to your toes.

But how can you do all of this?

You have to save money some place. So we believe in helping you to reclaim the value from past investments. We want to help you to ‘recycle’ (and redeploy) your technology assets. You’ve already spent thousands (or millions) of dollars for point solutions. And you have a lot of technological assets that can be more fully utilized and/or repurposed.

For some companies, the journey is simple. You can extend the number of years that you keep assets (assuming that you have purchased them). So whenever you buy new infrastructure for a new project, you can cascade established (and lower priority) applications to your more mature infrastructure platforms. There are risks to accept when you do this. But with every new generation of IT technology, those risks are diminishing.

You can also increase systems utilization for existing systems – especially hosting platforms. Because technology efficiency has increased, you can now run your systems at higher utilization levels than you did in the past. So some companies can leverage what economists call economies-of-scale.

Finally, you can re-purpose existing assets that are not currently used. You may not have any such assets. But we have found that many companies have not focused upon their asset inventory and asset disposal processes. Indeed, most companies have surplus (older) assets that can be used for lower-priority tasks. Of course, this assumes that you have both a 1-n priority list of your application/system assets and a 1-n list of your technology assets. If you have done just such an asset prioritization, then we can help you to create service tiers and allocate systems to appropriate platforms. If you haven’t categorized your assets into 1-n lists, then we can also help you to do that.

A Simple Example

We have the same technology needs that every other company has. But as a small and nimble company, we have to wring every bit of value out of every asset. So when I started to do the speaking (i.e., Meetup) circuit in Chicagoland, I needed to have additional capabilities. Specifically, I needed a good presentation platform. That meant having a good laptop, a good traveling network infrastructure, and a good projector.

In most small and medium-sized businesses, I only have to bring my laptop. The network and the projector are usually provided by the host. And this is the same for large venues; there is almost always network infrastructure and a projector. But this is not the case for one of my upcoming presentations.

The venue for this presentation will be a room in a local pub. My host is graciously providing a projector. And the venue may or may not have wireless networking. But since my presentation is about security and privacy, I really didn’t want to run my presentation across an ill-secured public hotspot. So I decided that I wanted to bring my own network to the venue.

This could have been done in one of several ways.
  • Purchase a new router with an embedded mobile network interface: While working for major corporations, I would have just requisitioned what I needed. But I no longer have an infrastructure budget of my own. So every purchase must be connected to a real revenue opportunity. And many of my presentations are now for lead generation. So this option was a non-starter.
  • Re-purpose an old wireless router (that is in our office inventory): Since the router is big, it might have been quite impressive. It’s not a rack-mounted device. But it is bulky. And it doesn’t have built-in mobile network access.
  • Purchase a mobile hotspot: I may do that at some point. But since this is a meetup function, it made no sense to commit new corporate or personal funds to a network device, an access contract, or a pre-paid SIM.
  • Build our own mobile hotspot: I consulted our asset inventory and found an unused mobile phone. It was an old Nexus 6p that I had used while working for a carrier. It is an existing asset. And we do have unlimited data plans. So it would be possible to temporarily move an existing SIM to this device. Since this solution meant zero incremental capital or expense investments, I decided to pursue this option. After all, I could always purchase a solution after I tried this option.

The Pixel Experience

I had two phones that I could use. One is my Samsung Galaxy S8+. The other is a Nexus 6p (from Huawei). I needed one phone to be the hotspot, and the other phone would then be the device that I would display on the projector. It would have been nice to do both on the same phone. Unfortunately, the built-in hotspot capability turns off the built-in screen casting. Could I have paid for a presentation app that would have done this? I probably could have. But I wanted a stock experience as much as possible. Since all of my mobile phone privacy apps are on my “daily driver” (the Samsung S8+), I decided to use the Nexus 6p as the hotspot platform.

I took a few hours and I rebuilt the Nexus 6p. Specifically, I decided to upgrade the phone to a build that would support Android Pie (i.e., Android 9.x). I did this to ensure that I would get the latest hotspot software from the Android team. Then I swapped SIM cards so that I could use my T-Mobile unlimited data on my “presentation” hotspot. Apart from a few hiccups that I encountered while unlocking the bootloader, the process was remarkably simple. When I was done, I had a shiny new Nexus 6p running the Pixel Experience ROM (featuring Android Pie).
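
For readers who have never re-flashed a phone, here is a hedged sketch of the generic flow, driven from a desktop with adb and fastboot installed. The exact sequence varies by device and ROM, the image and zip names are placeholders, and unlocking the bootloader wipes the phone, so treat this as an outline rather than a recipe.

```python
# A hedged outline of the generic unlock-and-flash flow (placeholders throughout).
import subprocess

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("adb", "reboot", "bootloader")                 # drop the phone into its bootloader
run("fastboot", "flashing", "unlock")              # older devices use 'fastboot oem unlock'
run("fastboot", "flash", "recovery", "twrp.img")   # placeholder custom recovery image
run("fastboot", "reboot")

# After booting into the custom recovery and selecting "ADB sideload":
run("adb", "sideload", "pixel-experience.zip")     # placeholder ROM zip
```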

Bottom Line

When you need “economy class” services, you can still find distinctiveness. A good company provides economic value to its customers while not sacrificing the personal touches. We did this for ourselves when we leveraged existing gear in innovative ways. You can do this for yourself by selecting technology experts who share your desire to provide high quality to your customers while leveraging the best value at hand.

Mobile Privacy Demands Some Sacrifices

Managing Mobile Privacy

As noted previously, the effort to maintain anonymity while using the Internet is a never-ending struggle. We have been quite diligent about hardening our desktop and laptop systems. This included a browser change, the addition of several browser add-ons, the implementation of a privacy-focused DNS infrastructure, and the routine use of a VPN infrastructure. But while we focused upon the privacy of our static assets, our mobile privacy was still under siege.

Yes, we had done a couple of routine things (e.g., browser changes, add-ons, and use of our new DNS infrastructure). But we had not yet spent any focused time upon improving the privacy of our handheld assets. So we have just finished spending a few days addressing quite a few items. We hope that these efforts will help to ensure enhanced mobile privacy.

Our Mobile Privacy Goals

Before outlining the key items that we accomplished, it is important to highlight our key goals:

  1. Start fresh. It would be nearly impossible to retrofit a hardened template onto an existing base – especially if you use a BYOD strategy. That’s because the factory images for most phones are designed to leverage existing tools – most of which exact an enormous price in terms of their privacy concessions.
  2. Decide whether you wish to utilize open source tools (that have been reviewed) or trust the vendor of the applications which you will use. Yes, this is the Apple iOS v. Android issue. And it is a real decision. If it were just about cost, the choice would be easy – but cost is only one part of the equation.
  3. Accept the truth that becoming more private (and more anonymous) will require breaking the link to most Google tools. Few of us realize just how much data each and every mobile app collects. And on Android phones, this “tax” is quite high. For Apple phones, the Google “tax” is not as high. But that “good news” is offset by the “bad news” that Apple retains exclusive rights to most of its source code. Yes, the current CEO has promised to be good. [Note: But so did the original Google leaders. And as of today, Google has abandoned its promise to “do no evil”.] But what happens when Mr. Tim Cook leaves?
  4. Act on the truth of the preceding point. That means exchanging Google Apps for apps that are more open and more privacy-focused. If you want to understand just how much risk you are accepting when using a stock Android phone, just install Exodus Privacy and see what your current apps can do. The terrifying truth is that we almost always click the “Allow” button when apps are installed. You must break that habit. And you must evaluate the merits of every permission request. Remember, the power to decide which apps you run is one of the greatest powers that you have. So don’t take it lightly.
  5. Be aware that Google is not the only company that wishes to use you (and your data) to add profits to their bottom line. Facebook does it. Amazon does it. Apple does it. Even Netflix does it. In fact, almost everyone does it. Can you avoid being exploited by unfeeling corporate masters? Sure, if you don’t use the Internet. But since that is unlikely, you should be aware that you are the most important product that most tech companies sell. And you must take steps to minimize your exploitation risk.
  6. If and where possible, we will host services on our own rather than rely upon unscrupulous vendors. Like most executives, I have tremendous respect for our partner providers. But not every company that we work with is a partner. Some are just vendors. And vendors are the ones who will either exploit your data or take no special interest in protecting your data. On the other hand, no one knows your business better than you do. And no one cares about your business as much as you do. So wherever possible, trust your own teams – or your valued (and trusted) partners.

Our Plan of Attack

With these principles in mind, here is our list of what we’ve done since last week (a short scripting sketch follows the list):

    Update OS software for mobile devices
        Factory reset of all mobile devices
        SIM PIN
        Minimum 16-character device PIN
    Browser: Firefox & TOR Browser
    Search Providers: DuckDuckGo
    Browser Add-ons
        Content Blocking
            Ads: uBlock Origin
            Scripts: uMatrix
            Canvas Elements: Canvas Blocker
            WebRTC: Disable WebRTC
            CDN Usage: Decentraleyes
            Cookie Management: Cookie AutoDelete
        Isolation / Containers: Firefox Multi-Account Containers
    Mobile Applications
        Exodus Privacy
        Package Disabler Pro
        OpenVPN + VPN Provider S/W
        Eliminate Google Tools on Mobile Devices
            Google Search -> DuckDuckGo or SearX
            GMail -> K-9 Mail
            GApps -> "Simple" Tools
            Android Keyboard -> AnySoftKeyboard
            Stock Android Launcher -> Open Launcher
            Stock Android Camera -> Open Camera
            Stock Android Contacts / Dialer -> True Phone
            Google Maps -> Open Street Maps (OSM)
            Play Store -> F-Droid + APKMirror
            YouTube -> PeerTube + ??? 
        Cloud File Storage -> SyncThing
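
To make one entry on that list concrete, here is a hedged sketch of how the “Eliminate Google Tools” step can be scripted from a desktop with adb. Disabling (rather than uninstalling) packages requires no root and is reversible; the package names are common examples, not a recommendation list.

```python
# A hedged sketch: list the Google packages on an attached Android device,
# then disable a couple of them for the current user (reversible, no root).
import subprocess

def adb(*args: str) -> str:
    """Run an adb command and return its standard output."""
    return subprocess.run(["adb", *args], capture_output=True, text=True, check=True).stdout

# Review every installed Google package before deciding what to disable.
packages = [line.removeprefix("package:").strip()
            for line in adb("shell", "pm", "list", "packages").splitlines()
            if "com.google." in line]
print("\n".join(sorted(packages)))

# Disable a few examples (they can be re-enabled later with 'pm enable').
for pkg in ["com.google.android.youtube", "com.google.android.apps.maps"]:
    if pkg in packages:
        adb("shell", "pm", "disable-user", "--user", "0", pkg)
        print(f"Disabled {pkg}")
```
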
Our Results

Implementing the above list took far more time than we anticipated. And some of these things require some caveats. For example, there is no clear competitor for YouTube. Yes, there are a couple of noteworthy challengers (e.g., PeerTube, D-Tube, etc). But none have achieved feature sufficiency. So if you must use YouTube, then please do so in a secure browser.

You might quibble with some of the steps that we took. But we believe that we have a very strong case for each of these decisions and each of these steps. And I will gladly discuss the “why’s” for any of them – if you’re interested. Until then, we have “cranked it up to eleven”. We believe that we are in a better position regarding our mobile privacy. And after today, our current “eleven” will become the new ten! Continuous process improvement, for the win!

Long Past Time For Good Security Headers

The State of HTTP Security Headers

Over the past few months, I’ve focused my attention upon how you can be safer while browsing the Internet. One of the most important recommendations that I have made is for you to reduce (or eliminate) the loading and execution of unsafe content. So I’ve recommended ad blockers, a plethora of browser add-ons, and even the hardening of your premises-based services (e.g., routers, NAS systems, IoT devices, and DNS). Of course, this only addresses one side of the equation (i.e., the demand side). In order to improve the ‘total experience’ for your customers, you will also need to harden the services that you provide (i.e., the supply side). And one of the most often overlooked mechanisms for improvement is the proper use of HTTP security headers.

Background

According to the Open Web Application Security Project (OWASP), content injection is still the single largest class of vulnerabilities that content providers must address. When coupled with cross-site scripting (XSS), it is clear that hostile content poses an existential threat to many organizations. Yes, consumers must block all untrusted content as it arrives at their browser. But every site owner should first ensure that they inform every client about the content that they will be sending. Once these declarations are made, the client (i.e., the browser) can then act to trust or distrust the content that it receives.

The notion that a web site should declare the key characteristics of its content stream is nothing new. What we now call a content security policy (CSP) has been around for a very long time. Indeed, the fundamental descriptions of content security policies were discussed as early as 2004. And the first version of the CSP standard was published back in 2012.

CSP Standards Exist – But Are Not Universally Used

According to the WhiteHat Security 2018 “Website Security Statistics Report”, a number of industries still operate chronically vulnerable websites. WhiteHat estimates that 52% of Accommodations / Food Services web sites are “Always Vulnerable”. Moreover, an additional 7% of these websites are “Frequently Vulnerable” (i.e., vulnerable for at least 263 days a year). Of course, that is the finding for one sector of the broader marketplace. But things are just as bad elsewhere. In the healthcare market, 50% of websites are considered “Always Vulnerable”, with an additional 10% classified as “Frequently Vulnerable”.

Unfortunately, few websites actually use one of the most potent elements in their arsenal. Most website operators have established software upgrade procedures. And a large number of them have acceptable auditing and reporting procedures. But unless they are subject to regulatory scrutiny, few organizations have even considered implementing a real CSP.

Where To Start

So let’s assume that you run a small business. And you had your daughter/son, niece/nephew, friend of the family, or kid next door build your website. Chances are good that your website doesn’t have a CSP. To check this out for sure, you should go to https://securityheaders.com and see if you have appropriate security headers for your website.
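
If you prefer to run a similar check from the command line, here is a minimal sketch using Python’s requests library. The URL is a placeholder, and securityheaders.com checks far more than this short list, so treat it as a quick smoke test.

```python
# A quick smoke test: fetch a page and report which common security headers are missing.
import requests

EXPECTED = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Referrer-Policy",
]

resp = requests.get("https://www.example.com", timeout=10)   # placeholder URL
for header in EXPECTED:
    status = "present" if header in resp.headers else "MISSING"
    print(f"{header}: {status}")
```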

In my case, I found that my website security posture was unacceptably low. [Note: As a National Merit Scholar and Phi Beta Kappa member, anything below A+ is unacceptable.] Consequently, I looked into how I could get a better security posture. Apart from a few minor tweaks, my major problem was that I didn’t have a good CSP in place.

Don’t Just Turn On A Security Policy

Whether you code the security headers in your .htaccess file or you use software to generate the headers automatically, you will be tempted to just turn on a security policy. While that is a laudable sentiment, I urge you not to do this – unless your site is not live. Instead, make sure that you use your proposed CSP in “report only” mode – as a starting point.
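
To make the “report only” advice concrete, here is a minimal sketch that assumes a Python/Flask front end. (My own site runs WordPress, where a plugin or an .htaccess rule does the same job, so this is purely illustrative.) The policy string and report endpoint are placeholders to adapt.

```python
# A minimal sketch of a CSP in "report only" mode for a hypothetical Flask site.
from flask import Flask, request

app = Flask(__name__)

CSP = "default-src 'self'; report-uri /csp-report"   # placeholder policy

@app.after_request
def add_report_only_csp(response):
    # Report-Only logs violations (in the browser console and to /csp-report)
    # but never blocks content, so a live site keeps working while you tune the policy.
    response.headers["Content-Security-Policy-Report-Only"] = CSP
    return response

@app.route("/csp-report", methods=["POST"])
def csp_report():
    # Browsers POST a JSON violation report here; log it for later review.
    app.logger.warning("CSP violation: %s", request.get_data(as_text=True))
    return "", 204

@app.route("/")
def index():
    return "<h1>Hello</h1>"
```

Once the violation reports dry up, the same policy can be moved to the enforcing Content-Security-Policy header.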

Of course, I chose the engineer’s path and just set up a default-src directive to allow only local content. In reality, I just wanted to see what would be blocked. So I activated my CSP in “blocking” mode (i.e., not “report only” mode). And as expected, all sorts of content was blocked – including the fancy sliders that I had implemented on my front page.

I quickly reset the policy to “report only” so that I could address the plethora of problems. And this time, I worked each problem one at a time. Surprisingly, it really did take some time. I had to determine which features came from which external sources. I then had to add these sources to the CSP. This process was very much like ‘whitelisting’ external sources in an ad blocker. But once I found all of the external sources, I enabled “blocking” mode. This time, my website functioned properly.

Bottom Line

In the final analysis, I learned a few important things.

  1. Security headers are an effective means of informing client browsers about the characteristics of your content – and your content sources. Consequently, they are an excellent means of displaying your content whitelist to any potential customer.
  2. Few website builders automatically generate security headers. There is no “Great and Powerful Oz” who will code all of this from behind the curtains – unless you specifically pay someone to do it. Few hosting platforms do this by default.
  3. Tools do exist to help with coding security headers – and content security policies. In the case of WordPress, I used HTTP Headers (by Dimitar Ivanov).
  4. While no single security approach can solve all security issues, using security headers should be added to the quiver of tools that you use when addressing website content security.

Privacy 0.8 – My Never-ending Privacy Story

This Is The Song That Never Ends

Privacy protection is not a state of being; it is not a quantum state that needs to be achieved. It is a mindset. It is a process. And that process is never-ending. Like the movie from the eighties, the never-ending privacy story features an inquisitive yet fearful child. [Yes, I’m casting each of us in that role.] This child must assemble the forces of goodness to fight the forces of evil. [Yes, in this example, I’m casting the government and corporations in the role of evildoers. But bear with me. This is just story-telling.] The story will come to an end when the forces of evil and darkness are finally vanquished by the forces of goodness and light.

It’s too bad that life is not so simple.

My Never-ending Privacy Battle Begins

There is a tremendous battle going on. Selfish forces are seeking to strip us of our privacy while they sell us useless trinkets that we don’t need. There are a few people who truly know what is going on. But most folks only laugh whenever someone talks about “the great Nothing”. And then they see the clouds rolling in. Is it too late for them? Let’s hope not – because ‘they’ are us.

My privacy emphasis began a very long time ago. In fact, I’ve always been part of the security (and privacy) business. But my professional focus began with my first post-collegiate job. After graduation, I worked for the USAF on the Joint Cruise Missile program. My role was meager. In fact, I was doing budget spreadsheets using both Lotus 1-2-3 and the SAS FS-Calc program. A few years later, I remember when the first MIT PGP key server went online. But my current skirmishes with the forces of darkness started a few years ago. And last year, I got extremely serious about improving my privacy posture.

My attention returned to privacy matters when I realized that my involvement on social media had invalidated any claims I could make about my privacy. So I decided to turn my gaze toward the 800-pound gorilla in the room.

My Never-ending Privacy Battle Restarts

Since then, I’ve deleted almost all of my social media accounts. Gone are Facebook, Twitter, Instagram, Foursquare, and a laundry list of other platforms. I’ve deleted (or disabled) as many Google apps as I can from my Android phone (including Google Maps). I’ve started my new email service – though the long process of deleting my GMail accounts will not end for a few months.

At the same time, I am routinely using a VPN. And as I’ve noted before, I decided to use NordVPN. I have switched away from Chrome and I’m using Firefox exclusively. I’ve also settled upon the key extensions that I am using. And at this moment, I am using the Tor browser about half of the time that I’m online. Finally, I’ve begun the process of compartmentalizing my online activities. My first efforts were to use containers within Firefox. I then started to use application containers (like Docker) for a few of my key infrastructure elements. And recently I’ve started to use virtual guests as a means of limiting my online exposure.
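
As an illustration of the “application container” approach, here is a minimal sketch that uses the Docker SDK for Python to run one piece of infrastructure in its own container. The host paths, ports, and timezone are assumptions for illustration.

```python
# A minimal sketch: run a Pi-hole DNS filter in an isolated container via the Docker SDK.
import docker

client = docker.from_env()

container = client.containers.run(
    "pihole/pihole:latest",
    name="pihole",
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    ports={"53/tcp": 53, "53/udp": 53, "80/tcp": 8081},   # assumed host ports
    environment={"TZ": "America/Chicago"},                # assumed timezone
    volumes={"/opt/pihole/etc": {"bind": "/etc/pihole", "mode": "rw"}},
)

print(f"Started {container.name} ({container.short_id})")
```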

Never-ending Progress

But none of this should be considered news. I’ve written about this in the past. Nevertheless, I’ve made some significant progress towards my annual privacy goals. In particular, I am continuing my move away from Windows and towards open source tools/platforms. In fact, this post is the first that I am publishing to my site from a virtual client – specifically, a Linux guest.

For some folks, this will be nothing terribly new. But for me, it sets a new high-water mark on the road to Windows elimination. As of yesterday, I access my email from Linux – not Windows. I’m blogging on Linux – not Windows. And I’m hosting my Plex server on Linux – not Windows. So I think that I can be off of Windows by the end of 2Q19. And I will couple this with being off GMail by 4Q19.

Bottom Line

I see my goal on the visible horizon. I will meet my 2019 objectives. And if I’m lucky, I may even exceed them by finishing earlier than I originally expected. So what is the reward at the end of these goals? That’s simple. I get to set a new series of goals regarding my privacy.

At the beginning of this article, I said, “The story will come to an end when the forces of evil and darkness are finally vanquished by the forces of goodness and light.” But the truth is that the story will never end. There will always be individuals and groups who want to invade your privacy to advance their own personal (or collective) advantage. And the only way to combat this will be a never-ending privacy battle.

Secure File Transfer Ain’t So Easy

Secure File Sharing Ain’t So Easy

For years, businesses and governments have used secure file transfer to send sensitive files across the Internet. Their methods included virtual private networks, secure encrypted file transfer (sftp and ftps), and transfers of secure / encrypted files. Today, the “gold standard” probably includes all three of these techniques simultaneously.

But personal file transfer has been quite different. Most people simply attach an un-encrypted file to an email message that is then sent across an un-encrypted email infrastructure. Sometimes, people place an un-encrypted file on a USB stick. These people perform a ‘secure file transfer’ by handing the USB stick to a known (and trusted) recipient. More recently, secure file transfers could be characterized by trusting a third-party data hosting provider. For many people, these kinds of transfers are secure enough.

Are Personal File Transfers Inherently Secure?

These kinds of transfers are NOT inherently secure.

  • In the case of email transfers, the only ‘secure’ element might be a user/password combination on the sender or receiver’s mailbox. Hence, the data may be secure while at rest. But Internet email is completely insecure while in transit. Some enterprising people have encrypted their messages end-to-end (by using tools like PGP/GPG – a minimal sketch follows this list). Others have secured their SMTP connections across a VPN – or an entirely private network. Unfortunately, email is notorious for being sent across numerous relays – any one of which could forward messages insecurely or even read un-encrypted messages. And there is very little validation performed on email metadata (e.g., no To: or From: field validation).
  • Placing a file on a USB stick is better than nothing. But there are a few key problems when using physical transfer. First, you have to trust the medium that is being used. And most USB devices can be picked up and whisked away without their absence even being noticed. Yes, you can use encryption to provide protection while the data is on the device. But most folks don’t do this. Second, even if the recipient treats the data with care, the data itself remains on an inherently mobile (and inherently less secure) medium.
  • Fortunately, modern users have learned not to use email and not to use physical media for secure file transfer. Instead, many people choose to use a cloud-based file hosting service. These services require logins to access the data. And some of these services even encrypt files while on their storage arrays. And if you’re really thorough when selecting your service provider, secure end-to-end transmission of your data may also be available. Of course, the weakest point of securing such transfers is the service provider. Because the data is at rest in their facility, they would have the ability to compromise the data. So this model requires trusting a third-party to protect your assets. Yes, this is just like a bank that protects your demand deposits. But if you aren’t paying for a trustworthy partner, then don’t be surprised if they find some other means to monetize you and your assets.
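
As promised above, here is a minimal sketch of the PGP/GPG approach, using the python-gnupg wrapper around GnuPG. The file names and recipient are placeholders, and the recipient’s public key must already be in your keyring.

```python
# A minimal sketch: encrypt a file before it ever touches email or a USB stick.
import gnupg

gpg = gnupg.GPG()                               # uses the default ~/.gnupg keyring

with open("tax-return.pdf", "rb") as source:    # placeholder sensitive file
    result = gpg.encrypt_file(
        source,
        recipients=["recipient@example.com"],   # placeholder key identity
        output="tax-return.pdf.gpg",
        armor=False,                            # binary output; set True for ASCII-armored text
    )

if result.ok:
    print("Encrypted copy written to tax-return.pdf.gpg")
else:
    print(f"Encryption failed: {result.status}")
```
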
What Are The Characteristics of Secure File Transfers?

Secure file transfers should have the following characteristics:

  • The data being transferred should be encrypted by the originator and decrypted by the recipient.
  • Both the originator and the recipient should be authenticated before access is granted – either to the secure transport mechanism or to the data itself.
  • All data transfers must be secured from the originator to the recipient.
  • If possible, there should be no points of relay between the originator and the recipient OR there should be no requirements for a third-party to store and forward the complete message.

What Is Your Threat Model?

Are all of these characteristics required? The paranoid security analyst within me says, “Of course they are all required.” That same paranoid person would also add requirements concerning the strength of all of the ciphers that are to be used as well as the use of multi-factor authentication. But the requirements that you have should be driven by the threats that you are trying to mitigate – not by the coolest or most lauded technologies.

For most people, the threat that they are seeking to mitigate is one or more of the following: a) the seizure and exploitation of data by hackers, b) the seizure and exploitation of data by ruthless criminals and corporations, or c) the seizure and exploitation of data by an obsessive (and/or adversarial) governmental authority – whether foreign or domestic. Of course, some people are trying to protect against corporate espionage. Others are seeking to protect against hostile foreign actors. But for the sake of this discussion, I will be focusing upon the threat model encountered by typical Internet users.

Typical Threats For The Common American Family

While some of us do worry about national defense and corporate espionage, most folks are just trying to live their lives in obscurity – free to do the things that they enjoy and the things that they are called to do. They don’t want some opportunistic thief stealing their identity – and their family’s future. They don’t want some corporation using their browsing and purchasing habits in order to generate corporate ad revenue. And they don’t want a government that could obstruct their freedoms – even if it was meant in a benign (but just) cause.

So what does such a person need in a secure file transfer capability? First, they need file transfers to be encrypted – from their desk to the desk of the ultimate recipient. Second, they don’t want to “trust” any third-party to keep their data “safe”. Third, they want something that can use the Internet for transport – but do so in relative safety.

Enter Onionshare

It is surprisingly hard to share files across the Internet both easily and securely. It can be done easily – via email, ftp, and cloud servers. It can be done reasonably securely – via encrypted email, secure ftp, p2p (e.g., BitTorrent), and even secure cloud services. But all of these secure solutions are relatively difficult to implement. What is needed is a simple tool. Onionshare is just such a tool.

Onionshare was developed by Micah Lee in 2014. It is an application that sets up a hidden service on the Tor network. Tor is a multi-layered encryption and routing tool that was originally developed by the Department of the Navy. Today, it is the de facto reference implementation for secure, point-to-point connections across the Internet. And while it is not a strictly anonymous service, it offers a degree of anonymity that is well beyond the normal browsing experience. For a detailed description of Tor, take a look here. And for one of my first posts about Tor, look here.

Onionshare sets up a web server. It then establishes that server as a .onion service on the Tor network. The application then generates a page (and a URL) for that service. This URL points to a web page with the file(s) to be transferred. The person hosting the file(s) can then securely send the thoroughly randomized URL to the recipient. Once the recipient receives the URL, the recipient can download the file(s). After the secure file transfer is completed, the service is stopped – and the file(s) are no longer available on Tor.
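
For the curious, here is a minimal sketch of the mechanism that Onionshare automates: serve a directory with a throwaway local web server, then publish it as an ephemeral onion service through the Tor control port (using the stem library). It assumes a running Tor daemon with its control port enabled; the port numbers are illustrative.

```python
# A minimal sketch of the Onionshare idea: a local web server published as an
# ephemeral .onion service. Assumes Tor is running with ControlPort 9051 enabled.
import http.server
import socketserver
import threading

from stem.control import Controller

LOCAL_PORT = 8000   # where the throwaway web server listens

# 1. Serve the current directory on localhost only.
httpd = socketserver.TCPServer(("127.0.0.1", LOCAL_PORT), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# 2. Ask the local Tor daemon to publish it as a hidden (onion) service.
with Controller.from_port(port=9051) as controller:
    controller.authenticate()   # assumes cookie authentication is configured
    service = controller.create_ephemeral_hidden_service(
        {80: LOCAL_PORT},        # map onion port 80 to the local web server
        await_publication=True,
    )
    print(f"Share this over a secure channel: http://{service.service_id}.onion")
    input("Press Enter to tear the service down...")
    controller.remove_ephemeral_hidden_service(service.service_id)

httpd.shutdown()
```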

Drawbacks

This secure file transfer model has a few key weaknesses. First and foremost, the URL that is presented must be sent securely to the recipient. This can be done via secure email (e.g., ProtonMail to ProtonMail) or via secure IM (e.g., Signal). But if the URL is sent via insecure methods, the data could be potentially hijacked by a hostile actor. Second, there is no authentication that is performed when the ‘recipient’ connects to the .onion service. Whoever first opens that URL in a TOR browser can access (and later delete) the file(s). So the security of the URL link is absolutely paramount. But as there are no known mechanisms to index hidden .onion servers, this method is absolutely sufficient for most casual users who need to securely send sensitive data back-and-forth.

Bottom Line

If you want to securely send documents back-and-forth between yourself and other individuals, then Onionshare is a great tool. It works on Windows, MacOS, and a variety of Linux distros. And the only client requirement to use the temporary .onion server is a TOR-enabled browser. In short, this is about as ‘fire and forget’ as you could ever expect to find.

Reducing Threat Surface – Windows Minimization

Let Go of the Past

Last year, my household quit cable TV. The transition wasn’t without its hiccups. But leaving cable has had some great benefits. First, we are paying less money per month. Second, we are watching less TV per month. Third, I have learned a whole lot of things about streaming technologies and about over-the-air (OTA) TV options. Last year was also the year that I put a home automation program into effect. But both of these initiatives were done in 2018. Now I’ve decided that security and Windows minimization will be the key household technology initiatives for 2019.

How Big Is Your Threat Surface?

What is “Windows minimization”? That is simple. “Windows minimization” is the intentional reduction of Windows instances within your organization. Microsoft Windows used to be the platform for innovation and commercialization. Now it is the platform for running legacy systems. Like mainframes and mini-computers before it, Windows is no longer the “go to” platform for new development. C# and .NET are no longer the environment of choice for new applications. And SQL Server never was the “go to” platform for most databases. And if you look at the total number of shipped operating systems, it is clear that Android and iOS have become the only significant operating systems on the mobile platform.

Nevertheless, Microsoft products remain the most vulnerable operating system products (based upon the total number of published CVE alerts). Adobe remains the most vulnerable “application” product family. But these numbers only reflect the total number of “announced” vulnerabilities. They don’t take the total number of deployed or exploited systems into account. Based upon deployed instances, Android and iOS remain the most exploited platforms.

Microsoft’s vulnerable status isn’t because their products are inherently less safe. To be candid, all networked computing platforms are unsafe. But given the previous predominance of Windows, Microsoft technologies were the obvious target for most malware developers.

Of course, Windows dominance is no longer the case. Most people do the majority of their casual computing on their phones – which use either Linux (Android) or Unix (iOS). And while Microsoft’s Azure platform is a fine web/cloud platform, most cloud workloads run on Linux and/or on platforms like OpenStack or AWS. So the demand for Windows is declining while the security of all other platforms is rapidly improving.

The Real Reason For Migrating

It is possible to harden your Windows systems. And it is possible to fail to harden your Linux systems. However, it is not possible to easily port a product from one OS to another – unless the software vendor did that for you already. In most cases, if the product you want isn’t on the platform that you use, then you either need to switch your operating platform or you need to convince your software supplier to support your platform.

Heading To The Tipping Point

It is for this reason that I have undertaken this Windows minimization project. New products are emerging every day. Most of them are not on Windows. They are on alternative platforms. Every day, I find a new widget that won’t run on Windows. Of course, I can always run a different operating system on a Windows host. But once the majority of my applications run on Linux, it will make more sense to run a Linux-hosted virtualization platform and host a Windows guest system for the legacy apps.

And I am rapidly nearing that point. My Home Assistant runs on a Raspberry Pi. It has eleven application containers running within Docker (on HassOS). My DNS system runs on a Raspberry Pi. My OpenVPN system is hosted on a Pi.

Legacy Anchors

But a large number of legacy components remain on Windows. Cindy and I use Microsoft Office for general documents – though PDF documents from LibreOffice are starting to increase their share of total documents created. My podcasting platform (for my as yet unlaunched podcast) runs on Windows. And my Plex Media Server (PMS) runs on Windows.

Fortunately, PMS runs on Linux. So I built an Ubuntu 18.10 system to run on VirtualBox. And just as expected, it works flawlessly. Yes, I had to figure a few things out along the way – like using the right CIFS mount options to access my NAS. But once I worked out these minor tweaks, I loaded all of my movies onto the new Plex server. I fully expect that once I transition my remaining apps, I’ll turn my Windows server into an Ubuntu 18.04 LTS server.
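
For reference, the NAS tweak boiled down to mounting the share as CIFS with an explicit protocol version. Here is a hedged sketch; the share path, mount point, and credentials file are placeholders for illustration.

```python
# A hedged sketch: mount a CIFS/SMB share read-only for the Plex service account.
import subprocess

subprocess.run(
    [
        "sudo", "mount", "-t", "cifs",
        "//nas.local/media",                 # placeholder NAS share
        "/mnt/media",                        # local mount point (must already exist)
        "-o", "credentials=/etc/samba/nas-creds,vers=3.0,ro,uid=plex,gid=plex",
    ],
    check=True,
)
print("NAS share mounted at /mnt/media")
```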

Final Takeaways

I have taken my first steps. I’ve proven that Plex will run on Linux. I know that I can convert mobile print services from Windows to Linux. And I can certainly run miscellaneous apps (like TurboTax) on a Windows guest running on Linux. But I want to be sure before I convert my Windows server to Linux. So I will need to complete a software usage survey and build my data migration plan.

I wonder how long it will be before I flip the switch – once and for all.

Riotous Babel or Collaborative Bazaar

Matrix: Decentralized and Secure Collaboration

Every group has its own collection of stories. In the Judeo-Christian world, the Tower of Babel is one such story. It has come to symbolize both the error of hubris and the reality of human disharmony. Within the open source community, the story of the Cathedral and the Bazaar (a.k.a., CatB) is another such story. It symbolizes the two competing schools of software development. These schools are: 1) the centralized management of software by a priestly class (i.e., the cathedral), and 2) the decentralized competition found in the cacophonous bazaar. In the case of computer-based collaboration, it is hard to tell whether centralized overlords or a collaborative bazaar will eventually win.

Background

When I began my career, collaboration tools were intimate. You either discussed your thoughts over the telephone, you spoke with someone face-to-face, or you discussed the matter in a meeting. The sum total of tools available was the memorandum, the phone, and the meeting. Yes, the corporate world did have tools like PROFS and DISOSS. But both sets of tools were hamstrung either by their clumsiness (e.g., the computer “green screens”) or by the limitations of disconnected computer networks.

By the mid-eighties, there were dozens of corporate, academic, and public sector email systems. And there were just as many collaboration tools. Even the budding Internet had many different tools (e.g., sendmail, postfix, pine, elm).

The Riotous Babel

As my early career began to blossom (in the mid-nineties), I had the privilege of leading a bright team of professionals. Our fundamental mission was the elimination of corporate waste. And much of this waste came in the form of technological redundancy. So we consolidated from thirteen (13) different email clients to a single client. And we went from six (6) different email backbones to one backbone. At first, we chose to use proprietary tools to accomplish these consolidations. But over time, we moved towards more open protocols (like SMTP, X.500, and XMPP).

Since then, collaboration tools have moved from email and groupware tools (e.g., Lotus Notes) to web-based email and collaboration tools (e.g., Exchange and Confluence/Jira). Then the industry moved to “next-generation” web tools like Slack and even Discord. All of these “waves” of technology had one thing in common: they were managed by a centralized group of professionals who had arcane knowledge AND sufficient funding. Many of these tools relied upon open source components. But in almost every case, the total software solution had some “secret sauce” that ensured dominance through proprietary intellectual property.

The Times, They Are A Changing

Over the past few years, a new kind of collaboration tool has begun to emerge: the decentralized and loosely coupled system. The foremost tool of this kind is Matrix (and clients like Riot). In this model, messages flow between decentralized servers. Data sent between these servers is encrypted. And the set of data transferred between these servers is determined by the “interests” of local accounts/users. Currently, the directory for this network is centralized. There is a comprehensive ‘room’ directory at https://vector.im. But work is underway to build a truly decentralized authentication and directory system.
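
To show how small the client side of this model can be, here is a minimal sketch using the matrix-nio library: the client talks only to its own homeserver, and federation delivers the message to every interested room member. The homeserver, account, room ID, and password are placeholders.

```python
# A minimal sketch: log into a Matrix homeserver and post one message to a room.
import asyncio
from nio import AsyncClient

async def main() -> None:
    client = AsyncClient("https://matrix.example.org", "@demo:example.org")
    await client.login("correct-horse-battery-staple")   # placeholder password

    await client.room_send(
        room_id="!abc123:example.org",                    # placeholder room ID
        message_type="m.room.message",
        content={"msgtype": "m.text", "body": "Hello from the collaborative bazaar!"},
    )
    await client.close()

asyncio.run(main())
```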

My Next Steps

One of the most exciting things about having a lab is that you get to experiment with new and innovative technologies. So when Franck Nijhof decided to add a Matrix server to the Hass.io Docker infrastructure, I leaped at the chance to experiment. As of last night, I have a Matrix instance running on my Home Assistant system. After a few hours with it, I am quite confident that we will see Matrix (or a similar tool) emerge as an important part of the next wave of IoT infrastructure. But until then, I am thrilled that I can blend my past and my future – and do it through a collaborative bazaar.

When Free (“as in VPN”) Isn’t Free!

Nothing Of Value Is Free

The modern Internet is a dangerous place. [Note: It has always been ‘dangerous’. But now the dangers are readily apparent.] There are people and institutions that want to seize your private information and use it for their own advantages. You need look no further than Facebook (or China) to realize this simple fact. As a result of these assaults on privacy, many people are finally turning to VPN ‘providers’ as a means of improving their security posture. But free VPN services may not be so free.

Background

In the eighties, universities in the US (funded by the US federal government) and across the globe began to freely communicate – and to share the software that enabled these communications. This kind of collaboration helped to spur the development of the modern Internet. And in the nineties, free and open source software began to seize the imagination (and self-interest) of many corporations.

At that time, there were two schools of thought concerning free software: 1) The RMS school believed that software was totally free (“as in speech”) and should be treated as a community asset, and 2) The ESR school believed that open source was a technical means of accelerating the development of software and improving the quality of software. Both schools were founded upon the notion that free and open software was “‘free’ as in speech, not as in ‘beer’.” [Note: To get a good insight into the discussions of free software, I would encourage you to read The Cathedral and the Bazaar by Eric S. Raymond.]

While this debate raged, consumers had become accustomed to free and open software – when free meant “as in beer”. By using open source or shareware tools, people could get functional software without any licensing or purchasing fees. Some shareware developers nagged you for a contribution. Others just told you their story and let you install/use their product “as is”. So many computer consumers became junkies of the “free” stuff. [Insert analogies of drug dealers (or cigarette companies) freely distributing ‘samples’ of their wares in order to hook customers.]

VPN Services: The Modern Analog

Today, consumers still love ‘free stuff’ – whether that is ‘free’ games for their phones, ‘free’ email services for their families (or their businesses), or free security products (like free anti-virus and free anti-malware tools). And recently, free VPN services have begun to emerge. I first saw them delivered as a marketing tool. A few years ago, the Opera team bundled a free VPN with their product in the hopes that people would switch from IE (or Firefox) to Opera.

But free VPN services are now available everywhere. You can log into the Apple Store or the Play Store and find dozens of free VPN offers. So when people heard that VPN services offer encryption and they saw that ‘vetted’ VPN services (i.e., apps/services listed in their vendor’s app store) were available for free, people began to exploit these free VPN services.

Who Pays When Free VPN Isn’t Free?

But let’s dig into this a little. Does anyone really believe that free VPN services (or software) are free (i.e., “as in beer”)? To answer this question, we need only look to historical examples. Both FOSS and shareware vendors leveraged the ‘junkie’ impulse. If they could get you to start using their product, they could lock you into their ecosystem – thus guaranteeing massive collateral purchases. But their only costs were their time – measured in the labor that they poured into developing and maintaining their products.

Today, VPN service providers also have to recoup the costs of their infrastructure. This includes massive network costs, replicated hardware costs, and substantial management costs. So someone has to cover these massive costs. And do they do this out of the goodness of their hearts? Hardly.

Only recently have we learned that free social media products are paid for through the resale of our own personal data. When things are ‘free’, we are the product being sold. So this fact raises the question: who is paying for this infrastructure when you aren’t paying for it?

Free – “As In ‘China'” – Paid For It

Recently, Top10VPN (a website operated by London-based Metric Labs Ltd) published a report about free VPN providers listed on the App Store and the Play Store. What they found is hardly surprising.

  • 59% of apps have links to China (17 apps)
  • 86% of apps had unacceptable privacy policies, with issues including:
  • 55% of privacy policies were hosted in an amateur fashion (e.g., on free WordPress sites with ads)
  • 64% of apps had no dedicated website – several had no online presence beyond app store listings
  • Over half (52%) of customer support emails were personal accounts (i.e., Gmail, Hotmail, Yahoo, etc.)
  • 83% of app customer support email requests for assistance were ignored

Just because a VPN provider has sketchy operating practices or is somehow loosely associated with Chinese interests does not necessarily mean that the service is compromised. Nor does it mean that your identity has been (or will be) compromised. It does mean that you must double-check your free provider. And you need to know that free is never free. Know what costs you are bearing BEFORE you sign up for that free VPN.

William Chalk (writing at Hackernoon) may have said it best: “In allowing these opaque and unprofessional companies to host potentially dangerous apps in their stores, Google and Apple demonstrate a failure to properly vet the publishers utilizing their platform and curate the software promoted therein.” But resolution of these shortcomings is not up to Apple and Google. It is up to us. We must take action. First, we must tell Apple and Google just how disappointed we are with their product review processes. And then we must vote with our dollars – by using fee-based VPNs. Why? Because a free VPN may not ensure free speech.

Full Disclosure: I am a paid subscriber of a fee-based VPN service. And at this time, I trust my provider. But even I double-checked my provider after reading that report.

2019 Resolution #2: Blocking Online Trackers

The Myth of Online Privacy
Background

Welcome to the New Year. This year could be a banner year in the fight to ensure our online privacy. Until now, the tools of surveillance have overwhelmed the tools of privacy. And the lure of new Internet content has outweighed the real difficulty of protecting your online privacy. For years, privacy advocates (including myself) have chanted the mantra of using public key encryption. We have told people to use Tor or a commercial VPN. And we have told people to start using two-factor authentication. But we have downplayed the importance of blocking online trackers. Yes, security and privacy advocates blocked trackers for themselves. But most of us did not routinely recommend this as a first step in protecting the privacy of our clients.

But the times are changing.

Last year (2018) was a pivotal time in the struggle between surveillance and privacy. The constant reporting of online hacks has risen to a deafening roar. And worse still, we saw the shepherds of our ‘trusted platforms’ go under the microscope. Whether it was Sundar Pichai of Google or Mark Zuckerberg of Facebook, we have seen tech leaders (and their technologies) revealed as base – and ultimately self-serving. Until last year, few of us realized that if we don’t pay for a service, then we are the product that the service owners are selling. But our eyes have now been pried open.

Encryption Is Necessary

Security professionals were right to trumpet the need for encryption. Whether you are sending an email to your grandmother or inquiring about the financial assets that you’ve placed into a banker’s hands, it is not safe to send anything in clear text. Think of it this way. Would you put your tax filing on a postcard so that the mailman – and every person and camera between you and the IRS – could see your financial details? Of course you wouldn’t. You’d seal it in an envelope. You might even hand-deliver it to an IRS office. Or more recently, you might send your return electronically – with security protections in place to protect the key details of your finances.
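
For the technically curious, here is a minimal sketch of what “sealing the envelope” looks like in code. It uses the third-party Python ‘cryptography’ package and RSA-OAEP on a short, made-up message purely for illustration; real-world tools (PGP, S/MIME, TLS) layer hybrid encryption on top of the same basic idea.

```python
# Minimal sketch of public-key "envelope sealing" (illustrative only).
# Requires the third-party 'cryptography' package: pip install cryptography
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The recipient (e.g., the IRS) generates a key pair and publishes the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone can seal a (short) message with the public key...
ciphertext = public_key.encrypt(b"My adjusted gross income is ...", oaep)

# ...but only the holder of the private key can open the envelope.
print(private_key.decrypt(ciphertext, oaep))
```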

But these kinds of protections are only partial steps. Yes, your information is secure from when it leaves your hands to when it enters the hands of the intended recipient. But what happens when the recipient gets your package of information?

Encryption Is Not Enough

Do the recipients just have your ‘package’ of data, or do they have more? As all of us have learned, they most certainly have far more information. Yes, our ISP (i.e., the mailman) has no idea what the message says. But what happens when the recipient at the other end of the pipe gets your envelope? They see the postmarks. They see the address. They could even lift fingerprints from the envelope. And they can use this data. At the same time, by revealing your identity, you have provided the recipient with critical data that could be used to profile you, your friends and family, and even your purchasing habits.
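
To make that concrete, here is a small, self-contained Python sketch (the addresses are made up for illustration) showing that even when the body of a message is encrypted, the envelope metadata (sender, recipient, and usually the subject line) remains readable to anyone who handles it.

```python
# Sketch: encrypting the body does not hide the envelope metadata.
# The addresses below are made up for illustration.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "taxpayer@example.com"
msg["To"] = "intake@tax-agency.example"
msg["Subject"] = "2018 return"   # subject lines are rarely encrypted
msg.set_content("<ciphertext of the sealed tax return goes here>")

# Anyone handling the message (and, of course, the recipient) still sees:
print(msg["From"], msg["To"], msg["Subject"])
```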

So your safety hinges upon whether you trust the recipients not to disclose key personal information. But here’s the rub. You’ve made a contract with the recipient whereby they can use any and all of your personally identifiable information (PII) for any purpose that they choose. And as we have learned, many companies use this information in hideous ways.

Resist Becoming The Product

This will be hard for many people to hear: If you’re not paying for a service, then you shouldn’t be surprised when the service provider monetizes any and all information that you have willingly shared with them. GMail is a great service – paid for with you, your metadata, and every bit of content that you put into your messages. Facebook is phenomenal. But don’t be surprised when MarkeyZ sells you out.

Because of the lessons that I’ve learned in 2018, I’m starting a renewed push towards improving my privacy. Up until now, I’ve focused on security. I’ve used a commercial VPN and/or Tor to protect myself from ISP eavesdropping. I’ve built VPN servers for all of my clients. I’ve implemented two-factor authentication for as many of my logons as my service providers will support.

Crank It Up To Eleven

And now I have to step up my game.

  1. I must delete all of my social media accounts. That will be fairly simple as I’ve already gotten rid of Facebook/Instagram, Pinterest, and Twitter. Just a few more to go. I’m still debating about LinkedIn. I do pay for premium services. But I also know that Microsoft is selling my identity. For the moment, I will keep LinkedIn as it is my best vehicle for professional interactions.
  2. I may add a Facebook account for the business. Since many customers are on Facebook, I don’t want to abandon potential customers. But I will strictly separate my public business identity/presence from my personal identity/presence.
  3. I need to get off of Gmail. This one will be tougher than the first item. Most of my contacts know me from my GMail address (which I’ve used for over fifteen years). But I’ve already created two new email addresses (one for the business and one on ProtonMail). My current plan is to move completely off of GMail by the end of 1Q19.
  4. I am going to use secure browsing for almost everything. I’ve used ad-blockers both in the browser and at the DNS level. And I’ve used specific Firefox extensions for almost all of my other browsing activities. I will now try to use the Tor Browser exclusively, on a virtual machine (i.e., Whonix), and implement NoScript wherever I use that browser. Let’s hope that these things will really reduce my vulnerability on the Internet. I suspect that I will find some sites that just won’t work with Tor (or with NoScript). When I find such sites, I’ll have to intentionally choose whether to use the site unprotected or to set up a sandbox (and virtual identities) whenever I use it. Either way, I will run such sites from a VM – just to limit my exposure.
  5. I will block online trackers by default. Firefox helps. NoScript also helps. But I will start routinely using Privacy Badger and uMatrix as well. (A minimal sketch of how such blocking works follows this list.)
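
To illustrate what these blockers do under the hood, here is a minimal Python sketch of blocklist-based tracker blocking: every outgoing request is checked against a list of known tracking domains. The domains and URLs below are illustrative placeholders, not a curated blocklist; real tools like Privacy Badger and uMatrix add heuristics and per-site rules on top of this basic idea.

```python
# Minimal sketch of blocklist-based tracker blocking (illustrative only).
# The blocklist entries below are examples, not a maintained list.
from urllib.parse import urlparse

TRACKER_BLOCKLIST = {
    "doubleclick.net",        # example advertising/tracking domain
    "google-analytics.com",   # example analytics domain
    "facebook.net",           # example social-widget domain
}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host matches (or is a subdomain of) a blocked domain."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRACKER_BLOCKLIST)

for url in ("https://www.google-analytics.com/collect?v=1",
            "https://example.com/article.html"):
    print(url, "->", "BLOCKED" if is_blocked(url) else "allowed")
```
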
Bottom Line

In the final analysis, I am sure that there are some compromises that I will need to make. But changing my posture from trust to distrust and blocking all online trackers will be the hardest – and most rewarding – step that I can take towards protecting my privacy.