Mobile Privacy Demands Some Sacrifices

Managing Mobile Privacy

As noted previously, the effort to maintain anonymity while using the Internet is a never-ending struggle. We have been quite diligent about hardening our desktop and laptop systems. This included a browser change, the addition of several browser add-ons, the implementation of a privacy-focused DNS infrastructure, and the routine use of a VPN infrastructure. But while we focused upon the privacy of our static assets, our mobile privacy was still under siege.

Yes, we had done a couple of routine things (e.g., browser changes, add-ons, and use of our new DNS infrastructure). But we had not yet spent any focused time on improving the privacy of our handheld assets. So we have just finished spending a few days addressing quite a few items. We hope that these efforts will help to ensure enhanced mobile privacy.

Our Mobile Privacy Goals

Before outlining the key items that we accomplished, it is important to highlight our key goals:

  1. Start fresh. It would be nearly impossible to retrofit a hardened template onto an existing base – especially if you use a BYOD strategy. That’s because the factory images for most phones are designed to leverage existing tools – most of which exact an enormous price in terms of their privacy concessions.
  2. Decide whether you wish to utilize open source tools (that have been reviewed) or to trust the vendor of the applications which you will use. Yes, this is the Apple iOS v. Android issue. And it is a real decision. If it were just about cost, you would always choose the cheaper platform. But cost is not the only consideration.
  3. Accept the truth that becoming more private (and more anonymous) will require breaking the link to most Google tools. Few of us realize just how much data each and every mobile app collects. And on Android phones, this “tax” is quite high. For Apple phones, the Google “tax” is not as high. But that “good news” is offset by the “bad news” that Apple retains exclusive rights to most of its source code. Yes, the current CEO has promised to be good. [Note: But so did the original Google leaders. And as of today, Google has abandoned its promise to “do no evil”.] But what happens when Mr. Tim Cook leaves?
  4. Act on the truth of the preceding paragraph. That means exchanging Google Apps for apps that are more open and more privacy-focused. If you want to understand just how much risk you are accepting when using a stock Android phone, just install Exodus Privacy and see what your current apps can do. The terrifying truth is that we almost always click the “Allow” button when apps are installed. You must break that habit. And you must evaluate the merits of every permission request. Remember, the power to decide which apps you install is one of the greatest powers that you have. So don’t take it lightly.
  5. Be aware that Google is not the only company that wishes to use you (and your data) to add profits to their bottom line. Facebook does it. Amazon does it. Apple does it. Even Netflix does it. In fact, almost everyone does it. Can you avoid being exploited by unfeeling corporate masters? Sure, if you don’t use the Internet. But since that is unlikely, you should be aware that you are the most important product that most tech companies sell. And you must take steps to minimize your exploitation risk.
  6. If and where possible, we will host services on our own rather than rely upon unscrupulous vendors. Like most executives, I have tremendous respect for our partner providers. But not every company that we work with is a partner. Some are just vendors. And vendors are the ones who will either exploit your data or take no special interest in protecting your data. On the other hand, no one knows your business better than you do. And no one cares about your business as much as you do. So wherever possible, trust your own teams – or your valued (and trusted) partners.
Our Plan of Attack

With these principles in mind, here is our list of what we’ve done since last week:

    Update OS software for mobile devices
        Factory reset of all mobile devices
        SIM PIN
        Minimum 16-character device PIN
    Browser: Firefox & TOR Browser
    Search Providers: DuckDuckGo
    Browser Add-ons
        Content Blocking
            Ads: uBlock Origin
            Scripts: uMatrix
            Canvas Elements: Canvas Blocker
            WebRTC: Disable WebRTC
            CDN Usage: Decentraleyes
            Cookie Management: Cookie AutoDelete
        Isolation / Containers: Firefox Multi-Account Containers
    Mobile Applications
        Exodus Privacy
        Package Disabler Pro
        OpenVPN + VPN Provider S/W
        Eliminate Google Tools on Mobile Devices
            Google Search -> DuckDuckGo or SearX
            GMail -> K-9 Mail
            GApps -> "Simple" Tools
            Android Keyboard -> AnySoftKeyboard
            Stock Android Launcher -> Open Launcher
            Stock Android Camera -> Open Camera
            Stock Android Contacts / Dialer -> True Phone
            Google Maps -> OpenStreetMap (OSM)
            Play Store -> F-Droid + APKMirror
            YouTube -> PeerTube + ??? 
        Cloud File Storage -> SyncThing
Our Results

Implementing the above list took far more time than we anticipated. And some of these items come with caveats. For example, there is no clear competitor for YouTube. Yes, there are a couple of noteworthy challengers (e.g., PeerTube and D-Tube). But none have achieved feature sufficiency. So if you must use YouTube, then please do so in a secure browser.

You might quibble with some of the steps that we took. But we believe that we have a very strong case for each of these decisions and each of these steps. And I will gladly discuss the “why’s” for any of them – if you’re interested. Until then, we have “cranked it up to eleven”. We believe that we are in a better position regarding our mobile privacy. And after today, our current “eleven” will become the new ten! Continuous process improvement, for the win!

Long Past Time For Good Security Headers

The State of HTTP Security Headers

Over the past few months, I’ve focused my attention upon how you can be safer while browsing the Internet. One of the most important recommendations that I have made is for you to reduce (or eliminate) the loading and execution of unsafe content. So I’ve recommended ad blockers, a plethora of browser add-ons, and even the hardening of your premises-based services (e.g., routers, NAS systems, IoT devices, and DNS). Of course, this only addresses one side of the equation (i.e., the demand side). In order to improve the ‘total experience’ for your customers, you will also need to harden the services that you provide (i.e., the supply side). And one of the most often overlooked mechanisms for improvement is the proper use of HTTP security headers.

Background

According to the Open Web Application Security Project (OWASP), content injection is still the single largest class of vulnerabilities that content providers must address. When coupled with cross-site scripting (XSS), it is clear that hostile content poses an existential threat to many organizations. Yes, consumers must block all untrusted content as it arrives at their browser. But every site owner should first ensure that they inform every client about the content that they will be sending. Once these declarations are made, the client (i.e., the browser) can then act to trust or distrust the content that it receives.

The notion that a web site should declare the key characteristics of its content stream is nothing new. What we now call a content security policy (CSP) has been around for a very long time. Indeed, the fundamental descriptions of content security policies were discussed as early as 2004. And the first version of the CSP standard was published back in 2012.

CSP Standards Exist – But Are Not Universally Used

According to the WhiteHat Security 2018 “Website Security Statistics Report”, a number of industries still operate chronically vulnerable websites. WhiteHat estimates that 52% of Accommodations / Food Services websites are “Always Vulnerable”. Moreover, an additional 7% of these websites are “Frequently Vulnerable” (i.e., vulnerable for at least 263 days a year). Of course, that is the finding for one sector of the broader marketplace. But things are just as bad elsewhere. In the healthcare market, 50% of websites are considered “Always Vulnerable” with an additional 10% classified as “Frequently Vulnerable”.

Unfortunately, few websites actually use one of the most potent elements in their arsenal. Most website operators have established software upgrade procedures. And a large number of them have acceptable auditing and reporting procedures. But unless they are subject to regulatory scrutiny, few organizations have even considered implementing a real CSP.

Where To Start

So let’s assume that you run a small business. And you had your daughter/son, niece/nephew, friend of the family, or kid next door build your website. Chances are good that your website doesn’t have a CSP. To check this out for sure, you should go to https://securityheaders.com and see if you have appropriate security headers for your website.
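You can also get a quick sense of what such a scanner will flag by inspecting a site’s response headers yourself. Here is a minimal sketch (the domain in the usage comment is a placeholder, and the header list is only a sampling of what securityheaders.com actually grades):

```shell
# report_missing: read an HTTP response's headers on stdin and list
# which common security headers are absent.
report_missing() {
  resp="$(cat)"
  for h in Content-Security-Policy Strict-Transport-Security \
           X-Content-Type-Options X-Frame-Options Referrer-Policy; do
    printf '%s\n' "$resp" | grep -qi "^$h:" || echo "missing: $h"
  done
}

# Usage (assumes network access; example.com is a placeholder):
#   curl -sI https://example.com | report_missing
```

This only checks for the presence of the headers, not the quality of their values. But it is often enough to show that a site has no CSP at all.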

In my case, I found that my website security posture was unacceptably low. [Note: As a National Merit Scholar and Phi Beta Kappa member, anything below A+ is unacceptable.] Consequently, I looked into how I could get a better security posture. Apart from a few minor tweaks, my major problem was that I didn’t have a good CSP in place.

Don’t Just Turn On A Security Policy

Whether you code the security headers in your .htaccess file or you use software to generate the headers automatically, you will be tempted to just turn on a security policy. While that is a laudable sentiment, I urge you not to do this – unless your site is not live. Instead, make sure that you use your proposed CSP in “report only” mode – as a starting point.
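For an Apache-hosted site, the .htaccess entry for this starting point can be as simple as the sketch below (the reporting endpoint is hypothetical; you would need to stand one up or use a reporting service):

```apache
# Report-only: violations are logged to the endpoint, nothing is blocked.
# Requires mod_headers; /csp-reports is a placeholder endpoint.
Header set Content-Security-Policy-Report-Only "default-src 'self'; report-uri /csp-reports"
```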

Of course, I chose the engineer’s path and just set up a default-src directive to allow only local content. Realistically, I just wanted to see content blocked. So I activated my CSP in “blocking” mode (i.e., not “report only” mode). And as expected, all sorts of content was blocked – including the fancy sliders that I had implemented on my front page.

I quickly reset the policy to “report only” so that I could address the plethora of problems. And this time, I worked each problem one at a time. Surprisingly, it really did take some time. I had to determine which features came from which external sources. I then had to add these sources to the CSP. This process was very much like ‘whitelisting’ external sources in an ad blocker. But once I found all of the external sources, I enabled “blocking” mode. This time, my website functioned properly.
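Once the external sources are identified, the enforced policy simply enumerates them. A hypothetical end state (the CDN and font host names below are placeholders for whatever your theme actually loads) might look like:

```apache
# Enforced policy: only 'self' plus the explicitly whitelisted hosts.
Header set Content-Security-Policy "default-src 'self'; script-src 'self' https://cdn.example.net; style-src 'self' https://fonts.googleapis.com; font-src 'self' https://fonts.gstatic.com; img-src 'self' data:"
```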

Bottom Line

In the final analysis, I learned a few important things.

  1. Security headers are an effective means of informing client browsers about the characteristics of your content – and your content sources. Consequently, they are an excellent means of displaying your content whitelist to any potential customer.
  2. Few website builders automatically generate security headers. There is no “Great and Powerful Oz” who will code all of this from behind the curtain – unless you specifically pay someone to do it. Few hosting platforms do this by default.
  3. Tools do exist to help with coding security headers – and content security policies. In the case of WordPress, I used HTTP Headers (by Dimitar Ivanov).
  4. While no single security approach can solve all security issues, using security headers should be added to the quiver of tools that you use when addressing website content security.

Privacy 0.8 – My Never-ending Privacy Story

This Is The Song That Never Ends

Privacy protection is not a state of being; it is not a quantum state that needs to be achieved. It is a mindset. It is a process. And that process is never-ending. Like the movie from the eighties, the never-ending privacy story features an inquisitive yet fearful child. [Yes, I’m casting each of us in that role.] This child must assemble the forces of goodness to fight the forces of evil. [Yes, in this example, I’m casting the government and corporations in the role of evildoers. But bear with me. This is just story-telling.] The story will come to an end when the forces of evil and darkness are finally vanquished by the forces of goodness and light.

It’s too bad that life is not so simple.

My Never-ending Privacy Battle Begins

There is a tremendous battle going on. Selfish forces are seeking to strip us of our privacy while they sell us useless trinkets that we don’t need. There are a few people who truly know what is going on. But most folks only laugh whenever someone talks about “the great Nothing”. And then they see the clouds rolling in. Is it too late for them? Let’s hope not – because ‘they’ are us.

My privacy emphasis began a very long time ago. In fact, I’ve always been part of the security (and privacy) business. But my professional focus began with my first post-collegiate job. After graduation, I worked for the USAF on the Joint Cruise Missile program. My role was meager. In fact, I was doing budget spreadsheets using both Lotus 1-2-3 and the SAS FS-Calc program. A few years later, I remember when the first MIT PGP key server went online. But my current skirmishes with the forces of darkness started a few years ago. And last year, I got extremely serious about improving my privacy posture.

My gaze returned to privacy matters when I realized that my involvement on social media had invalidated any claims I could make about my privacy. So I decided to confront the 800-pound gorilla in the room.

My Never-ending Privacy Battle Restarts

Since then, I’ve deleted almost all of my social media accounts. Gone are Facebook, Twitter, Instagram, Foursquare, and a laundry list of other platforms. I’ve deleted (or disabled) as many Google apps as I can from my Android phone (including Google Maps). I’ve started my new email service – though the long process of deleting my GMail accounts will not end for a few months.

At the same time, I am routinely using a VPN. And as I’ve noted before, I decided to use NordVPN. I have switched away from Chrome and I’m using Firefox exclusively. I’ve also settled upon the key extensions that I am using. And at this moment, I am using the Tor browser about half of the time that I’m online. Finally, I’ve begun the process of compartmentalizing my online activities. My first efforts were to use containers within Firefox. I then started to use application containers (like Docker) for a few of my key infrastructure elements. And recently I’ve started to use virtual guests as a means of limiting my online exposure.

Never-ending Progress

But none of this should be considered news. I’ve written about this in the past. Nevertheless, I’ve made some significant progress towards my annual privacy goals. In particular, I am continuing my move away from Windows and towards open source tools/platforms. Indeed, this post is the first that I am publicly posting to my site from a virtual client – specifically, a Linux guest.

For some folks, this will be nothing terribly new. But for me, it marks a new high-water mark towards Windows elimination. As of yesterday, I access my email from Linux – not Windows. And I’m blogging on Linux – not Windows. I’ve hosted my Plex server on Linux – not Windows. So I think that I can be off of Windows by the end of 2Q19. And I will couple this with being off GMail by 4Q19.

Bottom Line

I see my goal on the visible horizon. I will meet my 2019 objectives. And if I’m lucky, I may even exceed them by finishing earlier than I originally expected. So what is the reward at the end of these goals? That’s simple. I get to set a new series of goals regarding my privacy.

At the beginning of this article, I said, “The story will come to an end when the forces of evil and darkness are finally vanquished by the forces of goodness and light.” But the truth is that the story will never end. There will always be individuals and groups who want to invade your privacy to advance their own personal (or collective) advantage. And the only way to combat this will be a never-ending privacy battle.

Secure File Transfer Ain’t So Easy

Secure File Sharing Ain’t So Easy

For years, businesses and governments have used secure file transfer to send sensitive files across the Internet. Their methods included virtual private networks, secure encrypted file transfer (sftp and ftps), and transfers of secure / encrypted files. Today, the “gold standard” probably includes all three of these techniques simultaneously.

But personal file transfer has been quite different. Most people simply attach an unencrypted file to an email message that is then sent across an unencrypted email infrastructure. Sometimes, people place an unencrypted file on a USB stick and perform a ‘secure file transfer’ by handing the stick to a known (and trusted) recipient. More recently, secure file transfer has often meant trusting a third-party data hosting provider. For many people, these kinds of transfers are secure enough.

Are Personal File Transfers Inherently Secure?

These kinds of transfers are NOT inherently secure.

  • In the case of email transfers, the only ‘secure’ element might be a user/password combination on the sender or receiver’s mailbox. Hence, the data may be secure while at rest. But Internet email is completely insecure while in transit. Some enterprising people have encrypted their messages (by using tools like PGP/GPG). Others have secured their SMTP connections across a VPN – or an entirely private network. Unfortunately, email is notorious for being sent across numerous relays – any one of which could forward messages insecurely or even read unencrypted messages. And there is very little validation performed on email metadata (e.g., no To: or From: field validation).
  • Placing a file on a USB stick is better than nothing. But there are a few key problems when using physical transfer. First, you have to trust the medium that is being used. And most USB devices can be picked up and whisked away without their absence even being noticed. Yes, you can use encryption to provide protection while the data is on the device. But most folks don’t do this. Second, even if the recipient treats the data with care, the data itself remains on an inherently mobile (and inherently less secure) medium.
  • Fortunately, modern users have learned not to use email and not to use physical media for secure file transfer. Instead, many people choose to use a cloud-based file hosting service. These services require logins to access the data. And some of these services even encrypt files while on their storage arrays. And if you’re really thorough when selecting your service provider, secure end-to-end transmission of your data may also be available. Of course, the weakest point of securing such transfers is the service provider. Because the data is at rest in their facility, they have the ability to compromise the data. So this model requires trusting a third-party to protect your assets. Yes, this is just like a bank that protects your demand deposits. But if you aren’t paying for a trustworthy partner, then don’t be surprised if they find some other means to monetize you and your assets.
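The PGP/GPG approach mentioned above remains the most portable fix for the email and USB cases: encrypt the file before it ever touches the transfer channel. Here is a minimal sketch using GnuPG’s symmetric mode (the file names and passphrase are placeholders; in practice you would exchange the passphrase out-of-band, or use public keys instead):

```shell
# Encrypt a file with a passphrase before sending it anywhere.
echo 'sensitive data' > report.txt
gpg --batch --yes --pinentry-mode loopback --symmetric --cipher-algo AES256 \
    --passphrase 'example-passphrase' report.txt      # writes report.txt.gpg

# The recipient decrypts with the same passphrase.
gpg --batch --yes --pinentry-mode loopback --decrypt \
    --passphrase 'example-passphrase' -o report.out report.txt.gpg
```

Even if the .gpg file rides an insecure relay or a lost USB stick, only the ciphertext is exposed.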
What Are The Characteristics of Secure File Transfers?

Secure file transfers should have the following characteristics:

  • The data being transferred should be encrypted by the originator and decrypted by the recipient.
  • Both the originator and the recipient should be authenticated before access is granted – either to the secure transport mechanism or to the data itself.
  • All data transfers must be secured from the originator to the recipient.
  • If possible, there should be no points of relay between the originator and the recipient OR there should be no requirements for a third-party to store and forward the complete message.
What Is Your Threat Model?

Are all of these characteristics required? The paranoid security analyst within me says, “Of course they are all required.” That same paranoid person would also add requirements concerning the strength of all of the ciphers that are to be used as well as the use of multi-factor authentication. But the requirements that you have should be driven by the threats that you are trying to mitigate – not by the coolest or most lauded technologies.

For most people, the threat that they are seeking to mitigate is one or more of the following: a) the seizure and exploitation of data by hackers, b) the seizure and exploitation of data by ruthless criminals and corporations, or c) the seizure and exploitation of data by an obsessive (and/or adversarial) governmental authority – whether foreign or domestic. Of course, some people are trying to protect against corporate espionage. Others are seeking to protect against hostile foreign actors. But for the sake of this discussion, I will be focusing upon the threat model encountered by typical Internet users.

Typical Threats For The Common American Family

While some of us do worry about national defense and corporate espionage, most folks are just trying to live their lives in obscurity – free to do the things that they enjoy and the things that they are called to do. They don’t want some opportunistic thief stealing their identity – and their family’s future. They don’t want some corporation using their browsing and purchasing habits in order to generate corporate ad revenue. And they don’t want a government that could obstruct their freedoms – even if it was meant in a benign (but just) cause.

So what does such a person need in a secure file transfer capability? First, they need file transfers to be encrypted – from their desk to the desk of the ultimate recipient. Second, they don’t want to “trust” any third-party to keep their data “safe”. Third, they want something that can use the Internet for transport – but do so in relative safety.

Enter Onionshare

Sharing files across the Internet both easily and securely is rather difficult. It can be done easily – via email, ftp, and cloud servers. It can be done reasonably securely – via encrypted email, secure ftp, p2p (e.g., BitTorrent), and even secure cloud services. But all of these secure solutions are relatively difficult to implement. What is needed is a simple tool. Onionshare is just such a tool.

Onionshare was developed by Micah Lee in 2014. It is an application that sets up a hidden service on the TOR network. TOR is a multi-layered encryption and routing tool that was originally developed by the U.S. Naval Research Laboratory. Today, it is the de facto reference implementation for secure, point-to-point connections across the Internet. And while it is not a strictly anonymous service, it offers a degree of anonymity that is well beyond the normal browsing experience. For a detailed description of Tor, take a look here. And for one of my first posts about TOR, look here.

Onionshare sets up a web server. It then establishes that server as a .onion service on the TOR network. The application then generates a page (and a URL) for that service. This URL points to a web page with the file(s) to be transferred. The person hosting the file(s) can then securely send the thoroughly randomized URL to the recipient. Once the recipient receives the URL, the recipient can download the file(s). After the secure file transfer is completed, the service is stopped – and the file(s) are no longer available on TOR.

Drawbacks

This secure file transfer model has a few key weaknesses. First and foremost, the URL that is presented must be sent securely to the recipient. This can be done via secure email (e.g., ProtonMail to ProtonMail) or via secure IM (e.g., Signal). But if the URL is sent via insecure methods, the data could potentially be hijacked by a hostile actor. Second, there is no authentication performed when the ‘recipient’ connects to the .onion service. Whoever first opens that URL in a TOR browser can access (and later delete) the file(s). So the security of the URL link is absolutely paramount. But as there are no known mechanisms to index hidden .onion services, this method is sufficient for most casual users who need to securely send sensitive data back-and-forth.

Bottom Line

If you want to securely send documents back-and-forth between yourself and other individuals, then Onionshare is a great tool. It works on Windows, MacOS, and a variety of Linux distros. And the only client requirement to use the temporary .onion server is a TOR-enabled browser. In short, this is about as ‘fire and forget’ as you could ever expect to find.

Reducing Threat Surface – Windows Minimization

Let Go of the Past

Last year, my household quit cable TV. The transition wasn’t without its hiccups. But leaving cable has had some great benefits. First, we are paying less money per month. Second, we are watching less TV per month. Third, I have learned a whole lot of things about streaming technologies and about over-the-air (OTA) TV options. Last year was also the year that I put a home automation program into effect. But both of these initiatives were done in 2018. Now I’ve decided that security and Windows minimization will be the key household technology initiatives for 2019.

How Big Is Your Threat Surface?

What is “Windows minimization”? That is simple. “Windows minimization” is the intentional reduction of Windows instances within your organization. Microsoft Windows used to be the platform for innovation and commercialization. Now it is the platform for running legacy systems. Like mainframes and mini-computers before them, Windows is no longer the “go to” platform for new development. C# and .Net are no longer the environment for new applications. And SQL Server never was the “go to” platform for most databases. And if you look at the total number of shipped operating systems, it is clear that Android and iOS have become the only significant operating systems on the mobile platform.

Nevertheless, Microsoft products remain the most vulnerable operating system products (based upon the total number of published CVE alerts). Adobe remains the most vulnerable “application” product family. But these numbers only reflect the total number of “announced” vulnerabilities. They don’t take the total number of deployed or exploited systems into account. Based upon deployed instances, Android and iOS remain the most exploited platforms.

Microsoft’s vulnerable status isn’t because their products are inherently less safe. To be candid, all networked computing platforms are unsafe. But given the previous predominance of Windows, Microsoft technologies were the obvious target for most malware developers.

Of course, Windows dominance is no longer the case. Most people do the majority of their casual computing on their phones – which use either Linux (Android) or Unix (iOS). And while Microsoft’s Azure platform is a fine web/cloud platform, most cloud deployments run on Linux and/or platforms like OpenStack or AWS. So the demand for Windows is declining while the security of all other platforms is rapidly improving.

The Real Reason For Migrating

It is possible to harden your Windows systems. And it is possible to fail to harden your Linux systems. However, it is not possible to easily port a product from one OS to another – unless the software vendor did that for you already. In most cases, if the product you want isn’t on the platform that you use, then you either need to switch your operating platform or you need to convince your software supplier to support your platform.

Heading To The Tipping Point

It is for this reason that I have undertaken this Windows minimization project. New products are emerging every day. Most of them are not on Windows. They are on alternative platforms. Every day, I find a new widget that won’t run on Windows. Of course, I can always run a different operating system on a Windows host. But once the majority of my applications run on Linux, then it will make more sense to run a Linux-hosted virtualization platform and host a Windows guest system for the legacy apps.

And I am rapidly nearing that point. My Home Assistant runs on a Raspberry Pi. It has eleven application containers running within Docker (on HassOS). My DNS system runs on a Raspberry Pi. My OpenVPN system is hosted on a Pi.

Legacy Anchors

But a large number of legacy components remain on Windows. Cindy and I use Microsoft Office for general documents – though PDF documents from LibreOffice are starting to increase their share of total documents created. My podcasting platform (for my as yet unlaunched podcast) runs on Windows. And my Plex Media Server (PMS) runs on Windows.

Fortunately, PMS runs on Linux. So I built an Ubuntu 18.10 system to run on VirtualBox. And just as expected, it works flawlessly. Yes, I had to figure a few things out along the way – like using the right CIFS file system to access my NAS. But once I figured these minor tweaks out, I loaded all of my movies onto the new Plex server. I fully expect that once I transition my remaining apps, I’ll turn my Windows Server into an Ubuntu 18.04 LTS server.
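For anyone attempting the same NAS hookup, the “right CIFS file system” boiled down to pinning an explicit SMB protocol version and mapping ownership in the mount options. A hypothetical /etc/fstab entry (the server name, share, mount point, and credentials file are all placeholders for your own environment):

```
# //<server>/<share>  <mount point>  cifs  <options>  <dump>  <pass>
//nas.local/media  /mnt/media  cifs  credentials=/etc/samba/nas.cred,vers=3.0,uid=plex,gid=plex,iocharset=utf8,ro  0  0
```

Mounting read-only (ro) is a deliberate choice here: the Plex server only needs to read the media library.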

Final Takeaways

I have taken my first steps. I’ve proven that Plex will run on Linux. I know that I can convert mobile print services from Windows to Linux. And I can certainly run miscellaneous apps (like TurboTax) on a Windows guest running on Linux. But I want to be sure before I convert my Windows server to Linux. So I will need to complete a software usage survey and build my data migration plan.

I wonder how long it will be before I flip the switch – once and for all.

Riotous Babel or Collaborative Bazaar

Matrix: Decentralized and Secure Collaboration

Every group has its own collection of stories. In the Judeo-Christian world, the Tower of Babel is one such story. It has come to symbolize both the error of hubris and the reality of human disharmony. Within the open source community, the story of the Cathedral and the Bazaar (a.k.a., CatB) is another such story. It symbolizes the two competing schools of software development. These schools are: 1) the centralized management of software by a priestly class (i.e., the cathedral), and 2) the decentralized competition found in the cacophonous bazaar. In the case of computer-based collaboration, it is hard to tell whether centralized overlords or a collaborative bazaar will eventually win.

Background

When I began my career, collaboration tools were intimate. You either discussed your thoughts over the telephone, you went to someone face-to-face, or you discussed the matter in a meeting. The sum total of tools available was the memorandum, the phone, and the meeting. Yes, the corporate world did have tools like PROFS and DISOSS. But both sets of tools were hamstrung either by their clumsiness (e.g., the computer “green screens”) or by the limitations of disconnected computer networks.

By the mid-eighties, there were dozens of corporate, academic, and public sector email systems. And there were just as many collaboration tools. Even the budding Internet had many different tools (e.g., sendmail, postfix, pine, elm).

The Riotous Babel

As my early career began to blossom (in the mid-nineties), I had the privilege of leading a bright team of professionals. Our fundamental mission was the elimination of corporate waste. And much of this waste came in the form of technological redundancy. So we consolidated from thirteen (13) different email clients to a single client. And we went from six (6) different email backbones to one backbone. At first, we chose to use proprietary tools to accomplish these consolidations. But over time, we moved towards more open protocols (like SMTP, X.500, and XMPP).

Since then, collaboration tools have moved from email and groupware tools (e.g., Lotus Notes) to web-based email and collaboration tools (e.g., Exchange and Confluence/Jira). Then the industry moved to “next-generation” web tools like Slack and even Discord. All of these “waves” of technology had one thing in common: they were managed by a centralized group of professionals who had arcane knowledge AND sufficient funding. Many of these tools relied upon open source components. But in almost every case, the total software solution had some “secret sauce” that ensured dominance through proprietary intellectual property.

The Times, They Are A Changing

Over the past few years, a new kind of collaboration tool has begun to emerge: the decentralized and loosely coupled system. The foremost tool of this kind is Matrix (and clients like Riot). In this model, messages flow between decentralized servers. Data sent between these servers is encrypted. And the set of data transferred between these servers is determined by the “interests” of local accounts/users. Currently, the directory for this network is centralized: a comprehensive ‘room’ directory is available at https://vector.im. But work is underway to build a truly decentralized authentication and directory system.

My Next Steps

One of the most exciting things about having a lab is that you get to experiment with new and innovative technologies. So when Franck Nijhof decided to add a Matrix server into the Hass.io Docker infrastructure, I leaped at the chance to experiment. As of last night, I added a Matrix instance to my Home Assistant system. After a few hours, I am quite confident that we will see Matrix (or a similar tool) emerge as an important part of the next wave of IoT infrastructure. But until then, I am thrilled that I can blend my past and my future – and do it through a collaborative bazaar.
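
For my instance, I used the Hass.io add-on. But as a rough sketch of what a standalone Synapse homeserver under Docker looks like (the image name, ports, and data path below reflect the commonly documented defaults – verify them against the Synapse documentation before use), a minimal compose file is roughly:

```yaml
version: "3"
services:
  synapse:
    image: matrixdotorg/synapse:latest   # official Synapse homeserver image
    restart: unless-stopped
    ports:
      - "8008:8008"    # client-server API (your Riot/Matrix client talks here)
      - "8448:8448"    # server-server API (federation with other homeservers)
    volumes:
      - ./synapse-data:/data             # homeserver.yaml, signing keys, database
```

The key design point is the second port: federation means every homeserver is both a client endpoint and a peer that exchanges (encrypted) room data with other homeservers.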

When Free (“as in VPN”) Isn’t Free!

Nothing of value is free
Nothing Of Value Is Free

The modern Internet is a dangerous place. [Note: It has always been ‘dangerous’. But now the dangers are readily apparent.] There are people and institutions that want to seize your private information and use it for their own advantage. You need look no further than Facebook (or China) to realize this simple fact. As a result of these assaults on privacy, many people are finally turning to VPN ‘providers’ as a means of improving their security posture. But free VPN services may not be so free.

Background

In the eighties, universities in the US (funded by the US federal government) and across the globe began to freely communicate – and to share the software that enabled these communications. This kind of collaboration helped to spur the development of the modern Internet. And in the nineties, free and open source software began to seize the imagination (and self-interest) of many corporations.

At that time, there were two schools of thought concerning free software: 1) The RMS school believed that software was totally free (“as in speech”) and should be treated as a community asset, and 2) The ESR school believed that open source was a technical means of accelerating the development of software and improving the quality of software. Both schools were founded upon the notion that free and open software was “‘free’ as in speech, not as in ‘beer’.” [Note: To get a good insight into the discussions of free software, I would encourage you to read The Cathedral and the Bazaar by Eric S. Raymond.]

While this debate raged, consumers had become accustomed to free and open software – when free meant “as in beer”. By using open source or shareware tools, people could get functional software without any licensing or purchasing fees. Some shareware developers nagged you for a contribution. Others just told you their story and let you install/use their product “as is”. So many computer consumers became junkies of the “free” stuff – much as customers of drug dealers (or cigarette companies) got hooked by freely distributed ‘samples’.

VPN Services: The Modern Analog

Today, consumers still love ‘free stuff’ – whether it is ‘free’ games for their phones, ‘free’ email services for their families (or their businesses), or ‘free’ security products (like free anti-virus and free anti-malware tools). And recently, free VPN services have begun to emerge. I first saw them delivered as a marketing tool. A few years ago, the Opera team bundled a free VPN with their product in the hopes that people would switch from IE (or Firefox) to Opera.

But free VPN services are now available everywhere. You can log into the App Store or the Play Store and find dozens of free VPN offers. So when people heard that VPN services offer encryption, and saw that ‘vetted’ VPN services (i.e., apps/services listed in their vendor’s app store) were available for free, they began to embrace these free VPN services.

Who Pays When Free VPN Isn’t Free?

But let’s dig into this a little. Does anyone really believe that free VPN services (or software) are free (i.e., “as in beer”)? To answer this question, we need only look to historical examples. Both FOSS and shareware vendors leveraged the ‘junkie’ impulse. If they could get you to start using their product, they could lock you into their ecosystem – thus guaranteeing massive collateral purchases. But their only costs were their time – measured in the labor that they poured into developing and maintaining their products.

Today, VPN service providers also have to recoup the costs of their infrastructure. This includes massive network costs, replicated hardware costs, and substantial management costs. So someone has to cover these costs. And is this done out of the goodness of their hearts? Hardly.

Only recently have we learned that free social media products are paid for through the resale of our own personal data. When things are ‘free’, we are the product being sold. So this fact raises the question: who is paying for this infrastructure when you aren’t paying for it?

Free – “As In ‘China’” – Paid For It

Recently, Top10VPN (a website operated by London-based Metric Labs Ltd) published a report about free VPN providers listed on the App Store and the Play Store. What they found is hardly surprising.

  • 59% of the apps (17 in total) had links to China
  • 86% of the apps had unacceptable privacy policies; issues included:
      • 55% of privacy policies were hosted in an amateur fashion (e.g., on free WordPress sites with ads)
      • 64% of apps had no dedicated website – several had no online presence beyond their app store listings
  • Over half (52%) of customer support emails were personal accounts (e.g., Gmail, Hotmail, Yahoo)
  • 83% of customer support email requests for assistance were ignored

Just because a VPN provider has sketchy operating practices or is somehow loosely associated with Chinese interests does not necessarily mean that the service is compromised. Nor does it mean that your identity has been (or will be) compromised. It does mean that you must double-check your free provider. And you need to know that free is never free. Know what costs you are bearing BEFORE you sign up for that free VPN.

William Chalk (published @ Hackernoon) may have said it best: “In allowing these opaque and unprofessional companies to host potentially dangerous apps in their stores, Google and Apple demonstrate a failure to properly vet the publishers utilizing their platform and curate the software promoted therein.” But resolution of these shortcomings is not up to Apple and Google. It is up to us. We must take action. First, we must tell Apple and Google just how disappointed we are with their product review processes. And then we must vote with our dollars – by using fee-based VPNs. Why? Because a free VPN may not ensure free speech.

Full Disclosure: I am a paid subscriber of a fee-based VPN service. And at this time, I trust my provider. But even I double-checked my provider after reading this article.

2019 Resolution #2: Blocking Online Trackers

The Myth of Online Privacy
Background

Welcome to the New Year. This year could be a banner year in the fight to ensure our online privacy. Before now, the tools of surveillance have overwhelmed the tools of privacy. And the perceived need for new Internet content has outweighed the real difficulty of protecting your online privacy. For years, privacy advocates (including myself) have chanted the mantra of exploiting public key encryption. We have told people to use Tor or a commercial VPN. And we have told people to start using two-factor authentication. But we have downplayed the importance of blocking online trackers. Yes, security and privacy advocates blocked trackers on their own systems. But most of us did not routinely recommend tracker blocking as a first step in protecting the privacy of our clients.

But the times are changing.

Last year (2018) was a pivotal time in the struggle between surveillance and privacy. The constant reporting of online hacks has risen to a deafening roar. And worse still, we saw the shepherds of our ‘trusted platforms’ go under the microscope. Whether it was Sundar Pichai of Google or Mark Zuckerberg of Facebook, we have seen tech leaders (and their technologies) revealed as base – and ultimately self-serving. Until last year, few of us realized that if we don’t pay for a service, then we are the product that the service owners are selling. But our eyes have now been pried open.

Encryption Is Necessary

Security professionals were right to trumpet the need for encryption. Whether you are sending an email to your grandmother or inquiring about the financial assets that you’ve placed into a banker’s hands, it is not safe to send anything in clear text. Think of it this way. Would you put your tax filing on a postcard so that the mail man – and every person and camera between you and the IRS – could see your financial details? Of course you wouldn’t. You’d seal it in an envelope. You might even hand deliver it to an IRS office. Or more recently, you might send your return electronically – with security protections in place to protect the key details of your finances.

But these kinds of protections are only partial steps. Yes, your information is secure from when it leaves your hands to when it enters the hands of the intended recipient. But what happens when the recipient gets your package of information?

Encryption Is Not Enough

Do the recipients just have your ‘package’ of data or do they have more? As all of us have learned, they most certainly have far more information. Yes, our ISP (i.e., the mail man) has no idea about the message. But what happens when the recipient at the other end of the pipe gets your envelope? They see the postmarks. They see the address. But they could also lift fingerprints from the envelope. And they can use this data. At the same time, by revealing your identity, you have provided the recipient with critical data that could be used to profile you, your friends and family, and even your purchasing habits.

So your safety hinges upon whether you trust the recipients to not disclose key personal information. But here’s the rub. You’ve made a contract with the recipient whereby they can use any and all of your personally identifiable information (PII) for any purpose that they choose. And as we have learned, many companies use this information in hideous ways.

Resist Becoming The Product

This will be hard for many people to hear: If you’re not paying for a service, then you shouldn’t be surprised when the service provider monetizes any and all information that you have willingly shared with them. GMail is a great service – paid for with you, your metadata, and every bit of content that you put into your messages. Facebook is phenomenal. But don’t be surprised when Mr. Zuckerberg sells you out.

Because of the lessons that I’ve learned in 2018, I’m starting a renewed push towards improving my privacy. Up until now, I’ve focused on security. I’ve used a commercial VPN and/or Tor to protect myself from ISP eavesdropping. I’ve built VPN servers for all of my clients. I’ve implemented two-factor authentication for as many of my logons as my service providers will support.

Crank It Up To Eleven

And now I have to step up my game.

  1. I must delete all of my social media accounts. That will be fairly simple as I’ve already gotten rid of Facebook/Instagram, Pinterest, and Twitter. Just a few more to go. I’m still debating about LinkedIn. I do pay for premium services. But I also know that Microsoft is selling my identity. For the moment, I will keep LinkedIn as it is my best vehicle for professional interactions.
  2. I may add a Facebook account for the business. Since many customers are on Facebook, I don’t want to abandon potential customers. But I will strictly separate my public business identity/presence from my personal identity/presence.
  3. I need to get off of Gmail. This one will be tougher than the first item. Most of my contacts know me from my GMail address (which I’ve used for over fifteen years). But I’ve already created two new email addresses (one for the business and one on ProtonMail). My current plan is to move completely off of GMail by the end of 1Q19.
  4. I am going to exclusively use secure browsing for almost everything. I’ve used ad-blockers both in the browser and at the DNS level. And I’ve used specific Firefox extensions for almost all of my other browsing activities. I will now try to use the Tor Browser exclusively, on a virtual machine (i.e., Whonix), and implement NoScript wherever I use that browser. Let’s hope that these things will really reduce my vulnerability on the Internet. I suspect that I will find some sites that just won’t work with Tor (or with NoScript). When I find such sites, I’ll have to intentionally choose whether to use the site unprotected or set up a sandbox (and virtual identities) whenever I use these sites. Either way, I will run such sites from a VM – just to limit my exposure.
  5. I will block online trackers by default. Firefox helps. NoScript also helps. But I will start routinely using Privacy Badger and uMatrix as well.
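
The mechanics behind items 4 and 5 are simpler than they sound. Tools like Privacy Badger, uMatrix, and DNS-level ad-blockers all reduce, at their core, to the same operation: match a requested domain (or any of its parent domains) against a blocklist. A minimal sketch, with an invented blocklist for illustration:

```python
# Minimal sketch of hosts-style tracker blocking: a request to a domain is
# blocked if the domain, or any parent domain, appears on a blocklist.
# The entries below are illustrative, not a real blocklist.
BLOCKLIST = {
    "tracker.example",
    "ads.example.net",
}

def is_blocked(domain: str) -> bool:
    """Return True if `domain` or any of its parent domains is blocklisted."""
    parts = domain.lower().rstrip(".").split(".")
    # Check "a.b.c", then "b.c", then "c".
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("pixel.tracker.example"))  # True – parent domain is listed
print(is_blocked("example.org"))            # False
```

The real tools layer heuristics on top (Privacy Badger, for instance, learns trackers by observing behavior rather than shipping a fixed list), but the suffix-match above is the foundation they share with DNS-based blocking.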
Bottom Line

In the final analysis, I am sure that there are some compromises that I will need to make. Changing my posture from trust to distrust and blocking all online trackers will be the hardest – and most rewarding – step that I can take towards protecting my privacy.

The Ascension of the Ethical Hacker

Hacker: The New Security Professional

Over the past year, I have seen thousands of Internet ads about obtaining an ‘ethical hacker’ certification. These ads (and the associated certifications) have been around for years. But I think that the notoriety of “Mr. Robot” has added sexiness (and legitimacy) to the title “Certified Ethical Hacker”. But what is an ‘ethical hacker’?

According to Dictionary.com, an ethical hacker is, “…a person who hacks into a computer network in order to test or evaluate its security, rather than with malicious or criminal intent.” Wikipedia has a much more comprehensive definition. But every definition revolves around taking an illegitimate activity (i.e., computer hacking) and making it honorable.

The History of Hacking

This tendency to lionize hacking began when Matthew Broderick fought against the WOPR in “WarGames”.  And the trend continued in the early nineties with the Robert Redford classic, “Sneakers”. In the late nineties, we saw Keanu Reeves as Neo (in “The Matrix”) and Gene Hackman as Edward Lyle (in “Enemy of the State”). But the hacker hero worship has been around for as long as there have been computers to hate (e.g., “Colossus: The Forbin Project”).

But as computer hacking has become routine (e.g., see “The Greatest Computer Hacks” on Lifewire), everyday Americans are now aware of their status as “targets” of attacks.  Consequently, most corporations are accelerating their investment in security – and in vulnerability assessments conducted by “Certified Ethical Hackers”.

So You Wanna Be A White Hat? Start Small

Increased corporate attacks result in increased corporate spending. And increased spending means that there is an ‘opportunity’ for industrious technicians. For most individuals, the cost of getting ‘certified’ (for CISSP and/or CEH) is out of reach. At a corporate scale, ~$15K for classes and a test is not very much to pay. But for gig workers, it is quite an investment. So can you start learning on your own?

Yes, you can start learning on your own. In fact, there are lots of ways to start learning. You could buy books. Or you could start learning by doing. This past weekend, I decided to up my game. I’ve done security architecture, design, and development for a number of years. But my focus has always been on intrusion detection and threat mitigation. It was obvious that I needed to learn a whole lot more about vulnerability assessment. But where would I start?

My starting point was to spin up a number of new virtual systems where I could test attacks and defenses. In the past, I would just walk into the lab and fire up some virtual machines on some of the lab systems. But now that I am flying solo, I’ve decided to do this the same way that hackers might do it: by using whatever I had at hand.

The first step was to set up VirtualBox on one of my systems/servers. Since I’ve done that before, it was no problem setting things up again. My only problem was that I did not have VT-x enabled on my motherboard. Once I enabled it (in the BIOS), things started to move rather quickly.

Then I had to start downloading (and building) appropriate OS images. My first test platform was Tails. Tails is a privacy-centered system that can be booted from a USB stick. My second platform was a Kali Linux instance. Kali is a fantastic pen testing platform – principally because it includes the Metasploit framework. I even decided to start building some attack targets. Right now, I have a VM for Raspbian (Linux on the Raspberry Pi), a VM for Debian Linux, one for Red Hat Linux, and a few for Windows targets. Now that the infrastructure is built, I can begin the learning process.
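
Each target VM follows the same registration recipe, so it is worth scripting. The sketch below only composes the `VBoxManage` argument lists (a dry run) so they can be reviewed before execution; the VM name, OS type, and sizes are illustrative assumptions, not the exact values I used:

```python
# Sketch: compose the VBoxManage commands used to register a new test VM.
# This is a dry run - it builds and prints the argument lists rather than
# executing them, so nothing is created until you review and run them.
def vm_commands(name: str, os_type: str = "Debian_64", mem_mb: int = 2048):
    return [
        # Register an empty VM with the given guest OS type.
        ["VBoxManage", "createvm", "--name", name, "--ostype", os_type, "--register"],
        # Give it memory and a NAT network interface.
        ["VBoxManage", "modifyvm", name, "--memory", str(mem_mb), "--nic1", "nat"],
        # Create a 10 GB virtual disk to install the target OS onto.
        ["VBoxManage", "createmedium", "disk",
         "--filename", f"{name}.vdi", "--size", "10240"],
    ]

for cmd in vm_commands("kali-test"):
    print(" ".join(cmd))
```

From there, attaching the disk and the installer ISO is a couple more `VBoxManage storagectl`/`storageattach` calls, after which the VM boots into the installer like any physical machine.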

Bottom Line

If you want to be an ethical hacker (or understand the methods of any hacker), then you can start without going to a class. Yes, it will be more difficult to learn by yourself. But it will be far less expensive – and far more memorable. Remember, you can always take the class later.

Do You Need A Residential Data Hub?

Data is essential for effective decision-making - even at home.
Residential Data Hubs: A Necessary Element @ Home

With more and more devices running in every home, it is becoming increasingly important to collect and manage all of the data that is available. Most people have no idea just how much data is currently being collected in their homes. But as the future arrives, almost every home will need to aggregate and assess data in order to make informed decisions and take informed actions. When that time arrives for you, you will need a “plug and play” residential data hub. Such devices will become an instrumental part of transforming your household into an efficient information processing system.

Currently, data is collected on your utility usage (e.g., electricity, water, Internet data usage, thermostat settings, etc). But few people realize that future homes will be collecting enormous amounts of data. We (at the Olsen residence and at Lobo Strategies) have exploited many of the new technologies that are part of the Internet of Things (IoT). Through this experience, it is apparent just how much data is now available. We are collecting data about where our family and team members are located. We are collecting data on the physical environment throughout our buildings – including temperature and occupancy. We are collecting information on the internal and external network resources being used by “the team.” And the amount of data being collected today will be dwarfed by the amount of data that will be collected in the next few years.

The Necessity Of Residential Data Hubs

Over the past six months, we have been assembling a huge portfolio of data sources.

  • We use our DNS server logs and firewall logs to collect access-related data.
  • The Home Assistant platform collects data about all of our IoT devices.  [Note: In the past month, we’ve begun consolidating all of our IoT data into a TICK (Telegraf, InfluxDB, Chronograf, Kapacitor) platform.]
  • Starting this week, we are now using router data to optimize bandwidth consumption.

While it is possible to manage each of these sources, it is taking quite a bit of “integration” (measured in many labor hours) to assemble and analyze this data. But we are now taking steps to assemble all of this data for easy analysis and decision-making.
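
As a small taste of what that “integration” labor looks like, even answering “which domains does this house query most?” means parsing raw DNS server logs. A sketch, using made-up dnsmasq-style log lines rather than our real logs:

```python
from collections import Counter

# Sample dnsmasq-style query lines (invented for illustration).
LOG_LINES = [
    "Jan  5 10:01:02 dnsmasq[711]: query[A] www.example.com from 192.168.1.10",
    "Jan  5 10:01:03 dnsmasq[711]: query[A] cdn.example.net from 192.168.1.11",
    "Jan  5 10:01:04 dnsmasq[711]: query[A] www.example.com from 192.168.1.12",
]

def top_domains(lines):
    """Count queried domains in dnsmasq 'query[A]' log lines."""
    counts = Counter()
    for line in lines:
        fields = line.split()
        if "query[A]" in fields:
            # The queried domain is the token right after 'query[A]'.
            counts[fields[fields.index("query[A]") + 1]] += 1
    return counts.most_common()

print(top_domains(LOG_LINES))  # [('www.example.com', 2), ('cdn.example.net', 1)]
```

Multiply this by every source (firewall logs, Home Assistant, router counters), each with its own format, and the labor-hours add up quickly – which is exactly the argument for a hub.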

Consolidating Router Data

Our ISP put us in a box: they offered us an Internet “data only” package at a seriously reduced price. But buried within the contract were express limits on bandwidth.  [Note: Our recent experience has taught us that our current ISP is not a partner; they are simply a service provider. And we will treat them as such in the future.] Due to their onerous actions, we are now on a much-needed content diet. And as of the beginning of the week, we have taken the steps needed to stay within the “hidden” limits that our ISP imposed.

Fortunately, our network architect (i.e., our beloved CTO) found the root cause of our excessive usage. He noted the recent changes approved by the premise CAB (i.e., our CTO’s beloved wife). And then he correlated this with the DNS log data that identified a likely source of our excess usage. This solved the immediate problem. But what about the irreversible corrective action?

And as of yesterday, we’ve also taken the steps needed for ongoing traffic analysis.

  1. We’ve leveraged our premise network decisions. We normally use residential-grade equipment in our remote locations. In candor, this hardware is comparable to its pricier, enterprise brethren. But the software has always suffered. Fortunately, we’ve used DD-WRT in every premise location. By doing this, we had a platform that we could build upon.
  2. The network team deployed remote access tools (i.e., ssh and samba) to all of our premise routers.
  3. A solid-state disk drive was formatted and then attached to the router’s USB 3.0 port. [Note: We chose a non-journaled filesystem (e.g., ext2) to avoid the excessive reads/writes that maintaining a journal would impose.]
  4. Once the hardware was installed, we deployed YAMon on the premise router.
  5. After configuring the router and YAMon software, we began long-term data collection.
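
The point of the long-term collection in step 5 is to catch an overage before the ISP bills for it. A sketch of the kind of check the collected data enables (the device names, byte counts, and 1 TB cap below are invented; YAMon’s actual data schema differs):

```python
# Sketch: flag month-to-date usage against a bandwidth cap, using invented
# per-device byte counts in place of YAMon's real per-device usage data.
USAGE = {
    "living-room-tv": 220_000_000_000,   # ~220 GB - the streaming suspect
    "office-laptop":   45_000_000_000,
    "iot-hub":          2_000_000_000,
}
CAP_BYTES = 1_000_000_000_000  # an illustrative 1 TB monthly cap

def usage_report(usage, cap):
    """Summarize total usage, percent of cap consumed, and the heaviest device."""
    total = sum(usage.values())
    return {
        "total_gb": round(total / 1e9, 1),
        "percent_of_cap": round(100 * total / cap, 1),
        "heaviest_device": max(usage, key=usage.get),
    }

print(usage_report(USAGE, CAP_BYTES))
```

A report like this, run daily, is what turns a root-cause hunt through DNS logs (as described above) into a routine glance at a dashboard.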

Next Steps

While the new network data collection is necessary, it is not a solution to the larger problem. Specifically, it is adding yet another data source (i.e., YADS). So what is now needed is a real nexus for all of the disparate data sources. We truly need a residential data hub. I need to stitch together the DNS data, the router data, and the IoT data into a single, consolidated system with robust out-of-the-box analysis tools.
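
At its core, the “stitching” is a join: the same device shows up in the DNS data, the router data, and the IoT data, and the hub merges those records into one view. A sketch, with all source records invented for illustration:

```python
# Sketch: consolidate per-device records from three sources into one view,
# keyed on device name. All data below is invented for illustration.
DNS_DATA    = {"office-laptop": {"queries": 1842}}
ROUTER_DATA = {"office-laptop": {"bytes_down": 45_000_000_000}}
IOT_DATA    = {"office-laptop": {"room": "office", "occupied": True}}

def consolidate(*sources):
    """Merge device-keyed records from many sources into a single hub view."""
    hub = {}
    for source in sources:
        for device, record in source.items():
            hub.setdefault(device, {}).update(record)
    return hub

hub = consolidate(DNS_DATA, ROUTER_DATA, IOT_DATA)
print(hub["office-laptop"])
```

The hard parts of a real hub are everything this sketch omits: agreeing on a device key across sources, aligning timestamps, and layering analysis tools on top of the merged store.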

I wonder if it is time to build just such a tool – as well as launch the services that go along with the product.