OSINT Techniques Are Often Necessary

OSINT sources are legion.

Over the past two quarters, we’ve focused upon the technologies and practices that help to establish (and maintain) an effective privacy posture. We’ve recommended ceasing almost all personal activity on social media. But the work of ensuring personal privacy cannot end there. Our adversaries are numerous – and they counter every defensive action that we take with increasingly devastating offensive tools and techniques. While the tools of data capture are proliferating, so are the tools for data analysis. Using open source intelligence (OSINT) tools, it is possible to transform vast piles of data into meaningful and actionable chunks of information. For this reason, our company has extended its security and privacy focus to include the understanding and the use of OSINT techniques.

Start At the Beginning

For countless generations, a partner was someone that you knew. You met them. You could shake their hand. You could see their smiling face. You knew what they could do. And you probably even knew how they did it. In short, you could develop a trust-based relationship founded upon mutual knowledge and relative proximity. It is no coincidence that our spouses are also known as our ‘partners’, as we can be honest and forthcoming about our goals and desires with them. We can equitably (and even happily) share the burdens that will help us to achieve our shared goals.

But that kind of relationship is no longer the norm in modern business. Most of our partners (and providers) work with us from behind a phone or within a computer screen. We may know their work product. But we have about as much of a relationship with them as we do with those civil servants who work at the DMV.

So how can we know if we should trust an unknown partner?

A good privacy policy is an essential starting point in any relationship. But before we partner with anyone, we should know exactly how they will use any data that we share with them. So our first rule is simple: before sharing anything, we must ensure the existence of (and adherence to) a good privacy policy. No policy? No partnership. Simple, huh?

That sounds all well and good. But do you realize just how much data you share without your knowledge or explicit consent? If you really want to know the truth, read the end user license agreements (EULAs) from your providers. What you will usually find is a blanket authorization for them to use any and all data that is provided to them. This certainly includes names, physical addresses, email addresses, birth dates, mothers’ maiden names, and a variety of other data points. If you don’t believe me (or you don’t read the EULA documents that you probably click past), then just use a search engine and enter your name in the search window. There will probably be hundreds of records that pertain to you.

But if you really want to open your eyes, just dig a little deeper to find that every government document pertaining to you is a public record. And all public records are publicly indexed. So every time that you pass a toll and use your electronic pass, your location (and velocity) data is collected. And every purchase that you make with a credit card is logged.

Know the difference between a partner and a provider!

A partner is someone that you trust. A provider is someone that provides something to/for you. Too often, we treat providers as if they were partners. If you don’t believe that, then answer this simple question: Is Facebook a partner in your online universe? Or are they just someone who seeks to use you for their click bait (and revenue)?

A partner is also someone that you know. If you don’t know them, they are not a partner. If you don’t implicitly trust them, then why are you sharing so much of your life with them?

Investigate And Evaluate Every Potential Partner!

If you really need a partner to work with and you don’t already trust someone to do the work, then how do you determine whether someone is worth trusting? I would tell you to use the words of former President Ronald Reagan as a guide: trust but verify. And how do you verify a potential partner? You learn about them. You investigate them. You speak with people that know them. In short, you let their past actions be a guide to how they will make future decisions. And for the casual investigation, you should probably start using OSINT techniques to assess your partner candidates.

What are OSINT techniques?

According to the SecurityTrails blog, “Open source intelligence (OSINT) is information collected from public sources such as those available on the Internet, although the term isn’t strictly limited to the internet, but rather means all publicly available sources.” The key is that OSINT consists of readily available intelligence data. So sites like SecurityTrails and Michael Bazzell’s IntelTechniques are fantastic sources for tools and techniques that can collect immense volumes of OSINT data and then reduce it into usable information.

So what is the cost of entry?

OSINT techniques can be used at little to no cost. As a security researcher, you need a reasonable laptop (with sufficient memory) in order to run tools like Maltego. And most of the OSINT tools run on either Kali Linux or Buscador. While some sources of data are free, some of the best sources do require an active subscription to access their data. But the software itself is almost always open source (and hence readily available). So for a few hundred dollars, you can start doing some pretty sophisticated OSINT investigations.
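If you want a feel for the zero-cost end of that spectrum before spending anything, many basic lookups already ship with Kali (or with almost any Linux distribution). Here is a minimal sketch of the kind of reconnaissance you can do from a shell – the domain is just a placeholder for whatever you are investigating:

    # Who registered the domain, when, and through which registrar?
    whois example.com

    # Which mail, name-server, and text records does the domain publish?
    dig example.com MX +short
    dig example.com NS +short
    dig example.com TXT +short

    # What does the web site voluntarily reveal in its HTTP response headers?
    curl -sI https://example.com

Every one of these commands only touches public records. That is the whole point of OSINT: the data is already out there.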

Protection Against OSINT Investigations

OSINT techniques are amazing – when you use them to conduct an investigation. But they can be positively terrifying when you are the subject of such an investigation. So how can you limit your exposure from potential OSINT investigations?

One of the simplest steps that you can take is to use an operating system designed to protect your privacy. As noted previously, we recommend the use of Linux as a foundation. Further, we recommend using Qubes OS for most of your public ‘surfing’ needs. [We also recommend TAILS on a USB key whenever you are using communal computers.]
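If you do keep a TAILS stick handy for communal computers, writing one takes only a couple of commands. This is a minimal sketch: the image name and the /dev/sdX device are placeholders, and dd will overwrite whatever device you point it at, so double-check the target first.

    # Confirm which device is the USB stick before writing anything
    lsblk

    # Write the downloaded TAILS image to the stick and flush the buffers
    sudo dd if=tails.img of=/dev/sdX bs=4M status=progress
    sync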

Using OSINT To Determine Your Personal Risk

While you can minimize your future exposure to investigations, you first need to determine just how long a shadow you currently cast. The best means of assessing that shadow is to use OSINT tools and techniques to assess yourself. A simple Google search told me a lot about my career. Of course, much of this was easily culled from LinkedIn. But it was nice to see that a simple name search highlighted important (and positive) things that I’ve accomplished.

And then I started to use Maltego to find out about myself. I won’t go into too much detail. But the information that I could easily unearth was altogether startling. For example, I easily found out about past property holdings – and past legal entanglements related to a family member. There was nothing too exciting in my recorded past. And while that fact alone was a little discouraging, the more troubling realization was that I could find all of these things with little or no effort.

I had hoped that discovering this stuff would be like the efforts which my wife took to unearth our ancestral heritage: difficult and time-consuming. But it wasn’t. I’m sure that it would take some serious digging to find anything that is intentionally hidden. But it takes little or no effort to find out some privileged information. And the keys to unlocking these doors are the simple pieces of data that we so easily share.

Clean Up Your Breadcrumbs

As the children in the fairy tale discovered, a trail of breadcrumbs can be followed. So if you want to be immune from casual and superficial searches, then you need to take the information that is casually available and start to clean it up. With each catalogued disclosure, you can contact the data source and request that this data be obscured and not disclosed. With enough diligence, it is possible to clean up the info that you’ve casually strewn in your online wake. And if the task seems altogether too daunting, there are companies (and individuals) who will gladly assist you in your efforts to minimize your online footprints.

Bottom Line

As we use the internet, we invariably drop all sorts of breadcrumbs. And these breadcrumbs can be used for many things. On the innocuous end of the scale, vendors can target you with ads that you don’t want to see. But at the other end of the scale is the opportunity to leverage your past in order to redirect your future. It sounds innocuous when stated like that. So let’s call a spade a spade. There is plenty of information that can be used for kidnapping your data and for “influencing” (i.e., extorting) you. But if you use OSINT techniques to your advantage, then you can identify your risks and you can limit your vulnerabilities. And the good news is that it will only cost you a few shekels – while doing nothing could cost you thousands of shekels.

Is Transitive Trust A Worthwhile Gamble?

When I started to manage Windows systems, it was important to understand the definition of ‘transitive trust’. For those not familiar with the technical term, here is the ‘classic’ definition:

Transitive trust is a two-way relationship automatically created between parent and child domains in a Microsoft Active Directory forest. When a new domain is created, it shares resources with its parent domain by default, enabling an authenticated user to access resources in both the child and parent.

But this dry definition misses the real point. A transitive trust relationship (of any kind) is a relationship where you trust some ‘third-party’ because someone that you do trust also trusts that same ‘third-party’. This definition is also rather dry. But let’s look at an example. My customers (hopefully) trust me. And if they trust me enough, then they also trust my choices concerning other groups that help me to deliver my services to them. In short, they transitively trust my provider network because they trust me.

That all sounds fine. But what happens if your suppliers break your trust? Should your customers stop trusting you? Recently, this very situation occurred between Capital One, their customers, and some third-party technology providers (like Amazon and their AWS platform).

Trust: Hard to Earn – Easy to Lose

Unfortunately, the Amazon AWS technology platform was compromised. So Capital One should legitimately stop trusting Amazon (and its AWS platform). This should remain true until Amazon verifiably addresses the fundamental causes of this disastrous breach. But what should Capital One’s customers do? [Note: I must disclose that I am a Capital One customer. Therefore, I may be one of their disgruntled customers.]

Most people will blame Capital One. Some will blame them for a lack of technical competence. And that is reasonable as Capital One is reaping financial benefits from their customers and from their supplier network. Many other people will blame the hacker(s). It’s hard not to fume when you realize that base individuals are willing to take advantage of you solely for their own benefit. Unfortunately, only a few people will realize that the problem is far more vexing.

Fundamentally, Capital One trusted a third-party to deliver services that are intrinsic to their core business. Specifically, Capital One offered a trust relationship to their customers. And their customers accepted that offer. Then Capital One chose to use an external platform simply to cut corners and/or deliver features that they were unable to deliver on their own. And apparently that third-party was less capable than Capital One assumed.

Regaining Trust

When a friend or colleague breaks your trust, you are wounded. And in addition to this emotional response, you probably take stock of whether to continue that relationship. You undoubtedly perform an internal risk/reward calculation. And then you add the emotional element of whether this person would act in a more trustworthy fashion in the future. If our relationships with companies were less intimate, then most people would simply jettison their failed provider. But since we build relationships on a more personal footing, most people will want to give their friend (or their friendly neighborhood Bailey Building & Loan) the benefit of the doubt.

So what should Capital One do? First, they must accept responsibility for their error in judgment. Second, they must pay for the damages that they have caused. [Note: Behind the scenes, they must bring the hammer to their supplier.] Third, they must rigorously assess what really led to these problems. And fourth, they must take positive (and irreversible) steps to resolve the root cause of this matter.

Of course, the last piece is the hardest. Oftentimes, the root cause is difficult to sort out given all of the silt that was stirred up in the delta when the hurricane passed through. Some people will blame the Capital One culture. And there is merit to this charge. After all, the company did trust others to protect the assets of their customers. As a bank, the fundamental job is to protect customer assets. Only when that is done should the bank use the entrusted funds to generate a shared profit for its owners (i.e., shareholders) and its customers.

Trust – But Verify

At the height of the Cold War, President Ronald Reagan exhorted the nation to trust – but also to verify – the claims of a long-standing adversary. In the case of Capital One, we should do the very same thing. We should trust them to act in their own selfish interests, because serving our interests is the only way that they can achieve their own.

That means that we must be part of a robust and two-way dialog with Capital One and their leadership. Will Capital One be big enough to do this? That’s hard to say. But if they don’t, they will never be able to buy back our trust.

Finally, we have to be bold enough to seek verification. As President Reagan said, “You can’t just say ‘trust me’. Trust must be earned.”

Long Past Time For Good Security Headers

The State of HTTP Security Headers

Over the past few months, I’ve focused my attention upon how you can be safer while browsing the Internet. One of the most important recommendations that I have made is for you to reduce (or eliminate) the loading and execution of unsafe content. So I’ve recommended ad blockers, a plethora of browser add-ons, and even the hardening of your premises-based services (e.g., routers, NAS systems, IoT devices, and DNS). Of course, this only addresses one side of the equation (i.e., the demand side). In order to improve the ‘total experience’ for your customers, you will also need to harden the services that you provide (i.e., the supply side). And one of the most often overlooked mechanisms for improvement is the proper use of HTTP security headers.

Background

According to the Open Web Application Security Project (OWASP), content injection is still the single largest class of vulnerabilities that content providers must address. When coupled with cross-site scripting (XSS), it is clear that hostile content poses an existential threat to many organizations. Yes, consumers must block all untrusted content as it arrives at their browser. But every site owner should first ensure that they inform every client about the content that they will be sending. Once these declarations are made, the client (i.e., the browser) can then act to trust or distrust the content that it receives.

The notion that a web site should declare the key characteristics of its content stream is nothing new. What we now call a content security policy (CSP) has been around for a very long time. Indeed, the fundamental descriptions of content security policies were discussed as early as 2004. And the first version of the CSP standard was published back in 2012.

CSP Standards Exist – But Are Not Universally Used

According to the White Hat 2018 “Website Security Statistics Report”, a number of industries still operate chronically vulnerable websites. White Hat estimates that 52% of Accommodations / Food Services websites are “Always Vulnerable”. Moreover, an additional 7% of these websites are “Frequently Vulnerable” (i.e., vulnerable for at least 263 days a year). Of course, that is the finding for one sector of the broader marketplace. But things are just as bad elsewhere. In the healthcare market, 50% of websites are considered “Always Vulnerable”, with an additional 10% classified as “Frequently Vulnerable”.

Unfortunately, few websites actually use one of the most potent elements in their arsenal. Most website operators have established software upgrade procedures. And a large number of them have acceptable auditing and reporting procedures. But unless they are subject to regulatory scrutiny, few organizations have even considered implementing a real CSP.

Where To Start

So let’s assume that you run a small business. And you had your daughter/son, niece/nephew, friend of the family, or kid next door build your website. Chances are good that your website doesn’t have a CSP. To check this out for sure, you should go to https://securityheaders.com and see if you have appropriate security headers for your website.
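If you prefer the command line, you can also pull the response headers directly and see what your site is (or is not) sending. A minimal sketch – substitute your own domain for example.com:

    # Fetch only the response headers (follow any redirects along the way)
    curl -sIL https://example.com

    # Narrow the output to the common security headers
    curl -sIL https://example.com | grep -iE 'content-security-policy|strict-transport-security|x-frame-options|x-content-type-options|referrer-policy'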

In my case, I found that my website security posture was unacceptably low. [Note: As a National Merit Scholar and Phi Beta Kappa member, anything below A+ is unacceptable.] Consequently, I looked into how I could get a better security posture. Apart from a few minor tweaks, my major problem was that I didn’t have a good CSP in place.

Don’t Just Turn On A Security Policy

Whether you code the security headers in your .htaccess file or you use software to generate the headers automatically, you will be tempted to just turn on a security policy. While that is a laudable sentiment, I urge you not to do this – unless your site is not live. Instead, make sure that you use your proposed CSP in “report only” mode – as a starting point.
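If you manage your headers through an Apache .htaccess file, a report-only policy is a single directive. Here is a minimal sketch – the report-uri endpoint is a placeholder for wherever you want violation reports to land:

    # Declare the policy in report-only mode: nothing is blocked yet, but every
    # violation is reported so you can see what enforcement would break.
    <IfModule mod_headers.c>
      Header set Content-Security-Policy-Report-Only "default-src 'self'; report-uri /csp-reports"
    </IfModule>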

Of course, I chose the engineer’s path and just set up a default-src directive to allow only local content. Realistically, I just wanted to see content blocked. So I activated my CSP in “blocking” mode (i.e., not “report only” mode). And as expected, all sorts of content was blocked – including the fancy sliders that I had implemented on my front page.

I quickly reset the policy to “report only” so that I could address the plethora of problems. And this time, I worked each problem one at a time. Surprisingly, it really did take some time. I had to determine which features came from which external sources. I then had to add these sources to the CSP. This process was very much like ‘whitelisting’ external sources in an ad blocker. But once I found all of the external sources, I enabled “blocking” mode. This time, my website functioned properly.
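Once the report-only phase has surfaced all of the legitimate external sources, the enforced policy simply enumerates them. The sketch below is hypothetical – the domains stand in for whatever slider, font, and analytics providers your own site actually uses:

    # Enforced policy: local content plus the external sources discovered
    # during the report-only phase.
    <IfModule mod_headers.c>
      Header set Content-Security-Policy "default-src 'self'; script-src 'self' https://cdn.example.net; style-src 'self' https://fonts.googleapis.com; font-src 'self' https://fonts.gstatic.com; img-src 'self' data:"
    </IfModule>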

Bottom Line

In the final analysis, I learned a few important things.

  1. Security headers are an effective means of informing client browsers about the characteristics of your content – and your content sources. Consequently, they are an excellent means of displaying your content whitelist to any potential customer.
  2. Few website builders automatically generate security headers. There is no “Great and Powerful Oz” who will code all of this from behind the curtains – unless you specifically pay someone to do it. Few hosting platforms do this by default.
  3. Tools do exist to help with coding security headers – and content security policies. In the case of WordPress, I used HTTP Headers (by Dimitar Ivanov).
  4. While no single security approach can solve all security issues, using security headers should be added to the quiver of tools that you use when addressing website content security.

Privacy 0.8 – My Never-ending Privacy Story

This Is The Song That Never Ends

Privacy protection is not a state of being; it is not a quantum state that needs to be achieved. It is a mindset. It is a process. And that process is never-ending. Like the movie from the eighties, the never-ending privacy story features an inquisitive yet fearful child. [Yes, I’m casting each of us in that role.] This child must assemble the forces of goodness to fight the forces of evil. [Yes, in this example, I’m casting the government and corporations in the role of evildoers. But bear with me. This is just story-telling.] The story will come to an end when the forces of evil and darkness are finally vanquished by the forces of goodness and light.

It’s too bad that life is not so simple.

My Never-ending Privacy Battle Begins

There is a tremendous battle going on. Selfish forces are seeking to strip us of our privacy while they sell us useless trinkets that we don’t need. There are a few people who truly know what is going on. But most folks only laugh whenever someone talks about “the great Nothing”. And then they see the clouds rolling in. Is it too late for them? Let’s hope not – because ‘they’ are us.

My privacy emphasis began a very long time ago. In fact, I’ve always been part of the security (and privacy) business. But my professional focus began with my first post-collegiate job. After graduation, I worked for the USAF on the Joint Cruise Missile program. My role was meager. In fact, I was doing budget spreadsheets using both Lotus 1-2-3 and the SAS FS-Calc program. A few years later, I remember when the first MIT PGP key server went online. But my current skirmishes with the forces of darkness started a few years ago. And last year, I got extremely serious about improving my privacy posture.

My gaze returned to privacy matters when I realized that my involvement on social media had invalidated any claims I could make about my privacy. So I decided to confront the 800-pound gorilla in the room.

My Never-ending Privacy Battle Restarts

Since then, I’ve deleted almost all of my social media accounts. Gone are Facebook, Twitter, Instagram, Foursquare, and a laundry list of other platforms. I’ve deleted (or disabled) as many Google apps as I can from my Android phone (including Google Maps). I’ve started my new email service – though the long process of deleting my GMail accounts will not end for a few months.

At the same time, I am routinely using a VPN. And as I’ve noted before, I decided to use NordVPN. I have switched away from Chrome and I’m using Firefox exclusively. I’ve also settled upon the key extensions that I am using. And at this moment, I am using the Tor browser about half of the time that I’m online. Finally, I’ve begun the process of compartmentalizing my online activities. My first efforts were to use containers within Firefox. I then started to use application containers (like Docker) for a few of my key infrastructure elements. And recently I’ve started to use virtual guests as a means of limiting my online exposure.
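For anyone wondering what that compartmentalization looks like in practice, a disposable container is a one-line affair. This is only a sketch, using a stock Debian image as the throwaway environment:

    # Start a throwaway shell; --rm discards the container (and everything done
    # inside it) the moment the shell exits.
    docker run --rm -it debian:stable bash

    # The same idea with no network access at all, for offline work
    docker run --rm -it --network none debian:stable bash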

Never-ending Progress

But none of this should be considered news. I’ve written about this in the past. Nevertheless, I’ve made some significant progress towards my annual privacy goals. In particular, I am continuing my move away from Windows and towards open source tools/platforms. In fact, this post will be the first time that I am publicly posting to my site from a virtual client. Specifically, I am using a Linux guest for this post.

For some folks, this will be nothing terribly new. But for me, it marks a new high-water mark towards Windows elimination. As of yesterday, I access my email from Linux – not Windows. And I’m blogging on Linux – not Windows. I’ve hosted my Plex server on Linux – not Windows. So I think that I can be off of Windows by the end of 2Q19. And I will couple this with being off GMail by 4Q19.

Bottom Line

I see my goal on the visible horizon. I will meet my 2019 objectives. And if I’m lucky, I may even exceed them by finishing earlier than I originally expected. So what is the reward at the end of these goals? That’s simple. I get to set a new series of goals regarding my privacy.

At the beginning of this article, I said, “The story will come to an end when the forces of evil and darkness are finally vanquished by the forces of goodness and light.” But the truth is that the story will never end. There will always be individuals and groups who want to invade your privacy to advance their own personal (or collective) advantage. And the only way to combat this will be a never-ending privacy battle.

Secure File Transfer Ain’t So Easy

Secure File Sharing Ain’t So Easy

For years, businesses and governments have used secure file transfer to send sensitive files across the Internet. Their methods included virtual private networks, secure encrypted file transfer (sftp and ftps), and transfers of secure / encrypted files. Today, the “gold standard” probably includes all three of these techniques simultaneously.

But personal file transfer has been quite different. Most people simply attach an un-encrypted file to an email message that is then sent across an un-encrypted email infrastructure. Sometimes, people place an un-encrypted file on a USB stick. These people perform a ‘secure file transfer’ by handing the USB stick to a known (and trusted) recipient. More recently, secure file transfers could be characterized by trusting a third-party data hosting provider. For many people, these kinds of transfers are secure enough.

Are Personal File Transfers Inherently Secure?

These kinds of transfers are NOT inherently secure.

  • In the case of email transfers, the only ‘secure’ element might be a user/password combination on the sender or receiver’s mailbox. Hence, the data may be secure while at rest. But Internet email is completely insecure while in transit. Some enterprising people have employed secure messaging tools (like PGP/GPG) – see the sketch after this list. Others have secured their SMTP connections across a VPN – or an entirely private network. Unfortunately, email is notorious for being sent across numerous relays – any one of which could forward messages insecurely or even read un-encrypted messages. And there is very little validation performed on email metadata (e.g., no To: or From: field validation).
  • Placing a file on a USB stick is better than nothing. But there are a few key problems when using physical transfer. First, you have to trust the medium that is being used. And most USB devices can be picked up and whisked away without their absence even being noticed. Yes, you can use encryption to provide protection while the data is on the device. But most folks don’t do this. Second, even if the recipient treats the data with care, the data itself remains on an inherently mobile (and inherently less secure) medium.
  • Fortunately, modern users have learned not to use email and not to use physical media for secure file transfer. Instead, many people choose to use a cloud-based file hosting service. These services require logins to access the data. And some of these services even encrypt files while on their storage arrays. And if you’re really thorough when selecting your service provider, secure end-to-end transmission of your data may also be available. Of course, the weakest point of securing such transfers is the service provider. Because the data is at rest in their facility, they would have the ability to compromise the data. So this model requires trusting a third-party to protect your assets. Yes, this is just like a bank that protects your demand deposits. But if you aren’t paying for a trustworthy partner, then don’t be surprised if they find some other means to monetize you and your assets.
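For the email and USB scenarios above, even a little encryption goes a long way. Here is a minimal GPG sketch; it assumes the recipient’s public key has already been imported, and both the address and the file name are placeholders:

    # Encrypt the file for a specific recipient; only their private key can decrypt it
    gpg --encrypt --recipient alice@example.com report.pdf
    # This produces report.pdf.gpg, which is reasonably safe to attach to an email
    # or carry on a USB stick.

    # The recipient decrypts it on the other end
    gpg --output report.pdf --decrypt report.pdf.gpg
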
What Are The Characteristics of Secure File Transfers?

Secure file transfers should have the following characteristics:

  • The data being transferred should be encrypted by the originator and decrypted by the recipient.
  • Both the originator and the recipient should be authenticated before access is granted – either to the secure transport mechanism or to the data itself.
  • All data transfers must be secured from the originator to the recipient.
  • If possible, there should be no points of relay between the originator and the recipient OR there should be no requirements for a third-party to store and forward the complete message.
What Is Your Threat Model?

Are all of these characteristics required? The paranoid security analyst within me says, “Of course they are all required.” That same paranoid person would also add requirements concerning the strength of all of the ciphers that are to be used as well as the use of multi-factor authentication. But the requirements that you have should be driven by the threats that you are trying to mitigate – not by the coolest or most lauded technologies.

For most people, the threat that they are seeking to mitigate is one or more of the following: a) the seizure and exploitation of data by hackers, b) the seizure and exploitation of data by ruthless criminals and corporations, or c) the seizure and exploitation of data by an obsessive (and/or adversarial) governmental authority – whether foreign or domestic. Of course, some people are trying to protect against corporate espionage. Others are seeking to protect against hostile foreign actors. But for the sake of this discussion, I will be focusing upon the threat model encountered by typical Internet users.

Typical Threats For The Common American Family

While some of us do worry about national defense and corporate espionage, most folks are just trying to live their lives in obscurity – free to do the things that they enjoy and the things that they are called to do. They don’t want some opportunistic thief stealing their identity – and their family’s future. They don’t want some corporation using their browsing and purchasing habits in order to generate corporate ad revenue. And they don’t want a government that could obstruct their freedoms – even if it was meant in a benign (but just) cause.

So what does such a person need in a secure file transfer capability? First, they need file transfers to be encrypted – from their desk to the desk of the ultimate recipient. Second, they don’t want to “trust” any third-party to keep their data “safe”. Third, they want something that can use the Internet for transport – but do so in relative safety.

Enter Onionshare

It is surprisingly hard to share files across the Internet in a way that is both easy and secure. It can be done easily – via email, ftp, and cloud servers. It can be done reasonably securely – via encrypted email, secure ftp, p2p (e.g., BitTorrent), and even secure cloud services. But all of these secure solutions are relatively difficult to implement. What is needed is a simple tool. Onionshare is just such a tool.

Onionshare was developed by Micah Lee in 2014. It is an application that sets up a hidden service on the TOR network. TOR is a multi-layered encryption and routing tool that was originally developed at the U.S. Naval Research Laboratory. Today, it is the de facto reference implementation for secure, point-to-point connections across the Internet. And while it is not a strictly anonymous service, it offers a degree of anonymity that is well beyond the normal browsing experience. For a detailed description of Tor, take a look here. And for one of my first posts about TOR, look here.

Onionshare sets up a web server. It then establishes that server as a .onion service on the TOR network. The application then generates a page (and a URL) for that service. This URL points to a web page with the file(s) to be transferred. The person hosting the file(s) can then securely send the thoroughly randomized URL to the recipient. Once the recipient receives the URL, the recipient can download the file(s). After the secure file transfer is completed, the service is stopped – and the file(s) are no longer available on TOR.
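On a machine with Tor and Onionshare installed, the whole workflow is only a couple of commands. This is a rough sketch – the package name and the file being shared are assumptions, and the exact command-line options vary between Onionshare versions:

    # Install Onionshare from the distribution's repositories (Debian/Ubuntu example)
    sudo apt install onionshare

    # Share a file: Onionshare starts a hidden service and prints the secret .onion URL.
    # By default, the share typically shuts down after the first successful download.
    onionshare ./tax-return.pdf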

Drawbacks

This secure file transfer model has a few key weaknesses. First and foremost, the URL that is presented must be sent securely to the recipient. This can be done via secure email (e.g., ProtonMail to ProtonMail) or via secure IM (e.g., Signal). But if the URL is sent via insecure methods, the data could be potentially hijacked by a hostile actor. Second, there is no authentication that is performed when the ‘recipient’ connects to the .onion service. Whoever first opens that URL in a TOR browser can access (and later delete) the file(s). So the security of the URL link is absolutely paramount. But as there are no known mechanisms to index hidden .onion servers, this method is absolutely sufficient for most casual users who need to securely send sensitive data back-and-forth.

Bottom Line

If you want to securely send documents back-and-forth between yourself and other individuals, then Onionshare is a great tool. It works on Windows, MacOS, and a variety of Linux distros. And the only client requirement to use the temporary .onion server is a TOR-enabled browser. In short, this is about as ‘fire and forget’ as you could ever expect to find.

2019 Resolution #2: Blocking Online Trackers

The Myth of Online Privacy
Background

Welcome to the New Year. This year could be a banner year in the fight to ensure our online privacy. Before now, the tools of surveillance have overwhelmed the tools of privacy. And the perceived need for new Internet content has outweighed the real difficulty of protecting your online privacy. For years, privacy advocates (including myself) have chanted the mantra of exploiting public key encryption. We have told people to use Tor or a commercial VPN. And we have told people to start using two-factor authentication. But we have downplayed the importance of blocking online trackers. Yes, security and privacy advocates did this for themselves. But most did not routinely recommend this as a first step in protecting the privacy of our clients.

But the times are changing.

Last year (2018) was a pivotal time in the struggle between surveillance and privacy. The constant reporting of online hacks has risen to a deafening roar. And worse still, we saw the shepherds of our ‘trusted platforms’ go under the microscope. Whether it was Sundar Pichai of Google or Mark Zuckerberg of Facebook, we have seen tech leaders (and their technologies) revealed as base – and ultimately self-serving. Until last year, few of us realized that if we don’t pay for a service, then we are the product that the service owners are selling. But our eyes have now been pried open.

Encryption Is Necessary

Security professionals were right to trumpet the need for encryption. Whether you are sending an email to your grandmother or inquiring about the financial assets that you’ve placed into a banker’s hands, it is not safe to send anything in clear text. Think of it this way. Would you put your tax filing on a postcard so that the mail man – and every person and camera between you and the IRS – could see your financial details? Of course you wouldn’t. You’d seal it in an envelope. You might even hand deliver it to an IRS office. Or more recently, you might send your return electronically – with security protections in place to protect the key details of your finances.

But these kinds of protections are only partial steps. Yes, your information is secure from when it leaves your hands to when it enters the hands of the intended recipient. But what happens when the recipient gets your package of information?

Encryption Is Not Enough

Do the recipients just have your ‘package’ of data or do they have more? As all of us have learned, they most certainly have far more information. Yes, our ISP (i.e., the mail man) has no idea about the message. But what happens when the recipient at the other end of the pipe gets your envelope? They see the postmarks. They see the address. But they could also lift fingerprints from the envelope. And they can use this data. At the same time, by revealing your identity, you have provided the recipient with critical data that could be used to profile you, your friends and family, and even your purchasing habits.

So your safety hinges upon whether you trust the recipients to not disclose key personal information. But here’s the rub. You’ve made a contract with the recipient whereby they can use any and all of your personally identifiable information (PII) for any purpose that they choose. And as we have learned, many companies use this information in hideous ways.

Resist Becoming The Product

This will be hard for many people to hear: If you’re not paying for a service, then you shouldn’t be surprised when the service provider monetizes any and all information that you have willingly shared with them. GMail is a great service – paid for with you, your metadata, and every bit of content that you put into your messages. Facebook is phenomenal. But don’t be surprised when MarkeyZ sells you out.

Because of the lessons that I’ve learned in 2018, I’m starting a renewed push towards improving my privacy. Up until now, I’ve focused on security. I’ve used a commercial VPN and/or Tor to protect myself from ISP eavesdropping. I’ve built VPN servers for all of my clients. I’ve implemented two-factor authentication for as many of my logons as my service providers will support.

Crank It Up To Eleven

And now I have to step up my game.

  1. I must delete all of my social media accounts. That will be fairly simple as I’ve already gotten rid of Facebook/Instagram, Pinterest, and Twitter. Just a few more to go. I’m still debating about LinkedIn. I do pay for premium services. But I also know that Microsoft is selling my identity. For the moment, I will keep LinkedIn as it is my best vehicle for professional interactions.
  2. I may add a Facebook account for the business. Since many customers are on Facebook, I don’t want to abandon potential customers. But I will strictly separate my public business identity/presence from my personal identity/presence.
  3. I need to get off of Gmail. This one will be tougher than the first item. Most of my contacts know me from my GMail address (which I’ve used for over fifteen years). But I’ve already created two new email addresses (one for the business and one on ProtonMail). My current plan is to move completely off of GMail by the end of 1Q19.
  4. I am going to exclusively use secure browsing for almost everything. I’ve used ad-blockers for both the browser and for DNS. And I’ve used specific Firefox extensions for almost all other browsing activities that I have done. I will now try and exclusively use the Tor Browser on a virtual machine (i.e., Whonix) and implement NoScript wherever I use that browser. Let’s hope that these things will really reduce my vulnerability on the Internet. I suspect that I will find some sites that just won’t work with Tor (or with NoScript). When I find such sites, I’ll have to intentionally choose whether to use the site unprotected or set up a sandbox (and virtual identities) whenever I use these sites. Either way, I will run such sites from a VM – just to limit my exposure.
  5. I will block online trackers by default. Firefox helps. NoScript also helps. But I will start routinely using Privacy Badger and uMatrix as well. (See the sketch after this list for the browser preferences that back this up.)
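For the Firefox piece of item 5, some of the blocking can be baked into the profile itself rather than left entirely to extensions. The user.js sketch below uses standard Firefox preferences, but treat the exact set as a starting point rather than a prescription:

    // user.js – dropped into the Firefox profile directory and applied at startup

    // Turn on Firefox's built-in tracking protection for all windows
    user_pref("privacy.trackingprotection.enabled", true);

    // Block third-party cookies (1 = allow first-party cookies only)
    user_pref("network.cookie.cookieBehavior", 1);

    // Send the Do Not Track header with every request
    user_pref("privacy.donottrackheader.enabled", true);

    // Make the browser harder to fingerprint (this can break some sites)
    user_pref("privacy.resistFingerprinting", true);
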
Bottom Line

In the final analysis, I am sure that there are some compromises that I will need to make. Changing my posture from trust to distrust and blocking all online trackers will be the hardest – and most rewarding – step that I can make towards protecting my privacy.

The Ascension of the Ethical Hacker

Hacker: The New Security Professional

Over the past year, I have seen thousands of Internet ads about obtaining an ‘ethical hacker’ certification. These ads (and the associated certifications) have been around for years. But I think that the notoriety of “Mr. Robot” has added sexiness (and legitimacy) to the title “Certified Ethical Hacker”. But what is an ‘ethical hacker’?

According to Dictionary.com, an ethical hacker is, “…a person who hacks into a computer network in order to test or evaluate its security, rather than with malicious or criminal intent.” Wikipedia has a much more comprehensive definition. But every definition revolves around taking an illegitimate activity (i.e., computer hacking) and making it honorable.

The History of Hacking

This tendency to lionize hacking began when Matthew Broderick fought against the WOPR in “WarGames”.  And the trend continued in the early nineties with the Robert Redford classic, “Sneakers”. In the late nineties, we saw Keanu Reeves as Neo (in “The Matrix”) and Gene Hackman as Edward Lyle (in “Enemy of the State”). But the hacker hero worship has been around for as long as there have been computers to hate (e.g., “Colossus: The Forbin Project”).

But as computer hacking has become routine (e.g., see “The Greatest Computer Hacks” on Lifewire), everyday Americans are now aware of their status as “targets” of attacks.  Consequently, most corporations are accelerating their investment in security – and in vulnerability assessments conducted by “Certified Ethical Hackers”.

So You Wanna Be A White Hat? Start Small

Increased corporate attacks result in increased corporate spending. And increased spending means that there is an ‘opportunity’ for industrious technicians. For most individuals, the cost of getting ‘certified’ (for CISSP and/or CEH) is out of reach. At a corporate scale, ~$15K for classes and a test is not very much to pay. But for gig workers, it is quite an investment. So can you start learning on your own?

Yes, you can start learning on your own. In fact, there are lots of ways to start learning. You could buy books. Or you could start learning by doing. This past weekend, I decided to up my game. I’ve done security architecture, design, and development for a number of years. But my focus has always been on intruder detection and threat mitigation.  It was obvious that I needed to learn a whole lot more about vulnerability assessment. But where would I start?

My starting point was to spin up a number of new virtual systems where I could test attacks and defenses. In the past, I would just walk into the lab and fire up some virtual machines on some of the lab systems. But now that I am flying solo, I’ve decided to do this the same way that hackers might do it: by using whatever I had at hand.

The first step was to set up VirtualBox on one of my systems/servers. Since I’ve done that before, it was no problem setting things up again. My only problem was that I did not have VT-x enabled on my motherboard. Once I enabled it in the BIOS, things started to move rather quickly.

Then I had to start downloading (and building) appropriate OS images. My first test platform was Tails. Tails is a privacy centered system that can be booted from a USB stick. My second platform was a Kali Linux instance. Kali is a fantastic pen testing platform – principally because it includes a Metasploit infrastructure. I even decided to start building some attack targets. Right now, I have a VM for Raspbian (Linux on the Raspberry Pi), a VM for Debian Linux, one for Red Hat Linux, and a few for Windows targets. Now that the infrastructure is built, I can begin the learning process.
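If you would rather script a lab like this than click through the VirtualBox GUI, VBoxManage can do all of it from a shell. Here is a minimal sketch for a single Kali guest – the VM name, disk size, and ISO path are all placeholders:

    # Confirm that the CPU exposes VT-x/AMD-V before planning on 64-bit guests
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # Create and register the VM, then give it memory and CPUs
    VBoxManage createvm --name kali-lab --ostype Debian_64 --register
    VBoxManage modifyvm kali-lab --memory 4096 --cpus 2

    # Add a SATA controller, a 20 GB disk, and the Kali installer ISO
    VBoxManage createmedium disk --filename kali-lab.vdi --size 20480
    VBoxManage storagectl kali-lab --name SATA --add sata
    VBoxManage storageattach kali-lab --storagectl SATA --port 0 --device 0 --type hdd --medium kali-lab.vdi
    VBoxManage storageattach kali-lab --storagectl SATA --port 1 --device 0 --type dvddrive --medium kali-linux.iso

    # Boot it and start installing
    VBoxManage startvm kali-lab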

Bottom Line

If you want to be an ethical hacker (or understand the methods of any hacker), then you can start without going to a class. Yes, it will be more difficult to learn by yourself. But it will be far less expensive – and far more memorable. Remember, you can always take the class later.

Continuous Privacy Improvement

In its latest release, Firefox extends its privacy advantage over other browsers. Their efforts at continuous privacy improvement may keep you ahead of those who wish to exploit you.
Firefox 63 Extends Privacy Lead

In the era of Deming, the mantra was continuous process improvement. The imperative to remain current and always improve continues even to this day. And as of this morning, the Mozilla team has demonstrated its commitment to continuous privacy improvement; the release of Firefox 63 continues the commitment of the entire open source community to the principle that Internet access is universal and should be unencumbered.

Nothing New…But Now Universally Available

I’ve been using the new browsing engine (in the form of Firefox Quantum) for quite some time. This new engine is an incremental improvement upon previous rendering engines. In the past, those who enabled tracker protection often had to deal with web sites that would not render very successfully. It then became a trade-off between privacy and functionality.

But now that the main code branch has incorporated the new engine, there is more control over tracker protection. And this control will allow those who are concerned about privacy to still use some core sites on the web. This new capability is not fully matured. But in its current form, many new users can start to implement protection from trackers.

Beyond Rendering

But my efforts at continuous privacy improvement also include enhanced filtering on my Pi-hole DNS platforms. The Pi-hole has faithfully blocked ads for several years. But I’ve decided to up the ante a bit.

  1. I decided to add regular expressions to increase the coverage of ad blocking. I added the following regex filters:
         
         ^(.+[-_.])??ad[sxv]?[0-9]*[-_.]
         ^adim(age|g)s?[0-9]*[-_.]
         ^adse?rv(e(rs?)?|ices?)?[0-9]*[-.]
         ^adtrack(er|ing)?[0-9]*[-.]
         ^advert(s|is(ing|ements?))?[0-9]*[-_.]
         ^aff(iliat(es?|ion))?[-.]
         ^analytics?[-.]
         ^banners?[-.]
         ^beacons?[0-9]*[-.]
         ^clicks?[-.]
         ^count(ers?)?[0-9]*[-.]
         ^pixels?[-.]
         ^stat(s|istics)?[0-9]*[-.]
         ^telemetry[-.]
         ^track(ers?|ing)?[0-9]*[-.]
         ^traff(ic)?[-.]
  2. My wife really desires to access some sites that are more “relaxed” in their attitude. Consequently, I set her devices to use the Cloudflare DNS servers (i.e., 1.1.1.1 and 1.0.0.1). I then added firewall rules to block all Google DNS access. This should let me block the ads served to Google devices that hard-code Google’s DNS servers (e.g., Chromecast, Google Home, etc.). I then added these rules to my router.

         # Reject forwarded traffic destined for Google's public DNS servers
         iptables -I FORWARD --destination 8.8.8.8 -j REJECT
         iptables -I FORWARD --destination 8.8.4.4 -j REJECT

These updates now block ads on my Roku devices and on my Chromecast devices.
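A quick way to confirm that the rules are working is to query Google’s DNS directly from a machine behind the router: that query should now fail, while a query against the Pi-hole should still succeed. A small sketch – the Pi-hole address shown is a placeholder for your own:

    # This should time out or be rejected once the FORWARD rules are in place
    dig @8.8.8.8 example.com +time=3 +tries=1

    # This should still resolve normally through the local Pi-hole
    dig @192.168.1.2 example.com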

Bottom Line

In the fight to ensure your privacy, it is not enough to “fire and forget” with a fixed set of tools. Instead, you must always be prepared to improve your situation. After all, advertisers and identity thieves are always trying to improve their reach into your wallet. Show them who the real boss is. It should be (and can be) you!

VPNFilter Scope: Talos Tells A Tangled Tale

Hackers want to take over your home.

Several months ago, the team at Talos (a research group within Cisco) announced the existence of VPNFilter – now dubbed the “Swiss Army knife” of malware. At that time, VPNFilter was impressive in its design. And it had already infected hundreds of thousands of home routers. Since the announcement, Talos continued to study the malware. Last week, Talos released its “final” report on VPNFilter. In that report, Talos highlighted that the VPNFilter scope was/is far larger than first reported.

“Improved” VPNFilter Capabilities

In addition to the first stage of the malware, the threat actors included the following “plugins”:

  • ‘htpx’ – a module that redirects and inspects the contents of unencrypted Web traffic passing through compromised devices.
  • ‘ndbr’ – a multifunctional secure shell (SSH) utility that allows remote access to the device. It can act as an SSH client or server and transfer files using the SCP protocol. A “dropbear” command turns the device into an SSH server. The module can also run the nmap network port scanning utility.
  • ‘nm’ – a network mapping module used to perform reconnaissance from the compromised devices. It performs a port scan and then uses the Mikrotik Network Discovery Protocol to search for other Mikrotik devices that could be compromised.
  • ‘netfilter’ – a firewall management utility that can be used to block sets of network addresses.
  • ‘portforwarding’ – a module that allows network traffic from the device to be redirected to a network specified by the attacker.
  • ‘socks5proxy’ – a module that turns the compromised device into a SOCKS5 virtual private network proxy server, allowing the attacker to use it as a front for network activity. It uses no authentication and is hardcoded to listen on TCP port 5380. There were several bugs in the implementation of this module.
  • ‘tcpvpn’ – a module that allows the attacker to create a Reverse-TCP VPN on compromised devices, connecting them back to the attacker over a virtual private network for export of data and remote command and control.
Disaster Averted?

Fortunately, the impact of VPNFilter was blunted by the Federal Bureau of Investigation (FBI). The FBI recommended that every home user reboot their router. The FBI hoped that this would slow down infection and exploitation. It did. But it did not eliminate the threat.

In order to be reasonably safe, you must also ensure that you are on a version of router firmware that protects against VPNFilter. While many people heeded this advice, many did not. Consequently, there are thousands of routers that remain compromised. And threat actors are now using these springboards to compromise all sorts of devices within the home. This includes hubs, switches, servers, video players, lights, sensors, cameras, etc.

Long-Term Implications

Given the ubiquity of devices within the home, the need for ubiquitous (and standardized) software update mechanisms is escalating. You should absolutely protect your router as the first line of defense. But you also need to routinely update every type of device in your home.

Bottom Line
  1. Update your router! And update it whenever there are new security patches. Period.
  2. Only buy devices that have automatic updating capabilities. The only exception to this rule should be if/when you are an accomplished technician and you have established a plan for performing the updates manually.
  3. Schedule periodic audits of device firmware. Years ago, I did annual battery maintenance on smoke detectors. Today, I check every device at least once a month. 
  4. Retain software backups so that you can “roll back” updates if they fail. Again, this is a good reason to spend additional money on devices that support backup/restore capabilities. The very last thing you want is a black box that you cannot control.

As the VPNFilter scope and capabilities have expanded, the importance of remediation has also increased. Don’t wait. Don’t be the slowest antelope on the savanna.

Social Media Schisms Erupt

A funny thing happened on the way to the Internet: social media schisms are once again starting to emerge. When I first used the Internet, there was no such thing as “social  media”. If you were a defense contractor, a researcher at a university, or part of the telecommunications industry, then you might have been invited to participate in the early versions of the Internet. Since then, we have all seen early email systems give way to bulletin boards, Usenet newsgroups, and early commercial offerings (like CompuServe, Prodigy, and AOL). These systems  then gave way to web servers in the mid-nineties.  And by the late nineties, web-based interactions began to flourish – and predominate.

History Repeats Itself

In the early 2000s, people began to switch from AOL to services like MySpace. And just a few years later, services like Twitter began to emerge. At the same time, Facebook nudged its way from a collegiate dating site to a full-fledged friendship engine and social media platform. With each new turning of the wheel of innovation, the old has been vanquished by the “new and shiny” stuff. It has always taken a lot of time for everyone to hop onto the new and shiny from the old and rusty. But each iteration brought something special.

And so the current social media title holders are entrenched. And the problem with their interaction model has been revealed. In the case of Facebook and Twitter, their centralized model may very well be their downfall. By having one central system, there is only one drawbridge for vandals to breach. And while there are walls that ostensibly protect you, there is also a royal guard that watches everything that you do while within the walls. Indeed, the castle/fortress model is a tempting target for enemies (and “friends”) to exploit.

Facebook (and Twitter) Are Overdue

The real question that we must all face is not if Facebook and Twitter will be replaced, but when it will happen. As frustration has grown with these insecure and exposed platforms, many people are looking for an altogether new collaboration model. And since centralized systems are failing us, many are looking at decentralized systems.

A few such tools have begun to emerge. Over the past few years, tools like Slack are starting to replace the team/corporate systems of a decade ago (e.g., Atlassian Jira and Confluence). For some, Slack is now their primary collaboration engine. And for the developers and gamers among us, tools like Discord are gaining notoriety – and membership.

Social Media Schisms Are Personal

But what of Twitter and what of Facebook?  Like many, I’ve tried to live in these walled gardens. I’ve already switched to secure clients. I’ve used containers and proxies to access these tools. And I have kept ahead of the wave of insecurity – so far. But the cost (and risk) is starting to become too great. Last week, Facebook revealed that it had been breached – again. And with that last revelation, I decided to take a Facebook break.

My current break will be at least two weeks. But it will possibly be forever. That is because the cost and risk of these centralized systems is becoming higher than the convenience that these services provide.  I suspect that many of you may find yourselves in the same position.

Of course, a break does not necessarily mean withdrawal from all social media. In fairness, these platforms do provide value. But the social media schisms have to end. I can’t always tolerate the politics of some of my friends. But they remain my friends (and my family) despite the political differences that we may have. And I want a way of engaging in vigorous debate with some folks while maintaining collegiality and a pacific mindset when dealing with others.

So I’m moving on to a decentralized model. I’ve started a Slack community for my family. My adult kids are having difficulty engaging in even one more platform. But I’m hopeful that they will start to engage. And I’ve just set up a Mastodon account (@cyclingroo@mastodon.cloud) as a Twitter “alternative”. And I’m becoming even more active in Discord (for things like the Home Assistant community).

All of these tools are challengers to Facebook/Twitter. And their interaction model is decentralized. So they are innately more secure (and less of a target). The biggest trouble with these systems is establishing and maintaining an inter-linked directory.

A Case for Public Meta-directories

In a strange way, I am back to where I was twenty years ago. In the late nineties, my employer had many email systems and many directories. So we built a directory of directories. Our first efforts were email-based hub-and-spoke directories based upon X.500. And then we moved to Zoomit’s Via product (which was later acquired by Microsoft). [Note: After the purchase, Microsoft starved the product until no one wanted its outdated technologies.] These tools served one key purpose: they provided a means of linking all directories together.

Today, this is all done through import tools that any user can employ to build personalized contact lists. But as more people move to more and different platforms, the need for a distributed meta-directory has been revealed. We really do need a public white pages model for all users on any platform.

Bottom Line

The value of a directory of directories (i.e., a meta-directory) still exists. And when we move from centralized to decentralized social media systems, the imperative of such directory services becomes even more apparent. At this time, early adopters should already be using tools like Slack, Discord, and even Mastodon. But until interoperability technologies (like meta-directories) become more ubiquitous, either you will have to deal with the hassle of building your own directory or you will have to accept the insecurity inherent in a centralized system.