Is Transitive Trust A Worthwhile Gamble?

When I started to manage Windows systems, it was important to understand the definition of ‘transitive trust’. For those not familiar with the technical term, here is the ‘classic’ definition:

Transitive trust is a two-way relationship automatically created between parent and child domains in a Microsoft Active Directory forest. When a new domain is created, it shares resources with its parent domain by default, enabling an authenticated user to access resources in both the child and parent.

But this dry definition misses the real point. A transitive trust relationship (of any kind) is one where you trust some third party because someone that you do trust also trusts that same third party. That definition is almost as dry. So let’s look at an example. My customers (hopefully) trust me. And if they trust me enough, then they also trust my choices concerning the other groups that help me deliver my services to them. In short, they transitively trust my provider network because they trust me.

That all sounds fine. But what happens if your suppliers break your trust? Should your customers stop trusting you? Recently, this very situation occurred between Capital One, their customers, and some third-party technology providers (like Amazon and their AWS platform).

Trust: Hard to Earn – Easy to Lose

Unfortunately, the Amazon AWS technology platform was compromised. So Capital One should legitimately stop trusting Amazon (and its AWS platform). This should remain true until Amazon verifiably addresses the fundamental causes of this disastrous breach. But what should Capital One’s customers do? [Note: I must disclose that I am a Capital One customer. Therefore, I may be one of their disgruntled customers.]

Most people will blame Capital One. Some will blame them for a lack of technical competence. And that is reasonable as Capital One is reaping financial benefits from their customers and from their supplier network. Many other people will blame the hacker(s). It’s hard not to fume when you realize that base individuals are willing to take advantage of you solely for their own benefit. Unfortunately, only a few people will realize that the problem is far more vexing.

Fundamentally, Capital One trusted a third party to deliver services that are intrinsic to their core business. Specifically, Capital One offered a trust relationship to their customers. And their customers accepted that offer. Then Capital One chose to use an external platform simply to cut corners and/or deliver features that they were unable to deliver on their own. And apparently that third party was less capable than Capital One assumed.

Regaining Trust

When a friend or colleague breaks your trust, you are wounded. And in addition to this emotional response, you probably take stock of whether to continue that relationship. You undoubtedly perform an internal risk/reward calculation. And then you add the emotional element: would this person act in a more trustworthy fashion in the future? If our relationships with companies were less intimate, then most people would simply jettison their failed provider. But since we build relationships on a more personal footing, most people will want to give their friend (or their friendly neighborhood Bailey Building & Loan) the benefit of the doubt.

So what should Capital One do? First, they must accept responsibility for their error in judgment. Second, they must pay for the damages that they have caused. [Note: Behind the scenes, they must bring the hammer to their supplier.] Third, they must rigorously assess what really led to these problems. And fourth, they must take positive (and irreversible) steps to resolve the root cause of this matter.

Of course, the last piece is the hardest. Oftentimes, the root cause is difficult to sort out given all of the silt that was stirred up in the delta when the hurricane passed through. Some people will blame the Capital One culture. And there is merit to this charge. After all, the company did trust others to protect the assets of their customers. A bank’s fundamental job is to protect customer assets. Only when that is done should the bank use the entrusted funds to generate a shared profit for its owners (i.e., shareholders) and its customers.

Trust – But Verify

At the height of the Cold War, President Ronald Reagan exhorted the nation to trust – but then to verify the claims of a long-standing adversary. In the case of Capital One, we should do the very same thing. We should trust them to act in their own selfish interests, because serving our interests is the only way that they can achieve their own.

That means that we must be part of a robust and two-way dialog with Capital One and their leadership. Will Capital One be big enough to do this? That’s hard to say. But if they don’t, they will never be able to buy back our trust.

Finally, we have to be bold enough to seek verification. As President Reagan said, “You can’t just say ‘trust me’. Trust must be earned.”

Long Past Time For Good Security Headers


Over the past few months, I’ve focused my attention upon how you can be safer while browsing the Internet. One of the most important recommendations that I have made is for you to reduce (or eliminate) the loading and execution of unsafe content. So I’ve recommended ad blockers, a plethora of browser add-ons, and even the hardening of your premises-based services (e.g., routers, NAS systems, IoT devices, and DNS). Of course, this only addresses one side of the equation (i.e., the demand side). In order to improve the ‘total experience’ for your customers, you will also need to harden the services that you provide (i.e., the supply side). And one of the most often overlooked mechanisms for improvement is the proper use of HTTP security headers.

Background

According to the Open Web Application Security Project (OWASP), content injection is still the single largest class of vulnerabilities that content providers must address. When coupled with cross-site scripting (XSS), it is clear that hostile content poses an existential threat to many organizations. Yes, consumers must block all untrusted content as it arrives at their browser. But every site owner should first ensure that they inform every client about the content that they will be sending. Once these declarations are made, the client (i.e., browser) can then act to trust or distrust the content that it receives.

The notion that a web site should declare the key characteristics of its content stream is nothing new. What we now call a content security policy (CSP) has been around for a very long time. Indeed, the fundamental descriptions of content security policies were discussed as early as 2004. And the first version of the CSP standard was published back in 2012.

CSP Standards Exist – But Are Not Universally Used

According to the White Hat 2018 “Website Security Statistics Report”, a number of industries still operate chronically vulnerable websites. White Hat estimates that 52% of Accommodations / Food Services web sites are “Always Vulnerable”. Moreover, an additional 7% of these websites are “Frequently Vulnerable” (i.e., vulnerable for at least 263 days a year). Of course, that is the finding for one sector of the broader marketplace. But things are just as bad elsewhere. In the healthcare market, 50% of websites are considered “Always Vulnerable” with an additional 10% classified as “Frequently Vulnerable”.

Unfortunately, few websites actually use one of the most potent elements in their arsenal. Most website operators have established software upgrade procedures. And a large number of them have acceptable auditing and reporting procedures. But unless they are subject to regulatory scrutiny, few organizations have even considered implementing a real CSP.

Where To Start

So let’s assume that you run a small business. And you had your daughter/son, niece/nephew, friend of the family, or kid next door build your website. Chances are good that your website doesn’t have a CSP. To check this out for sure, you should go to https://securityheaders.com and see if you have appropriate security headers for your website.
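If you would rather check from a script, here is a minimal sketch (in Python, using only the standard library) of the kind of test that securityheaders.com performs. The target URL is a placeholder for your own site, and the header list is just a common baseline, not the grader’s exact rubric.

    import urllib.request

    # Headers that most hardening guides (and securityheaders.com) look for.
    EXPECTED = [
        "Content-Security-Policy",
        "Strict-Transport-Security",
        "X-Content-Type-Options",
        "X-Frame-Options",
        "Referrer-Policy",
    ]

    def check_headers(url):
        with urllib.request.urlopen(url, timeout=10) as response:
            present = {name.lower() for name in response.headers.keys()}
        for header in EXPECTED:
            status = "ok" if header.lower() in present else "MISSING"
            print(f"{status:8} {header}")

    check_headers("https://www.example.com")  # replace with your own site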

In my case, I found that my website security posture was unacceptably low. [Note: As a National Merit Scholar and Phi Beta Kappa member, anything below A+ is unacceptable.] Consequently, I looked into how I could get a better security posture. Apart from a few minor tweaks, my major problem was that I didn’t have a good CSP in place.

Don’t Just Turn On A Security Policy

Whether you code the security headers in your .htaccess file or you use software to generate the headers automatically, you will be tempted to just turn on a security policy. I urge you not to do this – unless your site is not yet live. Instead, make sure that you start with your proposed CSP in “report only” mode.

Of course, I chose the engineer’s path and just set up a default-src directive to allow only local content. Realistically, I just wanted to see content blocked. So I activated my CSP in “blocking” mode (i.e., not “report only” mode). And as expected, all sorts of content was blocked – including the fancy sliders that I had implemented on my front page.

I quickly reset the policy to “report only” so that I could address the plethora of problems. And this time, I worked each problem one at a time. Surprisingly, it really did take some time. I had to determine which features came from which external sources. I then had to add these sources to the CSP. This process was very much like ‘whitelisting’ external sources in an ad blocker. But once I found all of the external sources, I enabled “blocking” mode. This time, my website functioned properly.
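For the curious, here is a hedged sketch of that progression, expressed as the literal header lines involved (wrapped in Python strings for easy printing). The external hostnames are placeholders, not the actual sources that I whitelisted.

    # Step 1: observe. Nothing is blocked; violations are reported to the
    # URI named in the policy.
    REPORT_ONLY = ("Content-Security-Policy-Report-Only: "
                   "default-src 'self'; report-uri /csp-reports")

    # Step 2: enforce. After triaging the reports, each external source
    # gets its own entry in the policy.
    ENFORCED = ("Content-Security-Policy: "
                "default-src 'self'; "
                "script-src 'self' https://cdn.example.com; "
                "img-src 'self' https://images.example.com")

    print(REPORT_ONLY)
    print(ENFORCED)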

Bottom Line

In the final analysis, I learned a few important things.

  1. Security headers are an effective means of informing client browsers about the characteristics of your content – and your content sources. Consequently, they are an excellent means of displaying your content whitelist to any potential customer.
  2. Few website builders automatically generate security headers. There is no “Great and Powerful Oz” who will code all of this from behind the curtains – unless you specifically pay someone to do it. Few hosting platforms do this by default.
  3. Tools do exist to help with coding security headers – and content security policies. In the case of WordPress, I used HTTP Headers (by Dimitar Ivanov).
  4. While no single security approach can solve all security issues, using security headers should be added to the quiver of tools that you use when addressing website content security.

The Ascension of the Ethical Hacker

Hacker: The New Security Professional

Over the past year, I have seen thousands of Internet ads about obtaining an ‘ethical hacker’ certification. These ads (and the associated certifications) have been around for years. But I think that the notoriety of “Mr. Robot” has added sexiness (and legitimacy) to the title “Certified Ethical Hacker”. But what is an ‘ethical hacker’?

According to Dictionary.com, an ethical hacker is, “…a person who hacks into a computer network in order to test or evaluate its security, rather than with malicious or criminal intent.” Wikipedia has a much more comprehensive definition. But every definition revolves around taking an illegitimate activity (i.e., computer hacking) and making it honorable.

The History of Hacking

This tendency to lionize hacking began when Matthew Broderick fought against the WOPR in “WarGames”.  And the trend continued in the early nineties with the Robert Redford classic, “Sneakers”. In the late nineties, we saw Keanu Reeves as Neo (in “The Matrix”) and Gene Hackman as Edward Lyle (in “Enemy of the State”). But the hacker hero worship has been around for as long as there have been computers to hate (e.g., “Colossus: The Forbin Project”).

But as computer hacking has become routine (e.g., see “The Greatest Computer Hacks” on Lifewire), everyday Americans are now aware of their status as “targets” of attacks.  Consequently, most corporations are accelerating their investment in security – and in vulnerability assessments conducted by “Certified Ethical Hackers”.

So You Wanna Be A White Hat? Start Small

Increased corporate attacks result in increased corporate spending. And increased spending means that there is an ‘opportunity’ for industrious technicians. For most individuals, the cost of getting ‘certified’ (for CISSP and/or CEH) is out of reach. At a corporate scale, ~$15K for classes and a test is not very much to pay. But for gig workers, it is quite an investment. So can you start learning on your own?

Yes, you can start learning on your own. In fact, there are lots of ways to start learning. You could buy books. Or you could start learning by doing. This past weekend, I decided to up my game. I’ve done security architecture, design, and development for a number of years. But my focus has always been on intrusion detection and threat mitigation. It was obvious that I needed to learn a whole lot more about vulnerability assessment. But where would I start?

My starting point was to spin up a number of new virtual systems where I could test attacks and defenses. In the past, I would just walk into the lab and fire up some virtual machines on some of the lab systems. But now that I am flying solo, I’ve decided to do this the same way that hackers might do it: by using whatever I had at hand.

The first step was to set up VirtualBox on one of my systems/servers. Since I’ve done that before, it was no problem setting things up again. My only problem was that VT-x was not enabled on my motherboard. Once I enabled it in the BIOS, things started to move rather quickly.

Then I had to start downloading (and building) appropriate OS images. My first test platform was Tails. Tails is a privacy-centered system that can be booted from a USB stick. My second platform was a Kali Linux instance. Kali is a fantastic pen testing platform – principally because it includes the Metasploit framework. I even decided to start building some attack targets. Right now, I have a VM for Raspbian (Linux on the Raspberry Pi), a VM for Debian Linux, one for Red Hat Linux, and a few for Windows targets. Now that the infrastructure is built, I can begin the learning process.

Bottom Line

If you want to be an ethical hacker (or understand the methods of any hacker), then you can start without going to a class. Yes, it will be more difficult to learn by yourself. But it will be far less expensive – and far more memorable. Remember, you can always take the class later.

Social Media Schisms Erupt

A funny thing happened on the way to the Internet: social media schisms are once again starting to emerge. When I first used the Internet, there was no such thing as “social  media”. If you were a defense contractor, a researcher at a university, or part of the telecommunications industry, then you might have been invited to participate in the early versions of the Internet. Since then, we have all seen early email systems give way to bulletin boards, Usenet newsgroups, and early commercial offerings (like CompuServe, Prodigy, and AOL). These systems  then gave way to web servers in the mid-nineties.  And by the late nineties, web-based interactions began to flourish – and predominate.

History Repeats Itself

Twenty years ago, people began to switch from AOL to services like MySpace. And just after the turning of the millennium, services like Twitter began to emerge. At the same time, Facebook nudged its way from a collegiate dating site to a full-fledged friendship engine and social media platform. With each new turning of the wheel of innovation, the old has been vanquished by the “new and shiny” stuff.  It has always taken a lot of time for everyone to hop onto the new and shiny from the old and rusty. But each iteration brought something special.

And so the current social media title holders are entrenched. And the problem with their interaction model has been revealed. In the case of Facebook and Twitter, their centralized model may very well be their downfall. By having one central system, there is only one drawbridge for vandals to breach. And while there are walls that ostensibly protect you, there is also a royal guard that watches everything that you do while within the walls. Indeed, the castle/fortress model is a tempting target for enemies (and “friends”) to exploit.

Facebook (and Twitter) Are Overdue

The real question that we must all face is not if Facebook and Twitter will be replaced, but when it will happen. As frustration has grown with these insecure and exposed platforms, many people are looking for an altogether new collaboration model. And since centralized systems are failing us, many are looking at decentralized systems.

A few such tools have begun to emerge. Over the past few years, tools like Slack have started to replace the team/corporate systems of a decade ago (e.g., Atlassian Jira and Confluence). For some, Slack is now their primary collaboration engine. And for the developers and gamers among us, tools like Discord are gaining traction – and membership.

Social Media Schisms Are Personal

But what of Twitter and what of Facebook?  Like many, I’ve tried to live in these walled gardens. I’ve already switched to secure clients. I’ve used containers and proxies to access these tools. And I have kept ahead of the wave of insecurity – so far. But the cost (and risk) is starting to become too great. Last week, Facebook revealed that it had been breached – again. And with that last revelation, I decided to take a Facebook break.

My current break will be at least two weeks. But it will possibly be forever. That is because the cost and risk of these centralized systems is becoming higher than the convenience that these services provide.  I suspect that many of you may find yourselves in the same position.

Of course, a break does not necessarily mean withdrawal from all social media. In fairness, these platforms do provide value. But the social media schisms have to end. I can’t tolerate the politics of some of my friends. But they remain my friends (and my family) despite the policy differences that we may have. I want a way to engage in vigorous debate with some folks while maintaining collegiality (and a pacific mindset) with others.

So I’m moving on to a decentralized model. I’ve started a Slack community for my family. My adult kids are having difficulty engaging in even one more platform. But I’m hopeful that they will start to engage. And I’ve just set up a Mastodon account (@cyclingroo@mastodon.cloud) as a Twitter “alternative”. And I’m becoming even more active in Discord (for things like the Home Assistant community).

All of these tools are challengers to Facebook/Twitter. And their interaction model is decentralized. So they are innately more secure (and a less tempting target). The biggest trouble with these systems is establishing and maintaining an inter-linked directory.

A Case for Public Meta-directories

In a strange way, I am back to where I was twenty years ago. In the late nineties, my employer had many email systems and many directories. So we built a directory of directories. Our first efforts were email-based hub-and-spoke directories based upon X.500. And then we moved to Zoomit’s Via product (which was later acquired by Microsoft). [Note: After the purchase, Microsoft starved the product until no one wanted its outdated technologies.] These tools served one key purpose: they provided a means of linking all directories together.

Today, this is all done through import tools that any user can employ to build personalized contact lists. But as more people move to more and different platforms, the need for a distributed meta-directory has been revealed. We really do need a public “white pages” model for all users on any platform.
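To make the idea concrete, here is a toy sketch of a personal meta-directory: per-platform contact lists merged into a single view, keyed by email address. The platforms, records, and field names are hypothetical.

    from collections import defaultdict

    # Hypothetical per-platform contact lists.
    slack   = [{"email": "kim@example.com", "handle": "@kim"}]
    discord = [{"email": "kim@example.com", "handle": "kim#1234"}]

    def build_metadirectory(**platforms):
        """Merge every platform's contacts into one email-keyed view."""
        merged = defaultdict(dict)
        for platform, contacts in platforms.items():
            for person in contacts:
                merged[person["email"]][platform] = person["handle"]
        return dict(merged)

    print(build_metadirectory(slack=slack, discord=discord))
    # {'kim@example.com': {'slack': '@kim', 'discord': 'kim#1234'}}

A real meta-directory adds the hard parts – synchronization, conflict resolution, and authority – but the underlying join is exactly this simple.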

Bottom Line

The value of a directory of directories (i.e., a meta-directory) still exists. And when we move from centralized to decentralized social media systems, the imperative of such directory services becomes even more apparent. At this time, early adopters should already be using tools like Slack, Discord, and even Mastodon. But until interoperability technologies (like meta-directories) become more ubiquitous, either you will have to deal with the hassle of building your own directory or you will have to accept the insecurity inherent in a centralized system.

Home Automation “Quest for Fire”


This weekend, we took another step in our home automation quest. We have used smart switches (for lamps), smart thermostats, smart music, smart cars, and even smart timers. But until Saturday, we did not have any smart lights, per se. On Saturday, we bought some Philips Hue lights (and the associated hub). That means that we now have Ethernet (i.e., wired) devices, Wi-Fi devices, and Zigbee devices.

Is this a big deal? The answer to that is somewhat nuanced. We’ve had smart home puzzle pieces for a while. And we almost bought a Z-Wave infrastructure to put smart switches in place. But the age of our house makes this impractical. [We don’t have neutral wires on any switches in the house. And the price to refurbish these switches would be prohibitive.]  So our home automation quest stalled. But on Saturday, I could take it no more. When we went out on errands, we stopped and picked up five (5) Hue lights.

Just Add Lights

The installation and setup was simple. It took almost no time to get everything installed and paired. And within a little more than an hour, we had functioning lights in the second floor hallway and in our master bedroom.  Over the next year, we can start to populate the various ceiling fans in the house. I figure that we can do this whenever we need to replace the incandescent bulbs that are currently installed. Given our current pace of replacement, I’m figuring that it will take a year or so to retrofit the house.

After getting everything installed, I started to make an inventory of our various smart home investments. As of today, we have the following pieces:

Current “On-Premises” Infrastructure

Today, we have so many physical (and logical) pieces in our home automation puzzle:

  • Network: Cisco network switch, Cisco VPN appliance, Netgear router, NordVPN proxy, Raspberry Pi ad blocking, Raspberry Pi DNS
  • Print: Hewlett-Packard printer
  • Entertainment: Plex media server (on PC desktop), Roku media player, Samsung TV, Silicon Dust HDHomeRun player
  • Storage: Synology storage, WD MyCloud storage
  • IoT: Amazon Echo Dot speakers, Huawei sensor/camera (on surplus phone), Kia Soul, Personal location / presence (on personal phones), Philips Hue lights, Raspberry Pi home automation appliance, TP-Link Kasa switches, WeightGURUS scale

Current “Off-Premises” Services

While we have lots of smart pieces in the house, we also have more than a few external cloud services providers. In most of these cases, these services allow us to extend “access” beyond the confines of our network. Our current list of services includes:

  • Lobostrategies Business: Bluehost, GoDaddy
  • Olsen Personal: Amazon Alexa, Dropbox, Google Drive, Google GMail, Home Assistant cloud, IFTTT cloud, Plex cloud, Pushbullet cloud, TP-Link Kasa cloud, WD MyCloud

So after adding yet another home automation “category” to the premises, we learned an important lesson: external access requires a measure of trust – and diligence. If you aren’t willing to secure your devices, then you must accept the consequences of an electronic intrusion.

Application Security: Yet Another Acronym as a Service (YAAaaS)

Over the past dozen or so years, we have seen the emergence of Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). In fact, there are dozens of “as a Service” acronyms. All of these terms have sprung from the service-oriented architecture (SOA) movement of the nineties. These days, I think of the ‘aaS’ acronyms as ‘table stakes’ in the competitive world of IT. You can think of them as ‘value containers’ where data and process are combined into a ‘service’ that you can purchase in the marketplace. Today, almost anything can be purchased “as a service” – including application security.

The Push Against Commoditization

I sometimes think of IT as a cathedral where the priests are consulted, birds are sacrificed, censers are set on fire, and tribute is paid to the acolytes and the priests. [Note: The notion of IT priests is not new. Eric Raymond wrote about it in “The Cathedral and the Bazaar” (a.k.a., CatB).] For those that are part of the ecclesiastical hierarchy (i.e., the tech elites), the priesthood is quite profitable. And for them, there is little incentive to commoditize the process of IT.

In the nineties, the process of IT delivery required 8A consultants – and legions of IT staffers. The inevitable result of this kind of expensive IT was a push toward commodity IT. Indeed, the entire migration towards outsourcing was a response (by the business) to inflexible and expensive IT. Because of this, IT has been locked in a struggle against the inevitable. As more individuals have gotten into the IT business, prices have dropped – sometimes calamitously. Consequently, IT has kept the wheel spinning by creating newer and better “architectures” that can (ostensibly) propel IT technology and services ever forward.

The Inexorable Victory of Commoditization

We are now starting to see the ‘aaS’ movement move toward higher-order functions. In the past, IT commoditized the widgets (like systems, storage, and networks). Recently, IT has transformed its own business through service management, streamlined development, and continuous process improvement. Now, businesses (and IT) are commoditizing more complex things – like applications. This includes communications (email and collaboration), sales force (e.g., SAP), procurement (e.g., SAP, Oracle, etc), operations management, service management (i.e., service desks), and even strategic planning (through data mining, business intelligence, and “Big Data” initiatives).

And today, even services such as data security, identity management, and privacy are being transformed on the altar of commoditization. In the enterprise space, you can buy appliances for DNS, for identity management, for proxy services, for firewalls, and for content management (like ad blocking and virus/malware detection). You can even go into a Best Buy and purchase the Disney Circle to ensure that your kids are safe at home.

Security and Application Security

The infrastructure components of enterprise security have been commoditized for almost two decades. And if you knew where to look, you might have found personal items (like YubiKeys) as a means of performing two-factor authentication. But now, Google is going to sell security tokens. [Note: This is just like their entry into the video streaming market with the Chromecast.] This marks the point where identity management is becoming a commodity.

At the same time, security services themselves are being commoditized. In particular, you can now deploy security systems in your house without needing any security certification (e.g., Security+, CISSP, etc.). You can buy cameras, motion detectors, door/window sensors, and alarm systems either with or without contracts. The new guys on the block (e.g., SimpliSafe) and the “big boys” (like Comcast) are all getting into the business of monitoring your household – and ensuring your security.

As for me, I’ve been plugging all sorts of new application-layer security features into my infrastructure. I added DNS security to my infrastructure by using a third-party service (i.e., Cloudflare). I implemented identity management capabilities on my site. I’ve tested and deployed two-factor authentication. And I’ve added CAPTCHA capabilities for logins, comments, and contact requests. For lack of a better term, I’m calling all of this Application Security as a Service (i.e., ASaaS).

Bottom Line

I’m not doing anything new. Indeed, these kinds of things have been part of enterprise IT for years. But as a business owner/operator, I can now just plug these things into an owned (or leased) infrastructure. I don’t need a horde of minions to build all of this. Instead, I can build security into my business by simply plugging the right application security elements into my site.

Obviously, this is not yet idiot-proof. There is still a place for “integrators” who can stitch everything together. But with every passing day, I feel even more like my wife – who is a quilter. Architects design and use ‘patterns’ in order to construct the final product. The supply chain team buys commodity components (like the batting and the backing). Developers then cut out the pieces that make the quilt. Integrators then stitch these together – along with the commodity components. The assembled pieces then go to someone else who can machine “quilt” everything together. In the end, the “quilt” (i.e., the finished product) can be completed at a tremendously reduced price.

Ain’t commoditization grand?!

Browser Security Bypasses Abound


Browser Security Threats Discovered

According to researchers at the Catholic University of Leuven (KU Leuven) in Belgium, every modern browser is susceptible to at least one method of bypassing browser security and user privacy. In an article on the subject, Catalin Cimpanu (of BleepingComputer) reported that new (and as yet unexploited) means of bypassing cookie controls are apparently possible. The KU Leuven researchers reported their findings to the browser developers and posted their results at wholeftopenthecookiejar.eu.

Don’t expect all browser vendors to solve all browser security issues immediately. Indeed, expect many people to howl about how these vulnerabilities were reported. But regardless of the manner in which the news was delivered, every customer must take it upon themselves to implement multiple layers of protection. A comprehensive approach should (at a minimum) include:

  1. A safe browser,
  2. Safe add-ons (or extensions) that include cookie and browser element management (e.g., uBlock Origin, NoScript, and uMatrix),
  3. A means of reducing (and possibly eliminating) JavaScript, and
  4. Effective blocking of “well-known” malware domains.

Bottom Line

Shrek was right. Ogres are like onions – and so is security. Effective security must include multiple layers. Be an ogre; use layers of security.

Browser Security: Who Do You Trust?


So you think that you are safe. After all, you use large, complex, and unique passwords everywhere. You employ a strong password safe/vault to make sure that your passwords are “strong” – and that they are safe. At the same time, you rely upon multi-factor authentication to prove that you are who you say that you are. Similarly, you use a virtual private network (VPN) whenever you connect to an unknown network. Finally, you are confident in your browser security since you use the “safest” browser on the market.

Background

Historically, geeks and security wonks have preferred Mozilla Firefox. That’s not just because it is open source. After all, the core of Google Chrome (i.e., Chromium) is also open source. It’s because Firefox has a well-deserved reputation as a browser that is divorced from an advertising-based revenue stream. Basically, Mozilla is not trying to monetize the browser. Unlike Chrome (Google) and Edge (Microsoft), Firefox doesn’t have an advertising network that must be “preferred” in the browser. Nor does Mozilla need to support ‘big players’ because they are part of a business arrangement. Consequently, Firefox has earned its reputation for protecting your privacy.

But as Robert “Bobby” Hood has noted, the browser that you choose may not make much difference in your browser security posture. He wrote more bluntly; he said, “[Browser difference] …doesn’t matter as much as you may think… Is it important which browser we use? Sure, but with a caveat. Our behavior is far more important than nitpicking security features and vulnerabilities.” He is right. There are far more effective means of improving security and ensuring privacy. And the most important things are your personal practices. Bobby said it best: “Would you park your Maserati in a bad part of town and say, ‘It’s okay. The doors are locked!’ No. Because door locks and alarm systems don’t matter if you do dumb things with your car.”

What Have You Done For Me Lately?

It is always good to see when one of the browser creators takes positive steps to improve the security of their product. On August 16th, Catalin Cimpanu highlighted the recent (and extraordinary) steps taken by Mozilla. In his article on BleepingComputer (entitled “Mozilla Removes 23 Firefox Add-Ons That Snooped on Users”), he described the work of Mozilla’s addons.mozilla.org (AMO) team. In particular, they researched hundreds of add-ons and determined that twenty-three (23) of them needed to be eliminated from AMO. Mozilla removed the following browser plugins from AMO [Note: These include (but aren’t limited to)…]:

  • Web Security
  • Browser Security
  • Browser Privacy
  • Browser Safety
  • YouTube Download & Adblocker Smarttube
  • Popup-Blocker
  • Facebook Bookmark Manager
  • Facebook Video Downloader
  • YouTube MP3 Converter & Download
  • Simply Search
  • Smarttube – Extreme
  • Self Destroying Cookies
  • Popup Blocker Pro
  • YouTube – Adblock
  • Auto Destroy Cookies
  • Amazon Quick Search
  • YouTube Adblocker
  • Video Downloader
  • Google NoTrack
  • Quick AMZ

Mozilla also took the extraordinary step of ‘disabling’ these add-ons for users who had already installed them. While I might quibble with such an ‘authoritarian’ practice, I totally understand why Mozilla took all of these actions. Indeed, you could argue that these steps are no different than the steps that Apple has taken to secure its App Store.

Bottom Line

In the final analysis, browser security is determined by the operation of the entire ecosystem. And since very few of us put a sniffer on the network whenever we install a plugin, we are forced to “trust” that these add-ons perform as documented. So if your overall browser security is based upon trust, then who do you trust to keep your systems secure? Will you trust companies that have a keen interest in securing ‘good’ data from you and your systems? Or will you trust someone who has no such vested interests?

DNS Security: The Final Chapter, For Now


As a man of faith, I am often confronted with one sorry truth: my desires often exceed my patience. So it was with my extended DNS security project. I have written three out of four articles about DNS security. But I have taken a detour from my original plan.

The first article that I wrote outlined the merits of using the Trusted Recursive Resolver that showed up in Firefox 61. I concluded that the merits of encrypting DNS payloads were obvious and the investment was a necessary one – if you want to ensure privacy. The second article outlined the merits (and methods) of using DNS-over-HTTPS (DoH) to secure references to external referrers/resolvers. In the third article, I outlined how I altered my DNS/DHCP infrastructure to exploit dnsmasq.

That left me with the final installment. And the original plan was to outline how I had implemented Unbound as a final means of meeting all of my DNS security requirements. Basically, I had to outline why I would want something other than a simple DNS referral agent. That is a great question. But to answer that question, I need to provide a little background.

DNS Background

The basic DNS infrastructure is a hierarchical data structure that is traversed from top to bottom (or right to left when reading a domain name). When a customer wants to know the IP address of a particular device, the top-level domain (TLD) is queried first. So if you are looking for www.lobostrategies.com, you must first search the registry for all ‘.com’ domains. The authoritative server for ‘.com’ domains contains a reference to the authoritative DNS server for lobostrategies.com (i.e., GoDaddy).

The next step is to search the authoritative domain server for the address of the specific server. In my case, GoDaddy would be queried to determine the address for www.lobostrategies.com. GoDaddy would either answer the question or send it to a DNS server supporting the next lower level of the domain hierarchy. Since there are no subdomains (for lobostrategies.com), GoDaddy returns the IP address.
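Here is a simplified sketch of that root-down walk, using the third-party dnspython library (pip install dnspython). It ignores CNAMEs, TCP fallback, and referrals that arrive without glue records, so treat it as an illustration rather than a production resolver.

    import dns.message
    import dns.query
    import dns.rdatatype

    def resolve_iteratively(name, server="198.41.0.4"):  # a.root-servers.net
        """Walk the DNS tree from the root until an A record comes back."""
        while True:
            query = dns.message.make_query(name, dns.rdatatype.A)
            response = dns.query.udp(query, server, timeout=5)

            # An answer section means we reached the authoritative server.
            for rrset in response.answer:
                if rrset.rdtype == dns.rdatatype.A:
                    return rrset[0].address

            # Otherwise, follow the referral via a glue A record.
            glue = [rr for rrset in response.additional
                    if rrset.rdtype == dns.rdatatype.A
                    for rr in rrset]
            if not glue:
                raise RuntimeError("referral without glue records")
            server = glue[0].address

    print(resolve_iteratively("www.lobostrategies.com"))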

The ISP Advantage

The process of searching from the root to the proper branch (that answers the query) is called recursive searching. And it is the heart of how DNS works. But this burden is not carried by every user. Can you imagine if every user queried the top-level domain servers? It would be an incredible volume of queries. Instead, the results of most queries are stored (i.e., cached) at lower levels of the tree. For example, companies like your cable ISP (or Google, or Cloudflare, or OpenDNS) will be your ‘proxy’ for all requests between you and the host name that you want to resolve into an IP address.

Your ISP has almost every result of top-level domain queries already stored in its cache. So your answer would be delivered with at least one fewer step than it would have required for you to ask the question yourself. And since most public DNS resolvers have massive results already cached, you would never have to go to GoDaddy to get the IP address for my website. So rather than issuing a query to the root and a query to GoDaddy, your ISP can just provide the address directly to you – cutting your name search activity in half. Therefore, most users consult a DNS service that does the searching for them.
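That advantage is easy to model: a shared resolver is basically a big cache sitting in front of the tree walk. Here is a toy sketch (building on the resolve_iteratively function from the previous section); real resolvers honor each record’s TTL rather than the fixed value used here.

    import time

    _cache = {}  # name -> (address, expiry timestamp)

    def cached_resolve(name, ttl=3600):
        """Answer from cache when possible; walk the tree only on a miss."""
        hit = _cache.get(name)
        if hit and hit[1] > time.time():
            return hit[0]                      # cache hit: no tree walk at all
        address = resolve_iteratively(name)    # cache miss: full root-down walk
        _cache[name] = (address, time.time() + ttl)
        return address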

Hidden Costs

But think about what it costs to do the search yourself. The first time you query a domain, it takes time (and cache memory). Still, after those one-time ‘charges’ are paid, isn’t running your own recursive DNS a low-cost investment? Yes it is, and no it isn’t. If you run your own recursive DNS server, then you pay the entry costs (in hardware and in slower initial resolve times). But if you let someone else do the searching, the real cost is a hidden one: your privacy.

If you ‘trust’ someone else to do this for you, then they have access to all of your DNS queries. They know who you are and what you are trying to see. They won’t know what you saw in your query to a specific site. But they will know that you wanted to know where a particular site could be found.

Bottom line: By using a DNS resolver/referrer, you are necessarily letting that service provider know about your probable website destinations.

By using a recursive DNS, you are only letting the domain owner for a site know that you are looking for their address. Google would only get query data when you were intending to connect to Google services. So Google would only see a subset of your DNS queries – thereby improving DNS security.

On the flip side, you really do want a service that will encrypt the DNS payload. Recursive DNS tools (like the Unbound tool in DD-WRT and Pi-hole) do not yet support robust encryption for their recursive queries. Indeed, only two major DNS providers currently support DoH (i.e., Google and Cloudflare). By choosing to run a recursive DNS that you manage yourself, you are limiting the ability to mask DNS search requests as they move across the Internet. In practice, this means that you will have a higher risk of being exploited by a man-in-the-middle (MITM) DNS attack. And this includes things like DNS spoofing.
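To see what an encrypted lookup can look like, here is a minimal sketch of a DoH query against Cloudflare’s public JSON endpoint, using only the Python standard library. The hostname being resolved is just an example.

    import json
    import urllib.request

    def doh_query(name, record_type="A"):
        """Resolve a name over HTTPS via Cloudflare's JSON API."""
        url = (f"https://cloudflare-dns.com/dns-query"
               f"?name={name}&type={record_type}")
        request = urllib.request.Request(
            url, headers={"Accept": "application/dns-json"})
        with urllib.request.urlopen(request, timeout=5) as response:
            reply = json.load(response)
        # Each "Answer" entry carries the record name, TTL, and data.
        return [record["data"] for record in reply.get("Answer", [])]

    print(doh_query("www.lobostrategies.com"))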

The Choice

So I was faced with a simple choice: 1) I could implement a solution with encryption to a trusted recursive DNS provider, or 2) I could pay the upfront price of running my own recursive DNS. When I started to write this series of articles, I was feeling very distrustful of all DNS providers. So I was leaning towards running my own recursive DNS and limiting the search data that my selected provider could exploit. But the more that I thought about it, the more that I questioned that decision. Yes, I don’t trust companies to place me above their bottom line. And I don’t want the ‘gubmint’ to have a choke point that they could exploit. After all, didn’t the 2016 presidential campaign demonstrate that both parties want to weaponize information technology?

But the fear of all companies and all politicians is a paranoid conceit. And I don’t want to be the paranoid old man who is always watching over his shoulder. More importantly, the real challenge/threat is the proven risk that script-kiddies, hackers, and criminals might target my data while it is in transit. So as I weighed a paranoid fear against a real fear, I started moving towards desiring encrypted DNS queries more than limiting third-party knowledge of my queries.

The Choice Deferred

Just as I was about to implement changes based upon a re-assessment, I inadvertently shot myself in the foot. I was listening to a podcast about information security (i.e., Security Now by Steve Gibson) and I heard about a resurgence of router-based password exploits. I had long ago switched to a password manager. So I wanted more entropy in my randomized password. I checked online to see if there were any password standards for DD-WRT. I found nothing. So I figured that if the software didn’t like a password, then it would stop me before implementing the change.

I plunged ahead and created a 64-character randomized password. The software took the change – even validating the password against a second entry of the password. But when I went to log back in to the router, my shiny new password was rejected.

Wash, Rinse, Repeat

I was getting frustrated. I looked to see if there was any way to revert back to an older password. But there was no such capability. And the only way to log back into my router would be to factory-reset the device – which I did. But it took a very long time to recover (~2.5 hours). So after a few hours, I was back to where I started.

Before I tried again, I backed up the non-volatile memory (nvram). Then I decided to try a shorter password. After failing with the 64-character password, I tried a 32-character password. Unfortunately, it resulted in an inaccessible router. So I restored my router and then I was back to the starting line, again.

After researching the issue for forty-five minutes, I found someone that had run into the same problem. They had solved it by using a twenty-two (22) character password. So I earnestly tried to change the password to an eighteen (18) character password. I was hopeful; my hopes were crushed. But I was getting good at doing the factory reset. So after three attempts and almost five (5) hours of effort, I went back to the old password format that I had used before. Lo and behold, it worked.
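For what it’s worth, generating a high-entropy password without special characters is trivial. Here is a quick sketch using Python’s secrets module; sticking to letters and digits sidesteps embedded web UIs (like some router firmware) that mishandle special characters, and the default length matches the twenty-two characters that reportedly worked.

    import secrets
    import string

    # Letters and digits only: plenty of entropy, and no special
    # characters for a router's web UI to choke on.
    ALPHABET = string.ascii_letters + string.digits

    def make_password(length=22):
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(make_password())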

Overall DNS Security Improvements

After spending the bulk of an evening on this effort, I was glad to be back on track. But I had a fresh appreciation for doing as little as possible to my router and my DNS infrastructure. I already had a working DNS that used DoH to communicate with Cloudflare. And I had done enough research to be less skeptical of the Cloudflare DNS (when compared to ISP DNS and Google DNS).

I now have a DNS service separated from my router. And the DNS and DHCP systems are running on a unified system – thus making reverse-lookups and local queries far more effective. Finally, I have a DNS request facility that should be more secure against man-in-the-middle attacks. So without much more fanfare, I will call this DNS security battle a victory – for now. And I will keep my eyes wide open – both against corporate/government exploitation and against self-inflicted computing wounds!

TRR = Totally Risky Referrer


When Homer Simpson says “d’oh”, you know that something stupid is about to happen. Unfortunately, I believe that the same thing is true about the upcoming Firefox feature called DNS over HTTPS (i.e., DoH). Developers at Firefox noted a real problem: DNS queries aren’t secure. This has been axiomatic for years. That’s why DNS developers created DNSSEC. But DNSSEC is taking forever to roll out. Consequently, the Firefox developers baked Trusted Recursive Resolver (TRR) into Firefox 61. [Note: TRR has been available since Firefox 60. But TRR will move from an experiment to a reality as Firefox 61 rolls out across the Internet.]

Background

One of the key design points of TRR is the encapsulation of data in a secure transport mechanism. Theoretically, this will limit man-in-the-middle attacks that could compromise your browsing history (or redirect your browser altogether). Of course, theory is not always reality. Yes, SSL/TLS is more secure than plain text. But because it is so widely used, it is burdened by the need to retain backward compatibility. Nevertheless, it remains a meaningful improvement. And security-conscious consumers can implement TRR even if their local DNS provider doesn’t currently offer DNSSEC.

Risk

So why is TRR so risky? That’s simple: Mozilla is implementing TRR with a single recommended resolver: Cloudflare. I don’t think that anyone has an axe to grind with Cloudflare. From all that I have read, Cloudflare has never used customer data exclusively for its own benefit. That’s not true for Google, or OpenDNS, or a lot of other DNS providers. Of course, Cloudflare is a relative newcomer. So their track record is limited. But the real issue is that Mozilla has designed a system with a single point of failure – and a single choke point for logging and control.

Mitigation

Fortunately, Mozilla has enabled changing the TRR mode and TRR URI. Unfortunately, it is currently managed only through the about:config interface. That’s fine for a technician. But it is a dreadful method for end users. I am hopeful that Mozilla will provide a better interface for users. And I certainly hope that it is implemented on an “opt-in” basis. If they don’t, then folks who use their own DNS (e.g., every Pi-hole user) or folks who specify a different public provider than Cloudflare (e.g., Google, OpenDNS, DNS.Watch, etc) will be forced to “touch” every workstation.
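For technicians who are comfortable in about:config, these are the two preferences involved (as I understand them at the time of writing). A mode of 2 tries DoH first and falls back to normal DNS; a mode of 3 uses DoH only.

    network.trr.mode = 2
    network.trr.uri  = https://mozilla.cloudflare-dns.com/dns-query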

Bottom Line

Is Firefox acting badly? Probably not. After all, they are trying to close a huge DNS hole that infrastructure providers have yet to plug (i.e., DNSSEC). Nonetheless, their approach is ham-handed. Mozilla needs to be transparent with the whys and whens – and they need to trust their users to “do the right thing.”