Home Automation “Quest for Fire”

Home Automation

This weekend, we took another step in our home automation quest. We have used smart switches (for lamps), smart thermostats, smart music, smart cars, and even smart timers. But until Saturday, we did not have any smart lights, per se. On Saturday, we bought some Philips Hue lights (and the associated hub). That means that we now have Ethernet (i.e., wired) devices, Wi-Fi devices, and now Zigbee devices.

Is this a big deal? The answer to that is somewhat nuanced. We’ve had smart home puzzle pieces for a while. And we almost bought a Z-Wave infrastructure to put smart switches in place. But the age of our house makes this impractical. [We don’t have neutral wires on any switches in the house. And the price to refurbish these switches would be prohibitive.]  So our home automation quest stalled. But on Saturday, I could take it no more. When we went out on errands, we stopped and picked up five (5) Hue lights.

Just Add Lights

The installation and setup were simple. It took almost no time to get everything installed and paired. And within a little more than an hour, we had functioning lights in the second floor hallway and in our master bedroom. Over the next year, we can start to populate the various ceiling fans in the house. I figure that we can do this whenever we need to replace the incandescent bulbs that are currently installed. Given our current pace of replacement, it will take a year or so to retrofit the house.
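For the curious, part of what makes Hue so scriptable is the hub's simple local REST API. Here is a minimal Python sketch of turning a light on (the bridge address and API username are placeholders you would substitute; the third-party requests package is assumed):

```python
import requests

BRIDGE = "192.168.1.50"      # hypothetical bridge address; yours will differ
USERNAME = "your-api-user"   # created by POSTing to /api after pressing the link button

def set_light(light_id, on, brightness=254):
    # The hub's v1 REST API: lights are toggled with a PUT to their state resource.
    url = f"http://{BRIDGE}/api/{USERNAME}/lights/{light_id}/state"
    return requests.put(url, json={"on": on, "bri": brightness}, timeout=5).json()

print(set_light(1, True))    # e.g., turn the hallway light on at full brightness
```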

After getting everything installed, I started to make an inventory of our various smart home investments. As of today, we have the following pieces:

Current “On-Premises” Infrastructure

Today, we have so many physical (and logical) pieces in our home automation puzzle:

  • Network: Cisco network switch, Cisco VPN appliance, Netgear router, NordVPN proxy, Raspberry Pi ad blocking, Raspberry Pi DNS
  • Print: Hewlett-Packard printer
  • Entertainment: Plex media server (on PC desktop), Roku media player, Samsung TV, SiliconDust HDHomeRun player
  • Storage: Synology storage, WD MyCloud storage
  • IoT: Amazon Echo Dot speakers, Huawei sensor/camera (on surplus phone), Kia Soul, Personal location / presence (on personal phones), Philips Hue lights, Raspberry Pi home automation appliance, TP-Link Kasa switches, WeightGURUS scale

Current “Off-Premises” Services

While we have lots of smart pieces in the house, we also have more than a few external cloud services providers. In most of these cases, these services allow us to extend “access” beyond the confines of our network. Our current list of services includes:

  • Lobostrategies Business: Bluehost, GoDaddy
  • Olsen Personal: Amazon Alexa, Dropbox, Google Drive, Google Gmail, Home Assistant cloud, IFTTT cloud, Plex cloud, Pushbullet cloud, TP-Link Kasa cloud, WD MyCloud

So after adding yet another home automation “category” to the premises, we learned an important lesson: external access requires a measure of trust – and diligence. If you aren’t willing to secure your devices, then you must accept the consequences of an electronic intrusion.

Application Security: Yet Another Acronym as a Service (YAAaaS)

Over the past dozen or so years, we have seen the emergence of Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). In fact, there are dozens of “as a Service” acronyms. All of these terms have sprung from the service-oriented architecture (SOA) movement of the nineties. These days, I think of the ‘aaS’ acronyms as ‘table stakes’ in the competitive world of IT. You can think of them as ‘value containers’ where data and process are combined into a ‘service’ that you can purchase in the marketplace. Today, almost anything can be purchased “as a service” – including application security.

The Push Against Commoditization

I sometimes think of IT as a cathedral where the priests are consulted, birds are sacrificed, censers are set on fire, and tribute is paid to the acolytes and the priests. [Note: The notion of IT priests is not new. Eric Raymond wrote about it in “The Cathedral and the Bazaar” (a.k.a., CatB).] For those who are part of the ecclesiastical hierarchy (i.e., the tech elites), the priesthood is quite profitable. And for them, there is little incentive to commoditize the process of IT.

In the nineties, the process of IT delivery required 8A consultants – and legions of IT staffers. The inevitable result of this kind of expensive IT is commodity IT. Indeed, the entire migration towards outsourcing was a response (by the business) to inflexible and expensive IT. Because of this, IT has been locked in a struggle against the inevitable. As more individuals have gotten into the IT business, prices have dropped – sometimes calamitously. Consequently, IT has kept the wheel spinning by creating newer and better “architectures” that can (ostensibly) propel IT services ever forward.

The Inexorable Victory of Commoditization

We are now starting to see the ‘aaS’ movement move toward higher-order functions. In the past, IT commoditized the widgets (like systems, storage, and networks). Recently, IT has transformed its own business through service management, streamlined development, and continuous process improvement. Now, businesses (and IT) are commoditizing more complex things – like applications. This includes communications (email and collaboration), sales-force automation (e.g., SAP), procurement (e.g., SAP, Oracle, etc.), operations management, service management (i.e., service desks), and even strategic planning (through data mining, business intelligence, and “Big Data” initiatives).

And today, even services such as data security, identity management, and privacy are being transformed on the altar of commoditization. In the enterprise space, you can buy appliances for DNS, for identity management, for proxy services, for firewalls, and for content management (like ad blocking and virus/malware detection). You can even go into a Best Buy and purchase the Circle with Disney to ensure that your kids are safe at home.

Security and Application Security

The infrastructure components of enterprise security have been commoditized for almost two decades. And if you knew where to look, you might have found personal items (like YubiKeys) as a means of performing two-factor authentication. But now, Google is going to sell security tokens. [Note: This is just like their entry into the video streaming market with the Chromecast.] This marks the point where identity management is becoming a commodity.

At the same time, security services themselves are being commoditized. In particular, you can now deploy security systems in your house without needing any security certification (i.e., Security+, CISSP, etc.). You can buy cameras, motion detectors, door/window sensors, and alarm systems either with or without contracts. The new guys on the block (e.g., SimpliSafe) and the “big boys” (like Comcast) are all getting into the business of monitoring your household – and ensuring your security.

As for me, I’ve been plugging all sorts of new application-layer security features into my infrastructure. I added DNS security by using a third-party service (i.e., Cloudflare). I implemented identity management capabilities on my site. I’ve tested and deployed two-factor authentication. And I’ve added CAPTCHA capabilities for logins, comments, and contact requests. For lack of a better term, I’m calling all of this Application Security as a Service (i.e., ASaaS).
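As an aside on how commoditized this layer has become: the time-based one-time passwords behind most two-factor logins are a handful of standard-library calls. A minimal Python sketch of RFC 6238 TOTP (the Base32 secret is a demo value, not a real credential):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30):
    # RFC 6238: HMAC-SHA1 over the number of 30-second periods since the epoch.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, "sha1").digest()
    offset = mac[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; compare against an authenticator app
```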

Bottom Line

I’m not doing anything new. Indeed, these kinds of things have been part of enterprise IT for years. But as a business owner/operator, I can now just plug these things into an owned (or leased) infrastructure. I don’t need a horde of minions to build all of this. Instead, I can build security into my business by simply plugging the right application security elements into my site.

Obviously this is not yet idiot-proof. There is still a place for “integrators” who can stitch everything together. But with every passing day, I feel even more like my wife – who is a quilter. Architects design and use ‘patterns’ in order to construct the final product. The supply chain team buys commodity components (like the batting and the backing). Developers then cut out the pieces that make up the quilt. Integrators stitch these pieces together – along with the commodity components – and hand the assembled top to someone who can machine-“quilt” everything together. In the end, the “quilt” (i.e., the finished product) can be completed at a tremendously reduced price.

Ain’t commoditization grand?!

Security Theater at Black Hat 2018

Wireless Security Theater

Security is a serious business. And revealing unknown flaws can make or break people – and companies. This is especially true in the healthcare industry. As more health issues are being solved through the use of implantable technologies, security issues will become even more important. But when do “announcements” of implant vulnerabilities go from responsible disclosure to security theater?

When my wife sent me a link to a CNBC article entitled “Security researchers say they can hack Medtronic pacemakers”, I took notice. As posted previously, I have been a cyborg since July 2002. And in 2010, I received a replacement implant. At the time, I wondered whether (or if) these devices might be hacked. After all, these devices could be programmed over-the-air (OTA). Fortunately, their wireless range was (and still is) extremely limited. Indeed, it is fair to say that these devices have only “near-field communications” capability. So unless someone could get close to a patient, the possibility of a wireless attack is quite limited.

But as technology has advanced, so too have the threats of exploitation. Given recent technology advances, there was a fair chance that my device could be hacked in the same way that NFC chips in a mobile phone can be hacked. In fact, when I cross-referenced the CNBC article with other articles, I saw a picture of the very same programmer that my cardiologist uses for me – the same picture (from Medtronic) that I had posted on my personal blog over eight years ago. So as I opened the link from my wife, my heart was probably beating just a little more quickly. But I was relieved to see that CNBC was guilty of succumbing to the security theater that is Black Hat Vegas.

In this case, the Black Hat demonstrators had hacked a “programmer” (i.e., a really fancy laptop that loads firmware to the implantable device). The demonstrators rightfully noted that if a ‘bad actor’ wanted to injure a specific person, they could hack the “programmer” that is in the doctor’s office or at the hospital. And when the electro-physiology tech (EPT) did a “device check”, the implanted device (and the patient) could be harmed.

This is not a new risk. The programmer (i.e., laptop) could have been hacked from the very start. After all, the programmer is just a laptop with medical programs running on it. It is nothing fancy.

The real risk is that more and more device-assisted health treatments will emerge. And along with their benefits, these devices will come with some risks. That is true for all new technologies – whether medical or not. There is a risk of bad design, or software bugs, or poor installation, or inattention to periodic updates. And there is a risk that this technology might be exploited. Of course, the fact that a pacemaker might be subject to failure during an EMP does not mean that the device should never be used.

It’s just a risk.

Fortunately, this is no different than the countless number of risks that we take every day. We trust car designers, driving instructors, other drivers, and even the weather forecasters whenever we drive our cars. And the fact that our cars are run by computers – and could conceivably be hacked – doesn’t prevent everyone from driving.

Let’s leave the security theater in Vegas. And let’s leave the paranoia to professionals – like Alex Jones.


Browser Security Bypasses Abound

Browser Security At Risk

Browser Security Threats Discovered

According to researchers at the Katholieke Universiteit Leuven (KU Leuven) in Belgium, every modern browser is susceptible to at least one method of bypassing browser security and user privacy. In an article on the subject, Catalin Cimpanu (of BleepingComputer) reported that new (and as yet unexploited) means of bypassing cookie controls are apparently possible. KU Leuven researchers reported their findings to the browser developers and posted their results at wholeftopenthecookiejar.eu.

Don’t expect all browser vendors to solve all browser security issues immediately. Indeed, expect many people to howl about how these vulnerabilities were reported. But regardless of the manner in which the news was delivered, every customer must take it upon themselves to implement multiple layers of protection. A comprehensive approach should (at a minimum) include:

  1. A safe browser,
  2. Safe add-ons (or extensions) that include cookie and browser element management (e.g., uBlock Origin, NoScript, and uMatrix),
  3. A means of reducing (and possibly eliminating) JavaScript, and
  4. Effective blocking of “well-known” malware domains (a matching sketch follows this list).
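To make that last item concrete, here is a minimal Python sketch of the hosts-file-style matching that blockers like Pi-hole perform (the listed domains are placeholders):

```python
BLOCKLIST = {"tracker.example", "malware.example"}  # placeholder domains

def is_blocked(hostname):
    # Match the domain itself and any subdomain of a listed domain,
    # the same test a hosts-file or Pi-hole style blocker applies.
    labels = hostname.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))

assert is_blocked("ads.tracker.example")
assert not is_blocked("example.org")
```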

Bottom Line

Shrek was right. Ogres are like onions – and so is security. Effective security must include multiple layers. Be an ogre; use layers of security.

Browser Security: Who Do You Trust?

Browser Security Defended by Mozilla

So you think that you are safe. After all, you use large, complex, and unique passwords everywhere. You employ a strong password safe/vault to make sure that your passwords are “strong” – and that they are safe. At the same time, you rely upon multi-factor authentication to prove that you are who you say that you are. Similarly, you use a virtual private network (VPN) whenever you connect to an unknown network. Finally, you are confident in your browser security since you use the “safest” browser on the market.

Background

Historically, geeks and security wonks have preferred Mozilla Firefox. That’s not just because it is open source. After all, Google Chrome is built on the open-source Chromium project. It’s because Mozilla has a well-deserved reputation for building a browser that is divorced from an advertising-based revenue stream. Basically, Mozilla is not trying to monetize the browser. Unlike Google (Chrome) and Microsoft (Edge), Mozilla doesn’t have an advertising network that must be “preferred” in the browser. Nor does Mozilla need to support ‘big players’ because they are part of a business arrangement. Consequently, Firefox has earned its reputation for protecting your privacy.

But as Robert “Bobby” Hood has noted, the browser that you choose may not make much difference in your browser security posture. He put it more bluntly: “[Browser difference] …doesn’t matter as much as you may think… Is it important which browser we use? Sure, but with a caveat. Our behavior is far more important than nitpicking security features and vulnerabilities.” He is right. There are far more effective means of improving security and ensuring privacy. And the most important things are your personal practices. Bobby said it best: “Would you park your Maserati in a bad part of town and say, ‘It’s okay. The doors are locked!’ No. Because door locks and alarm systems don’t matter if you do dumb things with your car.”

What Have You Done For Me Lately?

It is always good to see one of the browser creators take positive steps to improve the security of their product. On August 16th, Catalin Cimpanu highlighted the recent (and extraordinary) steps taken by Mozilla. In his article on BleepingComputer (entitled “Mozilla Removes 23 Firefox Add-Ons That Snooped on Users”), he described the work of Mozilla’s addons.mozilla.org (AMO) team. In particular, they researched hundreds of add-ons and determined that twenty-three (23) of them needed to be eliminated from AMO. Mozilla removed the following browser plugins from AMO [Note: these include (but aren’t limited to) the following]:

  • Web Security
  • Browser Security
  • Browser Privacy
  • Browser Safety
  • YouTube Download & Adblocker Smarttube
  • Popup-Blocker
  • Facebook Bookmark Manager
  • Facebook Video Downloader
  • YouTube MP3 Converter & Download
  • Simply Search
  • Smarttube – Extreme
  • Self Destroying Cookies
  • Popup Blocker Pro
  • YouTube – Adblock
  • Auto Destroy Cookies
  • Amazon Quick Search
  • YouTube Adblocker
  • Video Downloader
  • Google NoTrack
  • Quick AMZ

Mozilla also took the extraordinary step of ‘disabling’ these add-ons for users who had already installed them. While I might quibble with such an ‘authoritarian’ practice, I totally understand why Mozilla took all of these actions. Indeed, you could argue that these steps are no different than the steps that Apple has taken to secure its App Store.

Bottom Line

In the final analysis, browser security is determined by the operation of the entire ecosystem. And since very few of us put a sniffer on the network whenever we install a plugin, we are forced to “trust” that these add-ons perform as documented. So if your overall browser security is based upon trust, then who do you trust to keep your systems secure? Will you trust companies that have a keen interest in securing ‘good’ data from you and your systems? Or will you trust someone who has no such vested interests?

DNS Security: The Final Chapter, For Now

DNS Security Challenges

As a man of faith, I am often confronted with one sorry truth: my desires often exceed my patience. So it was with my extended DNS security project. I have written three out of four articles about DNS security. But I have taken a detour from my original plan.

The first article that I wrote outlined the merits of using the Trusted Recursive Resolver that showed up in Firefox 61. I concluded that the merits of encrypting DNS payloads were obvious and the investment was a necessary one – if you want to ensure privacy. The second article outlined the merits (and methods) of using DNS-over-HTTPS (DoH) to secure references to external referrers/resolvers. In the third article, I outlined how I altered my DNS/DHCP infrastructure to exploit dnsmasq.

That left me with the final installment. And the original plan was to outline how I had implemented Unbound as a final means of meeting all of my DNS security requirements. Basically, I had to outline why I would want something other than a simple DNS referral agent. That is a great question. But to answer that question, I need to provide a little background.

DNS Background

The basic DNS infrastructure is a hierarchical data structure that is traversed from top to bottom (or right to left when reading a domain name). When a customer wants to know the IP address of a particular device, the root servers are queried first to find the servers for the relevant top-level domain (TLD). So if looking for www.lobostrategies.com, one must first ask a root server where the ‘.com’ registry lives. The authoritative server for ‘.com’ domains contains a reference to the authoritative DNS server for lobostrategies.com (i.e., GoDaddy).

The next step is to search the authoritative domain server for the address of the specific server. In my case, GoDaddy would be queried to determine the address for www.lobostrategies.com. GoDaddy would either answer the question or send it to a DNS server supporting the next lower level of the domain hierarchy. Since there are no subdomains (for lobostrategies.com), GoDaddy returns the IP address.
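You can watch this walk down the tree with a few lines of code. Here is a simplified Python sketch using the third-party dnspython package (assumed installed); it starts at a root server and follows referrals until it finds an answer:

```python
import dns.message
import dns.query
import dns.resolver

def iterate(name, server="198.41.0.4"):          # a.root-servers.net
    while True:
        response = dns.query.udp(dns.message.make_query(name, "A"), server, timeout=5)
        if response.answer:                      # we reached the authoritative server
            return response.answer[0].to_text()
        # Otherwise follow the referral: take a nameserver from the AUTHORITY
        # section and look up its address (a shortcut; a real resolver would
        # use glue records and iterate for this address, too).
        ns_name = response.authority[0][0].target.to_text()
        server = dns.resolver.resolve(ns_name, "A")[0].address

print(iterate("www.lobostrategies.com"))
```

A production resolver also handles TCP fallback, retries, and caching. But referral-chasing is the heart of the algorithm.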

The ISP Advantage

The process of searching from the root to the proper branch (that answers the query) is called recursive searching. And it is the heart of how DNS works. But this burden is not carried by every user. Can you imagine if every user queried the top-level domain servers? It would be an incredible volume of queries. Instead, the results of most queries are stored (i.e., cached) at lower levels of the tree. For example, companies like your cable ISP (or Google, or Cloudflare, or OpenDNS) will be your ‘proxy’ for all requests between you and the host name that you want to resolve into an IP address.

Your ISP has almost every result of top-level domain queries already stored in its cache. So your answer would be delivered with at least one fewer step than it would have required for you to ask the question yourself. And since most public DNS resolvers have massive results already cached, you would never have to go to GoDaddy to get the IP address for my website. So rather than issuing a query to the root and a query to GoDaddy, your ISP can just provide the address directly to you – cutting your name-search activity in half. Therefore, most users consult a DNS service that does the searching for them.
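The mechanism behind that speedup is nothing more than a TTL-bound cache. A toy Python sketch of the idea:

```python
import time

class DnsCache:
    """Toy TTL-bound cache: the whole secret behind a fast shared resolver."""

    def __init__(self):
        self._store = {}

    def get(self, name):
        entry = self._store.get(name)
        if entry and entry[1] > time.time():   # still within its TTL
            return entry[0]                    # cache hit: no upstream query needed
        return None                            # miss (or expired): must go ask

    def put(self, name, address, ttl):
        self._store[name] = (address, time.time() + ttl)
```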

Hidden Costs

But think about what it would cost you to do the search yourself. The first time you query a domain, it takes time (and cache memory). But after those one-time ‘charges’ are paid, isn’t running your own recursive DNS a low-cost investment? Yes it is, and no it isn’t. If you run your own recursive DNS server, then you have to pay the entry costs (in hardware and in slower initial resolve times). But the real cost of DNS is the hidden cost of privacy.

If you ‘trust’ someone else to do this for you, then they have access to all of your DNS queries. They know who you are and what you are trying to see. They won’t know what you saw in your query to a specific site. But they will know that you wanted to know where a particular site could be found.

Bottom line: To use a DNS resolver/referrer, you are necessarily letting that service provider know about your probable website destinations.

By using a recursive DNS, you are only letting the domain owner for a site know that you are looking for their address. Google would only get query data when you were intending to connect to Google services. So Google would only see a subset of your DNS queries – thereby improving DNS security.

On the flip side, you really do want a service that will encrypt the DNS payload. Recursive DNS tools (like the Unbound tool in DD-WRT and Pi-hole) do not yet support robust encryption for their recursive queries. Indeed, only two DNS providers currently support DoH (i.e., Google and Cloudflare). By electing to run a recursive DNS that you manage yourself, you are limiting your ability to mask DNS search requests as they move across the Internet. In practice, this means that you will have a higher risk of being exploited by a man-in-the-middle (MITM) DNS attack. And this includes things like DNS spoofing.
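If you want to see what an encrypted query looks like in practice, Cloudflare also exposes a JSON flavor of DoH that is easy to poke at from a script. A minimal Python sketch (the endpoint and headers follow Cloudflare's documented JSON API; the third-party requests package is assumed):

```python
import requests

def doh_query(name, rtype="A"):
    # Cloudflare's JSON flavor of DNS-over-HTTPS: the query and answer travel
    # inside an ordinary TLS-encrypted HTTPS exchange.
    response = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": name, "type": rtype},
        headers={"accept": "application/dns-json"},
        timeout=5,
    )
    return [answer["data"] for answer in response.json().get("Answer", [])]

print(doh_query("www.lobostrategies.com"))
```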

The Choice

So I was faced with a simple choice: 1) I could implement a solution with encryption to a trusted recursive DNS provider, or 2) I could pay the upfront price of running my own recursive DNS. When I started to write this series of articles, I was feeling very distrustful of all DNS providers. So I was leaning towards running my own recursive DNS and limiting the search data that my selected provider could exploit. But the more that I thought about it, the more that I questioned that decision. Yes, I don’t trust companies to place me above their bottom line. And I don’t want the ‘gubmint’ to have a choke point that they could exploit. After all, didn’t the 2016 presidential campaign demonstrate that both parties want to weaponize information technology?

But the fear of all companies and all politicians is a paranoid conceit. And I don’t want to be the paranoid old man who is always looking over his shoulder. More importantly, the real threat is the proven risk that script-kiddies, hackers, and criminals might target my data while it is in transit. So as I weighed a paranoid fear against a real fear, I started moving towards desiring encrypted DNS queries more than limiting third-party knowledge of my queries.

The Choice Deferred

Just as I was about to implement changes based upon a re-assessment, I inadvertently shot myself in the foot. I was listening to a podcast about information security (i.e., Security Now by Steve Gibson) and I heard about a resurgence of router-based password exploits. I had long ago switched to a password manager. So I wanted more entropy in my randomized password. I checked online to see if there were any password standards for DD-WRT. I found nothing. So I figured that if the software didn’t like a password, then it would stop me before implementing the change.

I plunged ahead and created a 64-character randomized password. The software took the change – even validating the password against a second entry of the password. But when I went to log back in to the router, my shiny new password was rejected.

Wash, Rinse, Repeat

I was getting frustrated. I looked to see if there was any way to revert back to an older password. But there was no such capability. And the only way to log back into my router would be to factory-reset the device – which I did. But it took a very long time to recover (~2.5 hours). So after a few hours, I was back to where I started.

Before I tried again, I backed up the non-volatile memory (nvram). Then I decided to try a shorter password. After failing with the 64-character password, I tried a 32-character password. Unfortunately, it resulted in an inaccessible router. So I restored my router and then I was back to the starting line, again.

After researching the issue for forty-five minutes, I found someone who had run into the same problem. They had solved it by using a twenty-two (22) character password. So I earnestly tried to change the password to an eighteen (18) character password. I was hopeful; my hopes were crushed. But I was getting good at doing the factory reset. So after three attempts and almost five (5) hours of effort, I went back to the old password format that I had used before. Lo and behold, it worked.
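For anyone who wants to repeat the experiment, generating a random password of a specific length (so you can probe what your router will actually accept) takes only the Python standard library:

```python
import secrets
import string

def make_password(length):
    # Letters and digits only – a conservative alphabet, since (apparently)
    # some router firmware chokes on longer or more exotic passwords.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

for length in (64, 32, 22):   # the lengths from the saga above
    print(length, make_password(length))
```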

Overall DNS Security Improvements

After spending the bulk of an evening on this effort, I was glad to be back on track. But I had a fresh appreciation for doing as little as possible to my router and my DNS infrastructure. I already had a working DNS that used DoH to communicate with Cloudflare. And I had done enough research to be less skeptical of the Cloudflare DNS (when compared to ISP DNS and Google DNS).

I now have a DNS service separated from my router. And the DNS and DHCP systems are running on a unified system – thus making reverse-lookups and local queries far more effective. Finally, I have a DNS request facility that should be more secure against man-in-the-middle attacks. So without much more fanfare, I will call this DNS security battle a victory – for now. And I will keep my eyes wide open – both against corporate/government exploitation and against self-inflicted computing wounds!

Router Security: Another One Bites The Dust

Poor Router Security Assists Cryptojacking
Hackers are often successful because their victims are not very vigilant.

One thing that hackers have learned is that most people don’t update the software on their devices. This includes users failing to implement fixes that improve router security and close router vulnerabilities. Last week, we learned of yet another example where hackers exploited a ‘solved’ vulnerability to inject malware onto systems. In this case, bad actors were using MikroTik routers as a means of spreading the Coinhive malware.

But as is the case these days, the malware did not just exploit inadequate router security practices. The malware used the compromised routers to re-write web pages in order to propagate the coin mining software to unsuspecting sites/users. As dreadful as this sounds, MikroTik had a patch for this after their last serious exploit. It’s too bad that the patch was never pushed to their customers’ devices. If this had been done automatically (or if the customers had done it for themselves), then there would never have been the most recent Coinhive exploit.

Similarly, if the users had ensured secure connections to all web sites (by using https), then there is a good chance that the compromised sites (and connections to other untrusted sites) might have been noticed. In addition, if users had blocked active scripting from within their browsers, then Coinhive would never have gained a foothold.

The solutions to these problems are relatively simple.

Take These Steps:

  1. If your router supports automatic updating, then activate that feature. Don’t wait to turn automatic updating on. Do it now!
  2. Always use https when accessing any web site. This would have hindered the propagation of the infection. The Electronic Frontier Foundation (EFF) has a good tool to ensure this called HTTPS Everywhere. [A quick way to check a site is sketched after this list.]
  3. Disable scripting for all sites EXCEPT those that you trust. You can do this by using tools like NoScript or uMatrix.
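If you are curious whether a site pushes visitors to https, you can test it from a script. A minimal Python sketch (using the third-party requests package; the result depends on the site's redirect rules):

```python
import requests

def upgrades_to_https(host):
    # Start from the plain-HTTP URL, follow any redirects,
    # and report whether we end up on an HTTPS page.
    response = requests.get(f"http://{host}/", timeout=10, allow_redirects=True)
    return response.url.startswith("https://")

print(upgrades_to_https("lobostrategies.com"))
```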

Finally, we are now aware that routers are a common vector for hackers to exploit. That’s because everyone has a router and very few routers have automatic updating capabilities. Knowing that very few people take the time to update their own routers, most router vendors should require automatic updates – unless de-activated by the user. By the way, this is what Microsoft finally did to address some dramatic security weaknesses in the Windows operating system.

Don’t rest upon your past efforts to protect your assets. And whatever you do, don’t be the slowest gazelle. Update your infrastructure.

DNS Security At The Edge

DNS-Security-From-The-Edge
DNS Security From The Edge

In my third installment of this series about DNS security, I am focusing on how we can audit all DNS requests made in our network. In the second installment, I focused on secure communications between DNS servers. And I highlighted the value of DNSSEC and the potential of DNS-over-HTTPS (DoH). I outlined some of the changes that we’ve made – including the deployment of dnsmasq on our router and on our Pi-hole. Today, I will focus upon ensuring that end-to-end auditing is possible – if desired.

As noted previously, we upgraded our router (i.e., a Netgear R8000 X6) from stock firmware to DD-WRT. We did this for numerous reasons. Chief among these reasons was the ability to set up an OpenVPN tunnel from our network out to our VPN provider’s endpoints. But a close second was a desire to run dnsmasq on the router. Since we haven’t chosen to move DHCP functions to the Pi, we wanted a DHCP service that would have better capabilities. More importantly, we wanted to update DHCP option #6 so that we could control what name servers our clients would use. In our case, we knew that all clients would use our Pi-hole as their name server.

After we realized that DNS options on the admin panel didn’t apply to DHCP clients, we figured out how to set name servers for all of our DHCP clients. Once done, all clients began direct interaction with the Pi-hole. [They had previously used the router as a DNS proxy to our Pi-hole system.]
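For anyone attempting the same thing, the relevant dnsmasq settings look something like the sketch below (the addresses are placeholders for your own Pi-hole and DHCP pool; on DD-WRT, these lines would go into the additional DNSMasq options box):

```
# Hand every DHCP client the Pi-hole as its one and only DNS server
# (this is DHCP option 6, the domain-name-server option):
dhcp-option=option:dns-server,192.168.1.10

# A typical DHCP pool definition, for context:
dhcp-range=192.168.1.100,192.168.1.200,12h
```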

But as is often the case, there were unforeseen consequences. Specifically, reverse lookups (for static addresses) failed. This meant that DNS security would suffer because we couldn’t correlate elements across the entire request chain. We could have moved dhcpd to the Pi-hole. But we wanted to have a DNS fall-back – just in case the Pi-hole failed. So we changed our processes for assigning static addresses. Now, we add them to the router as well as to the /etc/hosts file on the Pi-hole. Once implemented, we had clear visibility between request origination and request fulfillment. [Note: We will probably turn this off as it defeats the very anonymity that we are trying to establish. But that is for another day.]

So what’s left?  In the first two articles, we set up DNS authentication wherever possible. We also established DNS payload encryption – using DNS-over-HTTPS. Now we have a means of authenticating DNS server communications. And we have an encrypted payload. But there is one last need: we have to limit the ‘data at rest’ stored by each DNS resolver.

Consider this: If we validate our connection to Google, and we encrypt the traffic between us, then Google can still look at all of the DNS queries that it receives from us. That is fine if you trust your principal DNS resolver. But what if you don’t? A more secure process would be to ask for name resolution directly, rather than through a trusted (or un-trusted) intermediary. In my next article, I will discuss how we implemented a recursive DNS resolver alongside our Pi-hole. Specifically, I’ll talk about using Unbound to limit the amount of ‘data at rest’ that we leave with any single DNS service.

DNS Security Is A Necessary Key To Privacy

Comparison of DNS Services


Yesterday, I wrote about how Mozilla is updating Firefox to improve DNS security. But my conclusion was that Mozilla may have designed a system with a single point of failure. In fairness, this assessment was far too simplistic. Today, I want to expand on my thoughts.

The Firefox implementation assumes something that is probably false. It assumes that most people are using Firefox. It also assumes that all Firefox users can choose an appropriate resolver/referrer to meet their specific needs. I would argue that the first assumption is patently wrong while the second assumption is altogether naive. As noted yesterday, I would have much preferred that Mozilla be more transparent about their proposed changes. Also, rather than assume that the code is open and thus reviewed, Mozilla could have asked for more extensive input. [Note: I suspect that Mozilla was transparent with a somewhat limited community. I just wish that their design had been explicitly shared.]

The real problem that I have with their approach is that it is a Firefox-only solution. No, I don’t expect Mozilla to solve this problem for their competitors. But most organizations need a broader solution that will meet everyone’s needs. Enter dnsmasq. In my case, I have implemented DNS over HTTPS (DoH) as my mechanism to improve DNS security. 

I am quite fortunate: I am using a DNS server that was just updated. The Pi-hole team has just released Pi-hole 4.0. And a DNS service provider (Cloudflare) has released an AMD64 and ARM DNS-over-HTTPS implementation. [Note: Their solution is in binary form. So I will add my voice to the chorus asking for the software to be published under a free/open license. Until that happens, I’m testing the system to see how it performs.]

So what am I seeing thus far?

After switching to cloudflared on Pi-hole 4.0, I ran some benchmarks. And as expected, there was quite a bit more activity on my DNS server. But the overall DNS response time (i.e., the server at 10.42.222.22) was quite acceptable. I want to get about a week’s worth of data. But at this moment, I am very hopeful that the software will maintain acceptable performance levels.
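My benchmark was nothing fancy. A sketch of the kind of timing loop I mean, in Python with the third-party dnspython package (the first address is my Pi-hole, so substitute your own):

```python
import time

import dns.message
import dns.query

def time_query(server, name="example.com"):
    # One round trip to the given DNS server, in milliseconds.
    query = dns.message.make_query(name, "A")
    start = time.perf_counter()
    dns.query.udp(query, server, timeout=5)
    return (time.perf_counter() - start) * 1000

for server in ("10.42.222.22", "1.1.1.1"):   # my Pi-hole vs. Cloudflare direct
    samples = [time_query(server) for _ in range(10)]
    print(server, f"avg {sum(samples) / len(samples):.1f} ms")
```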

So what should you do? If you’re bold, then use a test system and try it out for yourself. Or keep close tabs on folks who are testing this technology. At the current time, I am thrilled to close down yet another vector of surveillance.
 

TRR = Totally Risky Referrer

TRR Is Anything But Trusted

When Homer Simpson says “d’oh”, you know that something stupid is about to happen. Unfortunately, I believe that the same thing is true about the upcoming Firefox feature called DNS over HTTPS (i.e., DoH). The Firefox developers noted a real problem: DNS queries aren’t secure. This has been axiomatic for years. That’s why DNS developers created DNSSEC. But DNSSEC is taking forever to roll out. Consequently, the Firefox developers baked the Trusted Recursive Resolver (TRR) into Firefox 61. [Note: TRR has been available since Firefox 60. But TRR will move from an experiment to a reality as Firefox 61 rolls out across the Internet.]

Background

One of the key design points of TRR is the encapsulation of DNS data in a secure transport mechanism. Theoretically, this will limit man-in-the-middle attacks that could compromise your browsing history (or redirect your browser altogether). Of course, theory is not always reality. Yes, SSL/TLS is more secure than plain text. But because it is so widely used, it is burdened by the need to retain backward compatibility. Nevertheless, it remains a real improvement. And security-conscious consumers can implement TRR even if their local DNS provider doesn’t currently offer DNSSEC.

Risk

So why is TRR so risky? That’s simple: Mozilla is implementing TRR with a single recommended resolver: Cloudflare. I don’t think that anyone has an axe to grind with Cloudflare. From all that I have read, Cloudflare has never used customer data exclusively for its own benefit. That’s not true for Google, or OpenDNS, or a lot of other DNS providers. Of course, Cloudflare is a relative newcomer. So their track record is limited. But the real issue is that Mozilla has designed a system with a single point of failure – and a single choke point for logging and control.

Mitigation

Fortunately, Mozilla has enabled changing the TRR mode and TRR URI. Unfortunately, they are currently managed only through the about:config interface. That’s fine for a technician. But it is a dreadful method for end users. I am hopeful that Mozilla will provide a better interface for users. And I certainly hope that it is implemented on an “opt-in” basis. If they don’t, then folks who use their own DNS (e.g., every Pi-hole user) or folks who specify a different public provider than Cloudflare (e.g., Google, OpenDNS, DNS.Watch, etc.) will be forced to “touch” every workstation.
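For the technicians: the two preferences in question can be set through about:config or a user.js file. A hedged sketch (the mode values reflect my reading of Mozilla's documentation; verify them before deploying):

```
// user.js sketch for Firefox's TRR preferences.
// network.trr.mode: 0 = off (default), 2 = DoH first (fall back to native
// DNS), 3 = DoH only, 5 = explicitly disabled.
user_pref("network.trr.mode", 2);
// Point TRR at the resolver of your choosing instead of the default:
user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");
```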

Bottom Line

Is Firefox acting badly? Probably not. After all, they are trying to close a huge DNS hole that infrastructure providers have yet to close (i.e., DNSSEC). Nonetheless, their approach is ham-handed. Mozilla needs to be transparent with the why’s and when’s – and they need to trust their users to “do the right thing.”