"Secure Introduction of Internet-Connected Things" (was Re: [webappsec] Agenda for MONDAY Teleconference 2014-10-20, 12:00 PDT)

# Mike West (a year ago)

Forking the thread, adding felt@ and palmer@, as they will surely have opinions.

I like the direction, FWIW, but I haven't at all thought through the problem. Chris has, at least a bit: noncombatant.org/2014/10/12/brainstorming-security-for-the-internet-of-things is worth reading.

-mike

-- Mike West mkwst@google.com

Google+: mkw.st+, Twitter: @mikewest, Cell: +49 162 10 255 91

Google Germany GmbH, Dienerstrasse 12, 80331 München, Germany Registergericht und -nummer: Hamburg, HRB 86891 Sitz der Gesellschaft: Hamburg Geschäftsführer: Graham Law, Christine Elizabeth Flores (Sorry; I'm legally required to add this exciting detail to emails. Bleh.)

# Chris Palmer (a year ago)

On Mon, Oct 20, 2014 at 10:24 AM, Mike West mkwst@google.com wrote:

I think it is a bad idea to have users go through the ordinary "untrusted certificate" or "unknown authority" flows in the browser to use these devices, because it trains users to ignore these warnings and puts back pressure on UA authors who want to make these experiences increasingly strict.

I call that the Dual-Mode Client solution in my post. I agree that we can't, even for 1 more minute, torture users with the Public Internet Mode for private networks. It's awful, and Adrienne and I end up closing comments on the bug reports due to the swearing: :|

Although I say so only obliquely, I am really against the custom-client solution. Not because I am a web platform evangelist, but because:

  • Vendors already face incredible cost pressure, and quality is abysmal as noted in the post. Multiplying client platforms multiplies that problem.

  • Vendors are likely to reduce the multiplier by abandoning certain client platforms. "Oh, yeah, you'll need an iOS device to manage your refrigerator." No.

  • We know these devices get abandoned anyway. An open REST API, or even just a scrapable set of pages, allows the open community to pick up development when the vendor leaves town. It's much easier to reverse-engineer with Fiddler than with a custom Wireshark module.

  • I'm really bitter about that kernel module I had to install for the file server appliance. :) Kernel modules should be absolutely beyond the pale, for anything. But custom clients make it seem more acceptable to weird vendors...

The idea I was tossing around would be to have some different kind of secure introduction ceremony to replace the untrusted certificate dialog for hosts on the local network only. Perhaps something like Bluetooth / WPS pairing, where the user could get a page that tells them this is a locally connected device and they have to enter a pairing code to trust it, with other-than-standard HTTPS UX treatment following, but less strict rules about mixed content blocking, etc. than an untrusted or HTTP connection would receive.

Yes.
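One hypothetical shape for that ceremony (an illustration only, not something Mike specified): the device prints a short code derived from its certificate's public key, and the UA recomputes the code from the certificate it actually received before accepting the introduction. A minimal sketch, where the digit count, the SHA-256 truncation, and the comparison flow are all assumptions:

```python
import hashlib

def pairing_code(cert_der: bytes, digits: int = 6) -> str:
    """Derive a short, human-comparable code from the device certificate."""
    digest = hashlib.sha256(cert_der).digest()
    return str(int.from_bytes(digest[:8], "big") % 10**digits).zfill(digits)

# Device side: print pairing_code(own_cert_der) on the label or setup screen.
# UA side: ask the user to type the code printed on the device, and mark the
# origin as "introduced" only if it equals pairing_code(presented_cert_der).
```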

There are a number of moving parts involved in getting this right:

  • definitely UI, an area the W3C doesn't have a great history in, though perhaps we can describe the requirements without prescriptively specifying them
  • thinking about what constitutes a "locally attached network device", how to detect and verify that, and how to manage subsequent accesses over a WAN (see the sketch after this list)
  • some Fetch rules similar to Mixed Content
  • perhaps a certificate extension to identify these devices
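For the second bullet, a deliberately naive starting point (an assumption here, not a proposal for the actual rules) is to treat hosts that resolve only to RFC 1918, link-local, or loopback addresses as "locally attached":

```python
import ipaddress
import socket

def looks_locally_attached(host: str) -> bool:
    """Naive heuristic: every resolved address is private, link-local, or loopback."""
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    addrs = [ipaddress.ip_address(info[4][0].split("%")[0]) for info in infos]
    return bool(addrs) and all(
        a.is_private or a.is_link_local or a.is_loopback for a in addrs
    )

print(looks_locally_attached("192.168.1.1"))  # True: RFC 1918 space
print(looks_locally_attached("example.com"))  # False: public addresses
```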

I'd really, really like to keep any spec work minimal. The sketch of when to go into IoT Mode I give in the post is likely sufficient, at least for a start. And I definitely don't think that secure UX design by committee is going to work.

# Brad Hill (a year ago)

Sure, but I think that if we want to produce something that actually works, it will be of great help to have a specified, agreed-upon standard, so that device manufacturers are motivated to build something they know will work in any browser on every platform. It can be a very small spec with no pictures, I think.

# Adrienne Porter Felt (a year ago)

Does it make sense for this to begin as spec work and then get pushed to browsers, or should we encourage each browser vendor to start working on the problem? Then once there are reasonable ideas out there, we can start working on specs and moving toward an agreed-upon standard.

# Chris Palmer (a year ago)

On Mon, Oct 20, 2014 at 11:53 AM, Adrienne Porter Felt felt@google.com wrote:

Does it make sense for this to begin as spec work and then get pushed to browsers, or should we encourage each browser vendor to start working on the problem? Then once there are reasonable ideas out there, we can start working on specs and moving toward an agreed-upon standard.

The latter. Specifications must come after trial-and-error implementation. Otherwise you get design by committee.

# Brad Hill (a year ago)

Sure, either way works, but if we want it in scope for the WebAppSec charter so we can work on it in, say, the coming year, we need to decide on that pretty soon.

# Brad Hill (a year ago)

Or you get ideas from a bigger set of stakeholders and experts than any one company might have. :)

# Chris Palmer (a year ago)

On Mon, Oct 20, 2014 at 12:05 PM, Brad Hill hillbrad@gmail.com wrote:

Or you get ideas from a bigger set of stakeholders and experts than any one company might have. :)

If they are actually stakeholders and experts, sure.

# Mike West (a year ago)

On Mon, Oct 20, 2014 at 9:09 PM, Chris Palmer palmer@google.com wrote:

On Mon, Oct 20, 2014 at 12:05 PM, Brad Hill hillbrad@gmail.com wrote:

Or you get ideas from a bigger set of stakeholders and experts than any one company might have. :)

If they are actually stakeholders and experts, sure.

The IETF has made a cynic of you.

# Ángel González (a year ago)

Chris Palmer wrote:

On the bright side, it’s very good that the machine generates a fresh key every time you re-enable HTTPS: That means that the key is not static, or identical on all the routers of the same make or model.

That still doesn't mean there won't be duplicate keys. It is hard for a device to have good entropy after a reset. [1] Did you try resetting it a few times and comparing the generated keys?

And obviously things like WiFi keys MUST NOT be produced by jumbling around the SSID and a couple of public bytes. Even "1234" would be preferable.

Vendors SHOULD add good random seeds to device storage at the factory.
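For what it's worth, a minimal sketch of the usual seed-file pattern, assuming a hypothetical factory-provisioned seed at /factory/random-seed and a writable /var partition on a Linux-based device; writing to /dev/urandom mixes the bytes into the kernel pool (without crediting entropy):

```python
import os

FACTORY_SEED = "/factory/random-seed"  # hypothetical, written once at the factory
RUNTIME_SEED = "/var/lib/random-seed"  # hypothetical, rewritten on every boot

def mix_seeds_into_kernel_pool() -> None:
    """Feed persisted seeds to the kernel CSPRNG before any key generation."""
    with open("/dev/urandom", "wb") as pool:
        for path in (FACTORY_SEED, RUNTIME_SEED):
            try:
                with open(path, "rb") as f:
                    pool.write(f.read())
            except FileNotFoundError:
                pass

def persist_fresh_seed() -> None:
    """Save a new seed so the next boot also starts with usable entropy."""
    with open(RUNTIME_SEED, "wb") as f:
        f.write(os.urandom(64))

mix_seeds_into_kernel_pool()
persist_fresh_seed()
# Only after this point should the device generate its HTTPS key pair.
```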

CSRF problems

Although this is primarily a task for router vendors, browsers can help mitigate it [2] [3].
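On the vendor side (separate from the browser-side work in [2]/[3]), even a crude Origin check would block most cross-site requests against the admin interface. A sketch, with hypothetical addresses and endpoint:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse

ALLOWED_HOSTS = {"192.168.1.1", "router.local"}  # hypothetical admin addresses

class AdminHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Reject state-changing requests whose Origin (or Referer) is not the
        # device itself; browsers typically attach these headers cross-site.
        source = self.headers.get("Origin") or self.headers.get("Referer") or ""
        if urlparse(source).hostname not in ALLOWED_HOSTS:
            self.send_error(403, "Cross-site request rejected")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"settings updated")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AdminHandler).serve_forever()
```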

if a device is only marketable if its price point is so low that it cannot be secure, perhaps it should disable itself after some reasonable life-time

-1

Users will perceive it as planned obsolescence for business reasons, and I wouldn't be surprised if producers treated it like that, too.

On Mon, Oct 20, 2014 at 10:24 AM, Mike West mkwst@google.com wrote:

I think it is a bad idea to have users go through the ordinary "untrusted certificate" or "unknown authority" flows in the browser to use these devices, because it trains users to ignore these warnings and puts back pressure on UA authors who want to make these experiences increasingly strict.

I call that the Dual-Mode Client solution in my post. I agree that we can't, even for 1 more minute, torture users with the Public Internet Mode for private networks. It's awful, and Adrienne and I end up closing comments on the bug reports due to the swearing: :|

The idea I was tossing around would be to have some different kind of secure introduction ceremony to replace the untrusted certificate dialog for hosts on the local network only. Perhaps something like Bluetooth / WPS pairing, where the user could get a page that tells them this is a locally connected device and they have to enter a pairing code to trust it, with other-than-standard HTTPS UX treatment following, but less strict rules about mixed content blocking, etc. than an untrusted or HTTP connection would receive.

Yes.

+1

I feel it may be possible to combine this "secure introduction on the local network" with handling the HTTPS errors you get when there is no internet exit (maybe a WiFi captive portal is in the way, or simply "I'm still establishing the PPP connection"). Although that would require a .

Since many of those routers are also the DHCP and DNS server (and sometimes intercept DNS queries), there could be a procedure similar to DANE, such as DHCP providing a trust anchor for *.local.

If we assume that the rightful owner will have connected to the router before an attacker could tamper with it, the remaining issue is an attacker-induced disconnection, so the anchor could be stored with the connection settings (easy for WiFi: match on the SSID; not so much for Ethernet, but there it is much harder to win a race against the router's DHCP server if you are wired to it).
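A minimal sketch of that storage idea, with a hypothetical on-disk layout: pin the router's certificate on first contact and key the pin by SSID, so a different network (or a swapped-in device) fails the check:

```python
import hashlib
import json
import os

PIN_STORE = os.path.expanduser("~/.config/ua/local-device-pins.json")  # hypothetical path

def fingerprint(cert_der: bytes) -> str:
    # A real UA would pin the SubjectPublicKeyInfo; hashing the whole DER
    # certificate keeps this sketch dependency-free.
    return hashlib.sha256(cert_der).hexdigest()

def remember(ssid: str, host: str, cert_der: bytes) -> None:
    """Store the pin for this host alongside the connection settings (by SSID)."""
    pins = {}
    if os.path.exists(PIN_STORE):
        with open(PIN_STORE) as f:
            pins = json.load(f)
    pins.setdefault(ssid, {})[host] = fingerprint(cert_der)
    os.makedirs(os.path.dirname(PIN_STORE), exist_ok=True)
    with open(PIN_STORE, "w") as f:
        json.dump(pins, f, indent=2)

def matches(ssid: str, host: str, cert_der: bytes) -> bool:
    """True only if the presented certificate matches the pin stored for this SSID."""
    if not os.path.exists(PIN_STORE):
        return False
    with open(PIN_STORE) as f:
        pins = json.load(f)
    return pins.get(ssid, {}).get(host) == fingerprint(cert_der)
```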

[1] factorable.net/weakkeys12.conference.pdf
[2] bugzilla.mozilla.org/show_bug.cgi?id=354493 (2006 bug)
[3] code.google.com/p/chromium/issues/detail?id=378566

# Chris Palmer (a year ago)

On Tue, Oct 21, 2014 at 3:57 PM, Ángel González angel@16bits.net wrote:

On the bright side, it’s very good that the machine generates a fresh key every time you re-enable HTTPS: That means that the key is not static, or identical on all the routers of the same make or model.

That still doesn't mean there won't be duplicate keys. It is hard for a device to have good entropy after a reset. [1] Did you try resetting it a few times and comparing the generated keys?

I know. No, I did not check; I was only concerned that the key was not hard-coded. Given the many problems with Linux's CRNG, especially but not only on a device like that, I didn't bother checking any further.

if a device is only marketable if its price point is so low that it cannot be secure, perhaps it should disable itself after some reasonable life-time

-1

Users will perceive it as planned obsolescence for business reasons, and I wouldn't be surprised if producers treated it like that, too.

Otherwise it's planned unsafety.

Perhaps vendors could open source their abandonware. Even then, though, most fielded devices would still be unsafe.

# Jim Manico (a year ago)

If you use an ephemeral cipher suite, which most browsers support *well* today, then the server-side SSL/TLS key will NOT be used to encrypt data at all.
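To illustrate the point, a minimal sketch of a TLS server restricted to forward-secret (ECDHE) suites, so the long-lived key in the placeholder server.pem/server.key files only signs the key exchange and never transports session keys:

```python
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")  # placeholders
ctx.set_ciphers("ECDHE+AESGCM")  # ephemeral key exchange only (TLS 1.2)
# (TLS 1.3 suites are always ephemeral and are not affected by set_ciphers.)

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()
        print("negotiated:", conn.cipher())  # e.g. an ECDHE-RSA-AES128-GCM suite
        conn.close()
```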

-- Jim Manico @Manicode (808) 652

# Jeffrey Walton (a year ago)

if a device is only marketable if its price point is so low that it cannot be secure, perhaps it should disable itself after some reasonable life-time

-1

Users will perceive it as planned obsolescence for business reasons, and I wouldn't be surprised if producers treated it like that, too.

Otherwise it's planned unsafety.

Perhaps vendors could open source their abandonware.

Dan Geer posits that the code should be seized and placed into open source.

From www.lawfareblog.com/2014/04/heartbleed-as-metaphor:

Suppliers that refuse both field upgradability and open source access to their products should be said to be in a kind of default by abandonment. Abandonment of anything else of value in our world has a regime wrapped around it that eventually allocates the abandoned car, house, bank account, or child to someone new. All of the technical and procedural fixes to the monoculture problem need that kind of backstop, viz., if you abandon a code base in common use, it will be seized. That requires a kind of escrow we’ve never had in software and digital gizmos, but if we are to recover from the fragility we are building into our “digital life,” it is time...

# David Rogers (a year ago)

...ok. Back in the real world - what you really need is to be able to have a mechanism to reliably identify the device and therefore be able to take a decision on whether it is insecure for whatever reason. Abandonment is going to happen anyway (I've seen plenty of open source projects abandoned too!). If it is critically insecure there are effective mechanisms that have worked in the browser world (for example blocking IE6 on websites) to stop it accessing the internet and that change user behaviour in a good way.

We're working on a number of IoT projects at the moment. Secure enrolment / commissioning is difficult, and I would be careful about assuming that the user can physically enrol each device manually. I don't think this is practical beyond the home context (which is just one IoT market), and it certainly doesn't scale. You also can't definitively say that the user owns or has access to the gateway. Key storage is definitely a difficult issue: the devices are mostly in physically accessible 'hostile' environments, and, as someone else has said, I don't think we're going to have secure storage or a TEE on the lowest-common-denominator devices.

In our projects in the mobile world, there are some solutions available that make things a bit easier, but they come at a price compared to the hub-and-spoke model, where ISM or 802.15.4 devices sit behind a router.

Having some sort of suicide pill for a device is dangerous from a security perspective and isn't acceptable for purchasers.

# Chris Palmer (a year ago)

On Wed, Oct 22, 2014 at 2:30 AM, David Rogers

david.rogers@copperhorse.co.uk wrote:

...ok. Back in the real world - what you really need is to be able to have a mechanism to reliably identify the device and therefore be able to take a decision on whether it is insecure for whatever reason. Abandonment is going to happen anyway (I've seen plenty of open source projects abandoned too!). If it is critically insecure there are effective mechanisms that have worked in the browser world (for example blocking IE6 on websites) to stop it accessing the internet and that change user behaviour in a good way.

Yeah, about IE 6 (and Windows XP < SP3)... those are great examples of things that should have gone away a long time ago, and they support the argument in favor of device self-euthanasia. Precisely because we are saddled with them, we can't move forward: no SNI, no SHA-256 for certificate signatures... Not that it matters, since Microsoft has not been patching (or even been able to patch) many, many vulnerabilities in a product that old. (Many patches make it only into versions >= 7, or even >= 8. And I don't blame MS one bit for that.)

Even just dropping all support for SSL v3 (now fully, entirely dead) and RC4 is not a decision that a sane person takes lightly. And those things are older than IE6!

Having some sort of suicide pill for a device is dangerous from a security perspective and isn't acceptable for purchasers.

Enjoy your RC4 in 2025, then. :)

Look, we all agree that self-euthanasia (or, less drastically, self-capability-reduction) is not ideal. But the alternative is a commitment to fully support devices for 10+ years. I'd love it if everyone did that.

# David Rogers (a year ago)

See [DAVID]:
