Improving CORS security

# James Kettle (11 days ago)

After encountering a significant number of websites that have serious vulnerabilities due to their use of CORS, I'd like to float a few ideas for improvements to the CORS specification and its implementation in browsers.

Due to limitations in the CORS spec/implementation, numerous websites are forced to dynamically generate the Access-Control-Allow-Origin header based on the incoming Origin header. This creates a situation where websites have security-critical functionality that depends on parsing user input. We could reduce the number of sites forced to do dynamic generation by:

  • Enabling static trust of multiple origins by supporting a space-separated list of origins
  • Enabling static trust of all subdomains by supporting the use of partial wildcards like *.example.com
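
To make the contrast concrete, here is a minimal, framework-agnostic sketch (all names illustrative, not from the thread) of the dynamic-generation pattern the current spec forces on multi-origin sites:

```python
# A sketch of dynamic ACAO generation against a hard-coded allowlist;
# ALLOWED_ORIGINS and cors_headers are illustrative names.
ALLOWED_ORIGINS = {
    "https://app.example.com",
    "https://admin.example.com",
}

def cors_headers(request_origin):
    """Reflect the Origin header back only if it is on a hard-coded allowlist."""
    if request_origin not in ALLOWED_ORIGINS:  # exact string comparison
        return {}
    return {
        "Access-Control-Allow-Origin": request_origin,
        "Access-Control-Allow-Credentials": "true",
        # Without Vary, a shared cache could serve one origin's response
        # to a different origin.
        "Vary": "Origin",
    }
```

Under the first proposal, that logic would collapse to a single static header value such as `Access-Control-Allow-Origin: https://app.example.com https://admin.example.com`.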

Trusting the 'null' origin is equivalent to trusting '*' except it's less obviously risky, and actually more dangerous since the allow-credentials exception for '*' doesn't apply to null. I think it may be helpful to apply the allow-credentials exception to 'null'.
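
For concreteness, a sketch of the pattern being warned about (handler name illustrative): any sandboxed iframe, data: URL, or local file arrives with Origin: null, so 'null' is effectively attacker-chosen.

```python
# Illustrative only: why trusting 'null' is dangerous. A sandboxed iframe
# (<iframe sandbox="allow-scripts">), a data: URL, or a local file all send
# "Origin: null", so an attacker's page can produce this origin at will.
def vulnerable_cors_headers(request_origin):
    if request_origin == "null":  # looks restrictive, is attacker-reachable
        return {
            "Access-Control-Allow-Origin": "null",
            # Unlike "*", browsers will honor credentials here.
            "Access-Control-Allow-Credentials": "true",
        }
    return {}
```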

Websites accessed over HTTPS can use CORS to grant credentialed access to HTTP origins, which partially nullifies their use of HTTPS. Perhaps browsers' mixed content protection should block such requests, or at least disable allow-credentials for HTTP->HTTPS requests.

I've written a longer blog post on this topic over at blog.portswigger.net/2016/10/exploiting-cors-misconfigurations-for.html and I'll be presenting at AppSec EU on Friday, so feel free to say hi if you're around.

Hopefully this is the right forum to present these ideas, and I'm not five years too late.

# Mike West (10 days ago)

Thanks, James!

On Tue, May 9, 2017 at 5:41 PM, James Kettle james.kettle@portswigger.net wrote:

We could reduce the number of sites forced to do dynamic generation by:

  • Enabling static trust of multiple origins by supporting a space-separated list of origins
  • Enabling static trust of all subdomains by supporting the use of partial wildcards like *.example.com

+Anne, who will have opinions.

Trusting the 'null' origin is equivalent to trusting '*' except it's less obviously risky, and actually more dangerous since the allow-credentials exception for '*' doesn't apply to null. I think it may be helpful to apply the allow-credentials exception to 'null'.

For clarity, you're suggesting that Access-Control-Allow-Origin: null should not be allowed if the request included credentials (in the same way that we block Access-Control-Allow-Origin: *)? I think I could get behind that, depending on usage in the wild.

Websites accessed over HTTPS can use CORS to grant credentialed access to HTTP origins, which partially nullifies their use of HTTPS. Perhaps browsers' mixed content protection should block such requests, or at least disable allow-credentials for HTTP->HTTPS requests.

Interesting. You're suggesting that https://example.com/ should not be able to send Access-Control-Allow-Origin: http://whatever.com? That sounds reasonable on the one hand, but I suspect that it's widely used on the other (all (I hope) Google API endpoints are HTTPS, for example, but not all of those APIs' clients will be). I'll add some metrics to Chrome to see if that suspicion is borne out.

I've written a longer blog post on this topic over at blog.portswigger.net/2016/10/exploiting-cors-misconfigurations-for.html and I'll be presenting at AppSec EU on Friday, so feel free to say hi if you're around.

Looking forward to catching the recording. :)

# Anne van Kesteren (10 days ago)

On Wed, May 10, 2017 at 12:13 PM, Mike West mkwst@google.com wrote:

On Tue, May 9, 2017 at 5:41 PM, James Kettle james.kettle@portswigger.net wrote:

  • Enabling static trust of multiple origins by supporting a space-separated list of origins

It should be comma-separated, if anything. Space has historical meanings and isn't really the way to combine multiple HTTP header values.

  • Enabling static trust of all subdomains by supporting the use of partial wildcards like *.example.com

That actually makes verification a lot harder. Currently we can just compare the byte sequences.

Trusting the 'null' origin is equivalent to trusting '*' except it's less obviously risky, and actually more dangerous since the allow-credentials exception for '*' doesn't apply to null. I think it may be helpful to apply the allow-credentials exception to 'null'.

For clarity, you're suggesting that Access-Control-Allow-Origin: null should not be allowed if the request included credentials (in the same way that we block Access-Control-Allow-Origin: *)? I think I could get behind that, depending on usage in the wild.

It's been an open issue for a while: fetch.spec.whatwg.org/#cors-check. Nobody ever got the data or made the change though.

Websites accessed over HTTPS can use CORS to grant credentialed access to HTTP origins, which partially nullifies their use of HTTPS. Perhaps browsers' mixed content protection should block such requests, or at least disable allow-credentials for HTTP->HTTPS requests.

Interesting. You're suggesting that https://example.com/ should not be able to send Access-Control-Allow-Origin: http://whatever.com? That sounds reasonable on the one hand, but I suspect that it's widely used on the other (all (I hope) Google API endpoints are HTTPS, for example, but not all of those APIs' clients will be). I'll add some metrics to Chrome to see if that suspicion is borne out.

Yeah, this would also make transitioning to HTTPS harder, since you could no longer just update your CDNs first and gradually shift your frontend. It all becomes tightly coupled.

Seems more reasonable as a new "I share with secure contexts only" feature.

# Mike West (10 days ago)

On Wed, May 10, 2017 at 12:21 PM, Anne van Kesteren annevk@annevk.nl wrote:

On Wed, May 10, 2017 at 12:13 PM, Mike West mkwst@google.com wrote:

On Tue, May 9, 2017 at 5:41 PM, James Kettle james.kettle@portswigger.net wrote:

  • Enabling static trust of multiple origins by supporting a space-separated list of origins

It should be comma-separated, if anything. Space has historical meanings and isn't really the way to combine multiple HTTP header values.

Makes sense to me.

  • Enabling static trust of all subdomains by supporting the use of partial wildcards like *.example.com

That actually makes verification a lot harder. Currently we can just compare the byte sequences.

I agree, but it's not clear to me that that would be fatal, since browsers that support CSP already have code to deal with this kind of wildcard syntax.

Trusting the 'null' origin is equivalent to trusting '*' except it's less obviously risky, and actually more dangerous since the allow-credentials exception for '*' doesn't apply to null. I think it may be helpful to apply the allow-credentials exception to 'null'.

For clarity, you're suggesting that Access-Control-Allow-Origin: null should not be allowed if the request included credentials (in the same way that we block Access-Control-Allow-Origin: *)? I think I could get behind that, depending on usage in the wild.

It's been an open issue for a while: fetch.spec.whatwg.org/#cors-check. Nobody ever got the data or made the change though.

Websites accessed over HTTPS can use CORS to grant credentialed access to HTTP origins, which partially nullifies their use of HTTPS. Perhaps browsers' mixed content protection should block such requests, or at least disable allow-credentials for HTTP->HTTPS requests.

Interesting. You're suggesting that https://example.com/ should not be able to send Access-Control-Allow-Origin: http://whatever.com? That sounds reasonable on the one hand, but I suspect that it's widely used on the other (all (I hope) Google API endpoints are HTTPS, for example, but not all of those APIs' clients will be). I'll add some metrics to Chrome to see if that suspicion is borne out.

Yeah, this would also make transitioning to HTTPS harder, since you could no longer just update your CDNs first and gradually shift your frontend. It all becomes tightly coupled.

Seems more reasonable as a new "I share with secure contexts only" feature.

Adding metrics for these two in codereview.chromium.org/2873973002. I share Anne's skepticism about the latter, but let's get numbers and then argue about them.

# Anne van Kesteren (10 days ago)

On Wed, May 10, 2017 at 12:57 PM, Mike West mkwst@google.com wrote:

I agree, but it's not clear to me that that would be fatal, since browsers that support CSP already have code to deal with this kind of wildcard syntax.

Dare I ask whether that is fully interoperable? Last I checked this was defined with some ABNF which didn't inspire confidence. Also, would this result in example matching EXAMPLE whereas it does not now?

# Mike West (10 days ago)

On Wed, May 10, 2017 at 1:01 PM, Anne van Kesteren annevk@annevk.nl wrote:

On Wed, May 10, 2017 at 12:57 PM, Mike West mkwst@google.com wrote:

I agree, but it's not clear to me that that would be fatal, since browsers that support CSP already have code to deal with this kind of wildcard syntax.

Dare I ask whether that is fully interoperable?

Yup. 100%, probably. Maybe even 101%, because user agents wouldn't ship things that didn't comply with the spec!

cough

Last I checked this was defined with some ABNF which didn't inspire confidence. Also, would this result in example matching EXAMPLE whereas it does not now?

I believe that the combination of the parsing and matching algorithms in the CSP spec is pretty solid (but, really, getting more eyes on the document would be better). But my point was less "Hey, let's reuse CSP!" and more "Wildcards are a problem that's totally possible to solve if we decide that we want to solve it."

# Devdatta Akhawe (10 days ago)

That allow-credentials works with null is actually pretty useful to us and lets us more easily adopt CSP/iframe sandbox. I am pretty strongly against adding a new limitation: first because backwards compatibility is important, and second because I think we should do more to encourage use of sandbox. IMO, in terms of impact, trusting null is the same bug as reflecting the Origin header in Access-Control-Allow-Origin, which is also very common on the web.

# Mike West (10 days ago)

On Wed, May 10, 2017 at 5:27 PM, Devdatta Akhawe dev.akhawe@gmail.com wrote:

Hi

That allow-credentials works with null is actually pretty useful to us and lets us more easily adopt CSP/iframe sandbox. I am pretty strongly against adding a new limitation: first because backwards compatibility is important, and second because I think we should do more to encourage use of sandbox. IMO, in terms of impact, trusting null is the same bug as reflecting the Origin header in Access-Control-Allow-Origin, which is also very common on the web.

You probably know this, but for clarity, the problem James is pointing out is that sandboxed frames all look the same. https://evil.com in a sandbox sends the same Origin header as https://yay.com in a sandbox. You might trust the latter but not the former, yet CORS gives you no mechanism for distinguishing between the two.

Since null is basically * for sandboxed frames, applying similar restrictions with regard to credentials seems like it might be a reasonable thing to do.

Is Dropbox making CORS requests from sandboxed frames that require credentials?

# Devdatta Akhawe (10 days ago)

Yes, we do rely on it right now. We rely on a form of CSRF tokens to protect the requests, so evil.com can't make the request, while any XSS on the page still can't affect the main origin.

My point is that the vulnerability that null allows is the same in impact as the websites that just blindly reflect an origin. None of the proposals for that are talking about breaking existing apps and I think we should follow the same principle here.
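
As a rough illustration of the defence Dev describes (the function and token plumbing here are assumed, not Dropbox's actual code):

```python
import hmac

def authorize_sandboxed_request(origin_header, supplied_token, expected_token):
    """Only honor null-origin requests that carry an unguessable token."""
    if origin_header != "null":
        return False
    # Constant-time comparison of the CSRF-style token, so an arbitrary
    # sandboxed page (which also sends Origin: null) can't forge requests.
    return hmac.compare_digest(supplied_token, expected_token)
```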

# Anne van Kesteren (10 days ago)

On Wed, May 10, 2017 at 5:42 PM, Devdatta Akhawe dev.akhawe@gmail.com wrote:

Yes, we do rely on it right now. We rely on a form of CSRF tokens to protect the requests, so evil.com can't make the request, while any XSS on the page still can't affect the main origin.

My point is that the vulnerability that null allows is the same in impact as the websites that just blindly reflect an origin. None of the proposals for that are talking about breaking existing apps and I think we should follow the same principle here.

Since breaking Dropbox doesn't really seem like an option, write a PR against Fetch to remove the issue marker? Not much point in having it there if it can't be implemented.

# Daniel Veditz (10 days ago)

On Tue, May 9, 2017 at 8:41 AM, James Kettle james.kettle@portswigger.net wrote:

  • Enabling static trust of multiple origins by supporting a space-separated list of origins

I would support syntax for this. Having to programmatically generate the correct header has always seemed error-prone and leads to blind echoing, which is less secure than just using "*".

  • Enabling static trust of all subdomains by supporting the use of partial wildcards like *.example.com

I have more trepidation here that people will leave themselves too wide open, but I cautiously support this as a practical necessity given the first. Would such sites allow credentials, or would they be forced to be credential-less like '*'?

Websites accessed over HTTPS can use CORS to grant credentialed access to HTTP origins, which partially nullifies their use of HTTPS. Perhaps browsers' mixed content protection should block such requests, or at least disable allow-credentials for HTTP->HTTPS requests.

I disagree. From the document and user point of view it's not at all mixed content. If the HTTPS server doesn't want to give up its data to HTTP origins it can quite simply not respond with the CORS headers that enable it. The existing mixed-content blocking behavior leans hard on third-party services to implement HTTPS, because with HTTPS they can serve anyone, while with HTTP they cannot serve to HTTPS documents.

- Dan Veditz

# James Kettle (8 days ago)

Thanks for all the feedback!

  • Enabling static trust of multiple origins by supporting a space-separated list of origins

I would support syntax for this. Having to programmatically generate the correct header has always seemed error-prone and leads to blind echoing, which is less secure than just using "*".

Great! I don't have any strong views on what the syntax should be (spaces vs commas), but I think this one suggestion is the clearest win as it strongly pushes people towards using a hard-coded whitelist.

  • Enabling static trust of all subdomains by supporting the use of partial wildcards like *.example.com

I have more trepidation here that people will leave themselves too wide open, but I cautiously support this as a practical necessity given the first. Would such sites allow credentials, or would they be forced to be credential-less like '*'?

I think this would need to support credentials for anyone to use it. I agree that trusting all subdomains isn't really a great idea, but it's a common use case, and if you enforced a rule like '* must be followed by .' you could help out the many sites making Zomato's mistake of trusting literally everything that ends in zomato.com, including notzomato.com.
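
A small sketch of the two checks being contrasted, with zomato.com standing in for any site that trusts its subdomains:

```python
# The unsafe suffix check is the "Zomato mistake"; the safe variant enforces
# the "'*' must be followed by '.'" boundary rule. Illustrative only.
def trusts_origin_unsafe(origin):
    return origin.endswith("zomato.com")  # also matches https://notzomato.com

def trusts_origin_safe(origin):
    scheme, _, host = origin.partition("://")
    return scheme == "https" and (
        host == "zomato.com" or host.endswith(".zomato.com")
    )
```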

Websites accessed over HTTPS can use CORS to grant credentialed access to HTTP origins, which partially nullifies their use of HTTPS. Perhaps browsers' mixed content protection should block such requests, or at least disable allow-credentials for HTTP->HTTPS requests.

I disagree. From the document and user point of view it's not at all mixed content.

Maybe mixed content was a poor choice of terminology. I think this suggestion might have been misunderstood slightly. I'm suggesting that an application that specifies ACAC: true and ACAO: <some HTTP origin> should have the ACAC flag ignored. I don't see how this will make upgrading sites to HTTPS harder, since as Anne said the standard approach is to upgrade CDNs first and the application afterward, and it's only applications that care about allowing credentials.

If the HTTPS server doesn't want to give up its data to HTTP origins it can quite simply not respond with the CORS headers that enable it.

Well obviously, but anyone who wants to trust all their subdomains has to do dynamic generation based on the Origin header, and virtually none of them bother to check the protocol being supplied, Google included. You could just as easily say "HTTPS sites that don't want to wreck their security shouldn't import scripts over HTTP" but browsers are happy to step in and block that.
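
The missing protocol check amounts to a one-line test before reflecting an origin; a hedged sketch, with the allowlist logic elided:

```python
from urllib.parse import urlsplit

def origin_may_receive_credentials(request_origin):
    """Refuse plaintext-HTTP origins before reflecting them into ACAO."""
    # An Origin header is scheme://host[:port]; a network attacker controls
    # every http:// page, so only https:// origins should get credentials.
    return urlsplit(request_origin).scheme == "https"
```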

At present, the browser UI on HTTPS pages that allow credentialed access from HTTP origins is grossly misleading. The browser indicates that the page is secured against network attackers when they can actually trivially gain access to everything the user can see.

# Daniel Veditz (8 days ago)

On Fri, May 12, 2017 at 10:20 AM, James Kettle james.kettle@portswigger.net wrote:

I think this would need to support credentials for anyone to use it. I agree that trusting all subdomains isn't really a great idea, but it's a common use case, and if you enforced a rule like '* must be followed by .' you could help out the many sites making Zomato's mistake of trusting literally everything that ends in zomato.com, including notzomato.com.

Wildcards in CSP directives have this requirement (apart from standalone "*"). Completely reasonable.

Maybe mixed content was a poor choice of terminology. I think this suggestion might have been misunderstood slightly. I'm suggesting that an application that specifies ACAC: true and ACAO: <some HTTP origin> should have the ACAC flag ignored. I don't see how this will make upgrading sites to HTTPS harder, since as Anne said the standard approach is to upgrade CDNs first and the application afterward, and it's only applications that care about allowing credentials.

Does Google have any telemetry on how often http->https XHR/fetch explicitly request credentials? Mozilla mixed-content telemetry ignores insecure documents, so we don't have any.

- Dan Veditz

# Evan J Johnson (8 days ago)

I wanted to bring up a problem that hasn't been touched on. (Hi James!) Having hunted down dozens of libraries that implement the CORS spec wrong, one problem I consistently see is people misunderstanding "*". I see, without fail, developers believing that "*" means to reflect all origin headers. This leads to problems when ACAC is set to true. Here are two examples:

I am re-scanning the Alexa 1M for CORS misconfigurations and you can scan them yourself: ejcx/badcors-massscan

Among the sites with it misconfigured are bitbucket.org, theblaze.com, adidas.com, uverse.com, appnexus.com, etc., etc. (granted, not all are exploitable). In my opinion a separate header for "ACAO: *" should be created to keep this complexity away from people who need to include credentials and share data. The way I see it, the CORS situation won't improve until people can wrap their heads around it.

Evan

# Evan J Johnson (8 days ago)

Sorry. I wanted to branch off the top comment. My mistake. Here is my post (tweaked and) re-sent.

Having hunted down dozens of libraries that implement the CORS spec wrong, one problem I consistently see is people are misunderstanding "*".

I see, without fail, developers believing that "*" means to reflect all origin headers. This leads to problems when ACAC is set to true. Here are two examples:

In my opinion a separate header for "ACAO: *" should be created to keep this complexity away from people who need to include credentials and share data.

I see current CORS problems as less of an implementation problem and more of a logic problem. People just don't understand "*" and explaining it is not easy.

Evan
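
To illustrate the confusion Evan describes, here are the two header combinations developers conflate (values shown schematically as Python dicts; illustrative only):

```python
# Browsers reject the first combination outright for credentialed requests,
# which is what pushes developers toward the second, far more dangerous "fix".
LITERAL_WILDCARD = {                     # blocked when credentials are sent
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Credentials": "true",
}

BLIND_REFLECTION = {                     # "works", but trusts every origin
    "Access-Control-Allow-Origin": "<echo of the request's Origin header>",
    "Access-Control-Allow-Credentials": "true",
}
```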

# Ricardo Iramar dos Santos (7 days ago)

Nice post! I've started working with the OWASP Secure Headers Project for the same reason. I saw a lot of websites defining the ACAO header dynamically, but most of them were protecting the resources with a CSRF token.

One thing I noticed while researching is that IE11 totally ignores the CORS specification for local files (riramar/IE11xCORSxSOP), unlike Chrome and Firefox. Microsoft said this is not an issue. The problem is that you can use XSS with the File API (msdn.microsoft.com/en-us/library/hh772331(v=vs.85).aspx) to trick the user into saving a local HTML file, opening it, making requests to any domain, and sending the content to a domain you control. Of course the user needs to click the Save, Open, and "Allow blocked content" buttons, but with good phishing this is not a big deal.

Another problem that I've noticed a few times concerns this point in the CORS specification (ref. www.w3.org/TR/cors/#security):

Because of this, resources for which simple requests have significance other than retrieval must protect themselves from Cross-Site Request Forgery (CSRF) by requiring the inclusion of an unguessable token in the explicitly provided content of the request.

CSRF is possible due to legacy cross-domain operations, such as POST forms (ref. en.wikipedia.org/wiki/Same-origin_policy#Corner_cases_and_exceptions). I'm not sure whether a CORS simple request is more "open" than these legacy cross-domain operations, but most developers/sysadmins don't know about this problem described in the CORS specification. SameSite (tools.ietf.org/html/draft-west-first-party-cookies-07) is a good solution for CSRF, but I haven't checked how it interacts with CORS here. I was trying to find an improvement for this protection: instead of using CSRF tokens, the server could identify that a request is a CORS simple request and, rather than processing it and sending the CORS headers back, simply respond with "not allowed".

Thanks! Ricardo Iramar
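
A rough sketch of that last idea, with assumed names and a deliberately conservative method list:

```python
SELF_ORIGIN = "https://api.example.com"  # illustrative

def should_reject_simple_request(method, headers):
    """Refuse state-changing cross-origin 'simple' requests up front."""
    origin = headers.get("Origin")
    # CORS simple requests carry an Origin header; legacy form posts from
    # older browsers may not, so tokens or SameSite remain useful as backup.
    if method in {"POST", "PUT", "DELETE"} and origin and origin != SELF_ORIGIN:
        return True  # e.g. respond 403 before doing any work
    return False
```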

