Security FAQ

Can you please un-hide old security bugs?

Our goal is to open security bugs to the public once the bug is fixed and the fix has been shipped to a majority of users. However, many vulnerabilities affect products besides Chromium, and we don’t want to put users of those products unnecessarily at risk by opening the bug before fixes for the other affected products have shipped. We must balance a commitment to openness with a commitment to avoiding unnecessary risk for users of widely-used open source libraries.

Therefore, we publicly open all security bugs within approximately 14 weeks of the fix landing in the Chromium repository. The exception to this is in the event of the bug reporter or some other responsible party explicitly requesting anonymity or protection against disclosing other particularly sensitive data included in the vulnerability report (e.g. username and password pairs).

Can I get advance notice about security bugs?

Vendors of products based on Chromium, distributors of Operating Systems that bundle Chromium, and individuals and organizations that significantly contribute to fixing security bugs can be added to a list for earlier access to these bugs. You can email us at security@chromium.org to request to join the list if you meet the above criteria. In particular, vendors of anti-malware, IDS/IPS, vulnerability risk assessment, and similar products or services do not meet this bar. 

Please note that the safest version of Chrome/Chromium is always the latest stable version — there is no good reason to wait to upgrade, so enterprise deployments should always track the latest stable release. When you do this, there is no need to further assess the risk of Chromium vulnerabilities: we strive to fix vulnerabilities quickly and release often.

Can I see these security bugs so that I can back-port the fixes to my downstream project?

Many developers of other projects use V8, Chromium, and sub-components of Chromium in their own projects. This is great! We are glad that Chromium and V8 suit your needs.
We want to open up fixed security bugs (as described in the previous answer), and will generally give downstream developers access sooner. However, please be aware that backporting security patches from recent versions to old versions does not always work. (There are several reasons for this: the patch may not apply to old versions; the fix was to add or remove a feature or change an API; the issue may seem minor until it's too late; and so on.) We believe the latest stable versions of Chromium and V8 are also the most secure. We also believe that tracking the latest stable upstream is usually less work, for greater benefit in the long run, than backporting. We strongly recommend that you track the latest stable branches, and we support only the latest stable branch.

What are the security and privacy guarantees of Incognito mode?

The canonical answer to this question is here: https://support.google.com/chrome/bin/answer.py?hl=en&answer=95464&p=cpn_incognito.

In particular, please note that Incognito is not a “do not track” mode, and it does not hide aspects of your identity from web sites. Chrome does offer a Do Not Track option, distinct from Incognito mode: chrome://settings/search#privacy.

When in Incognito mode, Chrome on desktop platforms (Windows, Linux, and Mac OS X), Chrome on Android, and Chrome OS do not store any new history, cookies, or other state in non-volatile storage. However, Incognito windows will be able to access some previously-stored state, such as browsing history.

It is not possible for Chrome to offer the full Incognito guarantee for iOS. Some data is sometimes stored on non-volatile storage, and while Chrome for iOS makes a best-effort attempt to delete as much history as it can when in Incognito mode, the guarantee is not as strong as it is on other platforms. On Chrome for iOS, we therefore call it “Incognito*” mode.

Why can’t I select Proceed Anyway on some HTTPS error screens?

A key guarantee of HTTPS is that Chrome can be relatively certain that it is connecting to the true web server and not an impostor. Some sites request an even higher degree of protection for their users (i.e. you): they assert to Chrome (via Strict Transport Security — HSTS — or by other means) that any server authentication error should be fatal, and that Chrome must close the connection. If you encounter such a fatal error, it is likely that your network is under attack, or that there is a network misconfiguration that is indistinguishable from an attack.

The best thing you can do in this situation is to raise the issue to your network provider (or corporate IT department).

Chrome shows non-recoverable HTTPS errors only in cases where the true server has previously asked for this treatment, and when it can be relatively certain that the current server is not the true server.
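
As a rough sketch of this behavior (the helper names below are illustrative assumptions, not Chrome's actual code), the decision comes down to whether the host previously opted in to strict treatment:

    // Sketch only: an HTTPS certificate error is non-overridable when the
    // host has previously opted in via HSTS or key pinning.
    interface TransportSecurityState {
      isStrictTransportSecurityHost(host: string): boolean;  // assumed helper
      hasPinsForHost(host: string): boolean;                  // assumed helper
    }

    function isCertErrorOverridable(host: string, state: TransportSecurityState): boolean {
      // If the true server asked for strict treatment earlier, the error is fatal
      // and no "Proceed Anyway" link is offered.
      return !(state.isStrictTransportSecurityHost(host) || state.hasPinsForHost(host));
    }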

Why aren't physically-local attacks in Chrome's threat model?

People sometimes report that they can compromise Chrome by installing a malicious DLL on a computer in a place where Chrome will find it and load it, or by hooking APIs (see https://code.google.com/p/chromium/issues/detail?id=130284 for an example).
We consider these attacks outside Chrome's threat model, because there is no way for Chrome (or any application) to defend against a malicious user who has managed to log into your computer as you, or who can run software with the privileges of your operating system user account. Such an attacker can modify executables and DLLs, change environment variables like PATH, change configuration files, read any data your user account owns, email it to themselves, and so on. Such an attacker has total control over your computer, and nothing Chrome can do would provide a serious guarantee of defense. This problem is not special to Chrome: all applications must trust the physically-local user.
There are a few things you can do to mitigate risks from people who have physical control over your computer, in certain circumstances.
  • To stop people from reading your data in cases of device theft or loss, use full disk encryption (FDE). FDE is a standard feature of most operating systems, including Windows Vista and later, Mac OS X Lion and later, and some distributions of Linux. (Some older versions of Mac OS X had partial disk encryption: they could encrypt the user’s home folder, which contains the bulk of a user’s sensitive data.) Some FDE systems allow you to use multiple sources of key material, such as the combination of both a password and a key file on a USB token. When available, you should use multiple sources of key material to achieve the strongest defense. Chrome OS encrypts users’ home directories.
  • If you share your computer with other people, take advantage of your operating system’s ability to manage multiple login accounts, and use a distinct account for each person. For guests, Chrome OS has a built-in Guest account for this purpose.
  • Take advantage of your operating system’s screen lock feature.
  • You can reduce the amount of information (including credentials like cookies and passwords) that Chrome will store locally by using Chrome's Content Settings (chrome://settings/content) and turning off the form auto-fill and password storage features (chrome://settings/search#password).
There is almost nothing you can do to mitigate risks when using a public computer.
  • Assume everything you do on a public computer will become, well, public. You have no control over the operating system or other software on the machine, and there is no reason to trust the integrity of it.
  • If you must use such a computer, consider using an incognito mode window, to avoid persisting credentials. This, however, provides no protection when the system is already compromised as above.

What about unmasking of passwords with the developer tools?

People sometimes report password disclosure using the Inspect Element feature (see https://code.google.com/p/chromium/issues/detail?id=126398 for an example), thinking that "If I can see the password, it must be a bug." However, this is just one of the physically-local attacks described in the previous section. So, all of those points apply here as well.

To better explain, the password is masked as part of the normal flow only to prevent disclosure via "shoulder-surfing" (i.e. the passive viewing of your screen by nearby persons), and not because it is a secret unknown to the browser. The browser knows the password at many layers, including JavaScript, developer tools, process memory, and so on. When you are physically local to the computer, and only when you are physically local to the computer, there are, and always will be, tools for extracting the password from any of these places.
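
For example, anyone sitting at the keyboard can read a masked field directly from the DevTools console; this snippet (purely illustrative, TypeScript/DOM) is no more powerful than Inspect Element:

    // The browser necessarily knows the password, so local access is enough to read it.
    const field = document.querySelector<HTMLInputElement>('input[type="password"]');
    console.log(field ? field.value : 'no password field on this page');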

Again, see the section on physically-local attacks for the proper ways to mitigate local risks.

Why isn't passive browser fingerprinting (including passive cookies) in Chrome's threat model?

As discussed in Chromium bug #49075, we currently do not attempt to defeat "passive fingerprinting" or "evercookies" or ETag cookies, because defeating such fingerprinting is likely not practical without fundamental changes to how the Web works. One needs roughly 33 bits of non-correlated, distinguishing information to have a good chance of telling apart most user agents on the planet (see Arvind Narayanan's site and Peter Eckersley's discussion of the information theory behind Panopticlick).
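
The arithmetic behind the 33-bit figure is simple: to single out one person among roughly 8 billion, you need about log2(8 × 10^9) ≈ 33 bits of non-correlated information. A quick check:

    // 2^33 ≈ 8.6 billion, slightly more than the world's population, so roughly
    // 33 non-correlated bits are enough to distinguish one user agent from the rest.
    const worldPopulation = 8.0e9;                  // approximate
    const bitsNeeded = Math.log2(worldPopulation);  // ≈ 32.9
    console.log(bitsNeeded.toFixed(1), 'bits');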

Although Chrome developers could try to reduce the fingerprintability of the browser by taking away (e.g.) JavaScript APIs, doing so would not achieve the security goal for a few reasons: (a) we could not likely get the distinguishability below 33 bits; (b) reducing fingerprintability requires breaking many (or even most) useful web features; and (c) so few people would tolerate the breakage that it would likely be easier to distinguish people who use the fingerprint-defense configuration. (See "Anonymity Loves Company: Usability and the Network Effect" by Dingledine and Mathewson for more information.)

There is a pretty good analysis of in-browser fingerprinting vectors on this wiki page. Browser vectors aside, it's likely that the browser could be accurately fingerprinted entirely passively, without access to JavaScript or other web features or APIs, by its network traffic profile alone. (See e.g. Silence on the Wire by Michal Zalewski generally.)

Since we don't believe it's feasible to provide some mode of Chrome that can truly prevent passive fingerprinting, we will mark all related bugs and feature requests as WontFix.

How does key pinning interact with local proxies and filters?

To enable certificate chain validation, Chrome has access to two stores of trust anchors (certificates that are empowered as issuers). One trust anchor store is the system or public trust anchor store, and the other is the local or private trust anchor store. The public store is provided as part of the operating system, and is intended to authenticate public internet servers. The private store contains certificates installed by the user or the administrator of the client machine. Private intranet servers should authenticate themselves with certificates issued by a private trust anchor.


Chrome’s key pinning feature is a strong form of web site authentication that requires that a web server’s certificate chain not only be valid and chain to a known-good trust anchor, but also that at least one of the public keys in the certificate chain is known to be valid for the particular site the user is visiting. This is a good defense against the risk that any trust anchor can authenticate any web site, even if not intended by the site owner: if an otherwise-valid chain does not include a known pinned key (“pin”), Chrome will reject it because it was not issued in accordance with the site operator’s expectations.


Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. “Data loss prevention” appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning.
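
A minimal sketch of this policy (illustrative names only, not Chrome's real validation code):

    // Pins are enforced only when the chain ends at a publicly trusted anchor;
    // chains that terminate at a locally installed (private) anchor bypass pinning.
    function validatePins(spkiHashesInChain: Set<string>,
                          pinnedHashes: Set<string>,
                          chainsToPrivateAnchor: boolean): boolean {
      if (chainsToPrivateAnchor) {
        return true;  // local proxies/filters with their own roots are allowed through
      }
      // Otherwise, at least one public key in the chain must match a known pin.
      for (const hash of spkiHashesInChain) {
        if (pinnedHashes.has(hash)) {
          return true;
        }
      }
      return false;
    }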


We deem this acceptable because the proxy or MITM can only be effective if the client machine has already been configured to trust the proxy’s issuing certificate — that is, the client is already under the control of the person who controls the proxy (e.g. the enterprise’s IT administrator). If the client does not trust the private trust anchor, the proxy’s attempt to mediate the connection will fail as it should.

Can I use EMET to help protect Chrome against attack on Microsoft Windows?

There are known compatibility problems between Microsoft's EMET anti-exploit toolkit and some versions of Chrome. These can prevent Chrome from running in some configurations. Moreover, the Chrome security team does not recommend the use of EMET with Chrome because its most important security benefits are redundant with or superseded by built-in attack mitigations within the browser. For users, the very marginal security benefit is not usually a good trade-off for the compatibility issues and performance degradation the toolkit can cause.

Why does the password manager ignore autocomplete='off' for password fields?

Ignoring autocomplete='off' for password fields allows the password manager to give more power to users to manage their credentials on websites. It is the security team's view that this is very important for user security because it allows users to have unique and more complex passwords for websites. As it was originally implemented, autocomplete='off' for password fields took control away from the user and gave control to the web site developer, which was also a violation of the priority of constituencies. For a longer discussion on this, see the mailing list announcement.
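
As an illustrative sketch (not the actual implementation), a password manager following this policy keys off the field type rather than the site's autocomplete attribute:

    // The autocomplete attribute is deliberately not consulted for password fields.
    function shouldOfferToSave(field: HTMLInputElement): boolean {
      return field.type === 'password' && field.value.length > 0;
    }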

Why are some web platform features only available in HTTPS page-loads?

The full answer is here: we Prefer Secure Origins For Powerful New Features. In short, many web platform features give web origins access to sensitive new sources of information, or significant power over a user's experience with their computer, phone, watch, etc. We would therefore like to have some basis to believe the origin meets a minimum bar for security, that the sensitive information is transported over the Internet in an authenticated and confidential way, and that users can make meaningful choices to trust or not trust a web origin.

Note that the reason we require secure origins for WebCrypto is slightly different: An application that uses WebCrypto is almost certainly using it to provide some kind of security guarantee (e.g. encrypted instant messages or email). However, unless the JavaScript was itself transported to the client securely, it cannot actually provide any guarantee. (After all, a MITM attacker could have modified the code, if it was not transported securely.)

Which origins are "secure"?

Secure origins are those that match at least one of the following (scheme, host, port) patterns:
  • (https, *, *)
  • (wss, *, *)
  • (*, localhost, *)
  • (*, 127/8, *)
  • (*, ::1/128, *)
  • (file, *, —)
  • (chrome-extension, *, —)
That is, secure origins are those that load resources either from the local machine (necessarily trusted) or over the network from a cryptographically-authenticated server.
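
A minimal sketch of that table (illustrative only; real origin parsing is more involved, e.g. for IPv6 literals and port handling):

    // Returns true for origins considered "secure" per the patterns above.
    function isSecureOrigin(scheme: string, host: string): boolean {
      if (scheme === 'https' || scheme === 'wss') return true;             // authenticated network transport
      if (scheme === 'file' || scheme === 'chrome-extension') return true; // local machine
      if (host === 'localhost' || host === '::1') return true;
      return /^127\.\d{1,3}\.\d{1,3}\.\d{1,3}$/.test(host);                // 127/8 loopback
    }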

What's the story with certificate revocation?

Chrome's primary mechanism for checking the revocation status of HTTPS certificates is CRLsets.

Chrome also supports Online Certificate Status Protocol (OCSP). However, the effectiveness of OCSP is essentially zero unless the client fails hard (refuses to connect) if it cannot get a live, valid OCSP response. No browser has OCSP set to hard-fail by default, for good reasons explained by Adam Langley (see https://www.imperialviolet.org/2014/04/29/revocationagain.html and https://www.imperialviolet.org/2014/04/19/revchecking.html).

Stapled OCSP with the Must Staple option (hard-fail if a valid OCSP response is not stapled to the certificate) is a much better solution to the revocation problem than non-stapled OCSP. CAs and browsers are working toward that solution (see the Internet-Draft: http://tools.ietf.org/html/draft-hallambaker-tlssecuritypolicy-03).
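
A hedged sketch of the difference (the names below are illustrative, not any browser's real API): with Must Staple the check fails hard when a valid stapled response is missing, whereas plain OCSP is soft-fail.

    function checkRevocation(certRequiresStapling: boolean,
                             stapledResponseIsValid: boolean): 'ok' | 'fail' {
      if (certRequiresStapling) {
        // Hard fail: refuse the connection without a valid stapled OCSP response.
        return stapledResponseIsValid ? 'ok' : 'fail';
      }
      // Soft fail (the common default): a missing or unreachable responder is
      // ignored, which is why non-stapled OCSP adds little practical security.
      return 'ok';
    }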

Additionally, non-stapled OCSP poses a privacy problem: in order to check the status of a certificate, the client must query an OCSP responder for the status of the certificate, thus exposing a user's HTTPS browsing history to the responder (a third party).

Chrome performs online checking for Extended Validation certificates if it does not already have a non-expired CRLSet entry covering the domain. If Chrome does not get a response, it simply downgrades the security indicator to Domain Validated.

See also this bug for more discussion of the user-facing UX: https://code.google.com/p/chromium/issues/detail?id=361820.