You are currently viewing a snapshot taken on April 21, 2008. Most of this content is highly out of date (some pages haven't been updated since the project began in 1998) and exists for historical purposes only. If there are any pages on this archive site that you think should be added back, please file a bug.

Component Security for Mozilla

Mitch Stoltz

Hackers point out that it is vendors, not they, who are responsible for the gaping holes that permeate so many products. With companies releasing software as fast as possible, proper security often gets lost in the rush toward store shelves. "As complexity increases, the opportunity for vulnerability increases," says Steven Foote, a senior vice president at the Hurwitz Group, which analyzes strategic business applications. -- U.S. News, "Can hackers be stopped?"

Mozilla will make increasing use of Internet technologies to implement the browser itself. This has many benefits for modularity, cross-platform development, and encouraging development by a wider range of people. However, it also makes the process of ensuring browser security more challenging because the lines distinguishing the trusted browser from the untrusted content it displays have become blurred.

Security Model

Mozilla should support the existing security model for JavaScript in web content (see JavaScript Security in Communicator 4.x), with the possible exception of signed scripts. Any new APIs accessible from web content using Java or JavaScript should be reviewed for security.

Unlike Communicator 4.x (or Mozilla classic), Mozilla makes heavy use of web-style programming to build the browser itself. This is accomplished by making powerful actions available to JavaScript, and means that the security model for Mozilla must grow to support two kinds of code: untrusted web content and trusted browser implementation code.

Eventually we will need a full capabilities system similar to the one Java has reached in Java 2. However, given the need to ship quickly, we should implement a simpler binary trust model: all code used to implement the browser is given full privileges, while the privileges of any code from off the net are limited as they were in 4.x.
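The binary trust model can be sketched as follows. This is an illustrative sketch only; `makePrincipal`, `canAccess`, and the capability names are hypothetical, not Mozilla APIs.

```javascript
// Binary trust: code is either fully trusted (browser implementation)
// or restricted (web content). Hypothetical helper names throughout.

function makePrincipal(origin) {
  // Code shipped with the browser (chrome:, resource:) is fully trusted;
  // anything loaded off the net is not.
  const trusted = origin.startsWith("chrome://") ||
                  origin.startsWith("resource://");
  return { origin, trusted };
}

function canAccess(principal, capability) {
  if (principal.trusted) return true;            // browser code: everything
  const webSafe = ["dom-read", "dom-write"];     // small reviewed allowlist
  return webSafe.includes(capability);
}
```

Under this scheme a chrome script passes every check, while a script loaded from an http: URL is confined to the reviewed allowlist.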

I'm proposing the following limitations to the capabilities of web-based code:

  • No web-based XUL
  • No direct access to RDF
  • Chrome only runs from the local filesystem (i.e., no downloadable chrome, only installable chrome)
  • Limited access to XPConnect components--most components will not be accessible from web content, and those that are accessible must undergo security review
  • No access of web content to the surrounding chrome

These limitations serve to make the system simpler and more secure.


Distinguishing types of code

There are two types of code in our security model: web content and browser implementation code. How do we distinguish the two? The existing JavaScript codebase supports associating a principal with every script that is evaluated. During execution, a stack of principals corresponding to the stack frames of the executing JavaScript can be retrieved. Thus, at any point, security-critical code can check whether it has been called by script, and if so, whether that script was privileged.
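The stack walk described above can be sketched like this, assuming each script frame carries a principal with a `trusted` flag (illustrative representation; the real engine attaches principals to its internal script objects):

```javascript
// A call chain is privileged only if every script frame on the stack is
// trusted: trusted code invoked from web content must not end up granting
// privileges to its untrusted caller. An empty stack means no script is
// executing at all (native caller), which we treat as privileged.

function isCallChainPrivileged(principalStack) {
  return principalStack.every(p => p.trusted);
}
```

A single untrusted frame anywhere on the stack taints the whole chain.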

Component Security Analysis

DOM

The DOM should implement the security model from 4.x (at least the unsigned part). Historically the DOM has been the area with the most public exploits, so security implementation here will need careful review.
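The core of the 4.x DOM model is a same-origin check: a script may touch another document only when both were loaded from the same protocol and host. A minimal sketch (port handling, document.domain, and signed scripts omitted):

```javascript
// Extract the (scheme, host) pair from a URL; null if it doesn't parse.
function origin(url) {
  const m = /^([a-z][a-z0-9+.-]*):\/\/([^/:?#]+)/i.exec(url);
  return m ? { scheme: m[1].toLowerCase(), host: m[2].toLowerCase() } : null;
}

// Two documents share an origin when both scheme and host match.
function sameOrigin(a, b) {
  const oa = origin(a), ob = origin(b);
  return !!(oa && ob) && oa.scheme === ob.scheme && oa.host === ob.host;
}
```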

There are several proposals for additions to the 4.x DOM security model. Researchers from Bell Labs have proposed several new features, most notably domain-specific policies. There's a similar proposal described and elaborated in Bugzilla bug 858.

XUL

Since XUL is used to implement chrome, it needs access to highly privileged services. All code in chrome should be trusted, which means that installing code into chrome is a privileged action. Note that skins don't contain any code and should be safe to install without any heavy safeguards.

XUL code can interact with web content. However, we must ensure that web content cannot interact with privileged XUL code. The web content must live in a sandbox that cannot be breached through prototype chains or by the JavaScript "caller" property.
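One way to keep content from climbing back into chrome is a one-way wrapper: the object handed to web content has a null prototype (so there is no chain to walk) and carries only explicitly allowlisted properties. This is a hedged sketch; `exposeToContent` is an illustrative name, not a Mozilla API.

```javascript
// Hand content a frozen, prototype-less facade of a privileged object.
function exposeToContent(privileged, allowedProps) {
  const safe = Object.create(null);              // no prototype chain to climb
  for (const name of allowedProps) {
    const v = privileged[name];
    // Wrap functions so the privileged function object itself (and any
    // state reachable through it, e.g. via "caller") is never exposed.
    safe[name] = typeof v === "function"
      ? (...args) => v.apply(privileged, args)
      : v;
  }
  return Object.freeze(safe);
}
```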

RDF

Through the sidebar, RDF content can be retrieved directly from untrusted servers and aggregated with other RDF content. We'll need a filter for RDF that can remove potentially dangerous pieces like JavaScript event handlers and link cancellation.

Many RDF data sources reflect security assets. Most obvious is the filesystem, but others like the chrome registry have security implications as well. We should prohibit direct access to RDF from untrusted code.
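The filter mentioned above might look something like this over a toy node tree of the form `{ tag, attrs, children }` (a sketch under assumed data shapes, not the actual RDF API): strip `on*` event-handler attributes and `javascript:` URLs before untrusted content is aggregated.

```javascript
// Recursively copy a node tree, dropping script-bearing attributes.
function sanitizeNode(node) {
  const attrs = {};
  for (const [name, value] of Object.entries(node.attrs || {})) {
    if (name.toLowerCase().startsWith("on")) continue;     // onclick, onload, ...
    if (/^\s*javascript:/i.test(String(value))) continue;  // script URLs
    attrs[name] = value;
  }
  return {
    tag: node.tag,
    attrs,
    children: (node.children || []).map(sanitizeNode),
  };
}
```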

XPConnect

XPConnect allows JavaScript (and soon Java) to get access to native XPCOM components that can perform privileged actions. John Bandhauer has implemented support for restricting which components are visible through XPConnect. Scripts running from web content should be limited to a small set of components, each of which is reviewed for security.
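Component gating in this spirit can be sketched as follows; the registry shape, helper name, and contract ID are placeholders, not the real XPConnect interface.

```javascript
// Contract IDs that reviewed, web-safe components expose to content.
const WEB_SAFE_CONTRACTS = new Set([
  "@mozilla.org/example/reviewed-component;1",   // placeholder entry
]);

// Chrome callers see every component; web content only the allowlist.
function getComponent(contractId, callerIsChrome, registry) {
  if (!callerIsChrome && !WEB_SAFE_CONTRACTS.has(contractId)) {
    throw new Error("Access denied: " + contractId);
  }
  return registry[contractId];
}
```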

Netlib and Necko

Currently protocols like "chrome:" and "resource:" are implemented by pluggable protocol handlers. Any untrusted code will need to have limits on which protocols it can access. Historically, this has been a source of many exploits through protocols like "about:" and "javascript:".
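The restriction could take roughly this shape (a hypothetical helper, not the actual Necko API): untrusted callers may only load ordinary network schemes, never internal ones like chrome:, resource:, or about:.

```javascript
// Schemes that untrusted web content is allowed to load from.
const WEB_LOADABLE = new Set(["http", "https", "ftp"]);

function mayLoad(url, callerTrusted) {
  if (callerTrusted) return true;              // chrome may load anything
  const scheme = url.split(":", 1)[0].toLowerCase();
  return WEB_LOADABLE.has(scheme);
}
```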

APIs for Review


Not sure what's here.... joki will help.

Chrome registry API

This is an API that allows web pages to request adding XUL files to chrome.