
Mozilla Security Review and Best Practices Guide

Draft 3 - May 17, 2002

Some Key Points

  • Good security cannot be tacked onto a product after the fact; it must be designed in.
  • A bug anywhere in the product can create a security vulnerability. Security bugs don't necessarily occur in PSM or ScriptSecurityManager.
  • Writing secure code is an integral and necessary part of writing correct code. You wouldn't check in, or give reviewer's approval to, code that contains memory leaks, references uninitialized variables, et cetera. Likewise, code that contains buffer overruns, cross-site scripting problems, or any of the mistakes described below is unacceptable for check-in.
  • In short, security is everyone's responsibility.

The Problem

Goals: What Are We Protecting Against?

"Security" is a pretty vague term. So is "privacy." Here, more specifically, are the things we need to protect against as we write code for Mozilla:

  • Running Arbitrary Code / General File-System Access - the "holy grail" of security compromises. If an attacker can cause some code of the attacker's choosing to be written into a user's memory and executed, then the attacker owns the user's system and can read, modify, or delete any data at will, access other machines that the user has access to, impersonate the user online, and use the compromised machine as a launching point for further attacks. This is the most dangerous class of attack against our software. Note that running arbitrary code and having general read-write access to the user's file system are equivalent; one leads easily to the other.
  • Arbitrary Code Revisited - The attack described above doesn't require the presence of a buffer overflow or other subtle vulnerability. It could be as simple as convincing or tricking the user into explicitly downloading and running a Trojan horse program from the attacker's FTP site. We can't protect against all forms of possible user stupidity, but we can make sure that users understand when they have downloaded potentially dangerous code and who that code came from. We should be sure users know that a download has occurred, and give them enough information to decide whether to trust a given download.
  • Stealing Privileges - an attacker makes use of some privilege or credential of the user's - often without obtaining the credential itself. For example, suppose you're logged in to your banking site in one browser window, and happen to visit an attacker's page in a second window. If the attacker can submit a form in the bank window requesting an account transfer, then he has used your credentials at the bank to steal from you, without actually getting your password.
  • Limited Local File Access / Cache Population - Sometimes, an attacker can plant malicious code (primarily JavaScript) on a user's system without having general access to the file system. Sometimes, Mozilla does the attacker the favor of saving his scripts to disk in particular locations - the cookie file, the browser cache, and so on.
  • Information Leakage - Mozilla should not reveal the user's email address, real name, or any other personally identifiable information without the user's express consent.
  • Spoofing - If the attacker can create a page that looks exactly like your bank's login page, and convince you to enter your password there, then he has stolen your bank account. All the cryptography in the world doesn't prevent this attack - only careful UI design will make it more difficult.
  • Denial of Service - Crashing the browser, opening windows in an infinite loop, and sending large numbers of emails are all forms of denial-of-service attacks. While annoying, we don't consider these dangerous exploits as long as they don't corrupt data, cause other permanent damage, or leave the system in a compromised state. The best fix for these is simply not to visit the offending site again.

What Are We Not Protecting Against?

The most secure computer is one that's turned off, unplugged, and buried in concrete. Because security and functionality are always in competition, we need to draw a line between what we protect against and what we don't. In particular:

  • Downloaded Native Code - We can and should protect users against anything they might encounter on a Web page, or in a mail message. We cannot protect them against executables they have downloaded and run. Native executables have no access control beyond what the operating system enforces - in general, a binary executable can do anything the user can do. Once an attacker has planted some native code on a user's machine, through a security hole or simply by asking a gullible user to download and run MyTrojan.exe, the attacker has won and we have lost. That's why we work so hard to prevent native code from being downloaded and run, unless the user has made an explicit and informed decision to do so.
    • Installer Files: Files delivered via XPInstall can do anything the user can do - we have no way of limiting this. The user must trust the originator of the install files before consenting to the install, because after that, all bets are off.
    • Plugins, likewise, once installed, can do just about anything.
  • Physical Access - If an attacker (or a nosy family member) is sitting at the keyboard of your machine, he or she can access anything and everything on your machine. Some protection can be gained by using an OS that requires logins, or by using disk- or file-level encryption software. None of these are responsibilities of the Mozilla project.

In short, our main goal is to prevent attacks encountered in the course of browsing and mail reading. There are other sources of attack, but they are not our major concern.


Mozilla's open source code gives us a distinct advantage in finding security problems - we place no limits on the number of people who can be looking through the code for problems. At the same time, we have some specific challenges to overcome:

  • Mozilla is embedded in many very different types of applications, and is distributed to many types of users. This means we can't rely on the presence of a knowledgeable IT staff to configure the browser and keep it up to date for every user. It also means that distributing patches to all users of Mozilla-based products may be slow and cumbersome.
  • Mozilla is a very complex application developed in a distributed fashion. Complexity is the enemy of security. It is in the complex interactions between modules, potentially written by different groups with little coordination, that security problems often appear. That's why each module must be designed to work correctly, even when given bad data by another module.
  • Mozilla's user interface is written in the same languages used to create Web content (XML, HTML, and JavaScript). This makes it easy to confuse untrusted Web content with trusted UI code.
  • Many of our potential users are inexperienced computer users, who do not understand the risks involved in using interactive Web content. This means we must rely on the user's judgement as little as possible. As Edward Felten says, given the choice between dancing pigs and security, users will choose dancing pigs every time.

The Solutions

What follows are guidelines for Mozilla coders, reviewers, and UI creators on how to design secure features, anticipate problems, and be on the lookout for common pitfalls.

The Golden Rule of Security: Don't Trust Any Source of Input

It sounds paranoid, but it works. Think about where your input comes from: data from the network, files on disk, user input, environment variables, the arguments to your function. If your input could possibly have come from any source outside your control, verify it to ensure it's in the form you expect. Assume that all remote servers, configuration files, and command-line arguments were created by evil hackers intent on mischief. Make sure that no combination of inputs can cause your code to behave unexpectedly. Looking at your code this way will improve its reliability too - security and reliability are close cousins. Most of the points below are really instances of this basic rule: never assume that your input is safe without making sure.
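As an illustration (the record format, buffer sizes, and function name here are invented for the example), a parser that refuses to trust a length field supplied by the network might look like this:

```c
#include <stddef.h>
#include <string.h>

#define MAX_PAYLOAD 512

/* Hypothetical parser for a length-prefixed record read from the network.
 * Returns 0 on success, -1 if the input fails validation. */
static int parseRecord(const unsigned char* data, size_t dataLen,
                       unsigned char* out, size_t outLen)
{
    if (data == NULL || dataLen < 2)
        return -1;                      /* too short to contain a header */

    /* The first two bytes claim the payload length - but the claim comes
     * from the network, so verify it against what was actually received. */
    size_t claimed = ((size_t)data[0] << 8) | data[1];
    if (claimed > dataLen - 2 || claimed > MAX_PAYLOAD || claimed > outLen)
        return -1;                      /* lying header, or bigger than our buffer */

    memcpy(out, data + 2, claimed);
    return 0;
}
```

The key point is that every size used for copying is checked against both the data we actually have and the buffer we are copying into, never taken on faith from the input itself.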

Chrome JS

Writing chrome is very much like writing web pages, and raises similar security concerns. The stakes are higher for chrome, however, because chrome JS is considered part of the native browser code, and has no restrictions on what it can do.

The most important thing to remember is to treat all user input, and (more importantly) data from the Web, including URLs, as untrusted and potentially malicious. Wherever data from the Web is used in chrome, it must be filtered for potentially dangerous content.

  • Remember that anywhere we render HTML, XML, or XUL, or load a URL, JavaScript may be run. HTML and XML content can contain <SCRIPT> tags, and URLs in Mozilla can use the javascript: and data: schemes to cause scripts to run. On a web page displayed in the main browser window, this is harmless, since everything in the browser window's content area is treated as untrusted and run in a protective 'sandbox.' Not so for dialogs or other special windows. Does your dialog contain any data derived from the current Web page or mail message? Does it display links taken from the page? If so, you must either filter out JavaScript or ensure that scripts will run in a safe environment.
  • The location or origin of a file determines its trustworthiness, not the language or format it's written in. Everything in the browser's chrome directory is treated like part of the application. Any JavaScript contained in the chrome directory has full access to the native browser APIs through XPConnect and can do anything that compiled C++ code can do, including reading and writing arbitrary files. Script files on the user's local drive outside of the chrome directory do not, by default, have full access to the browser APIs - they are treated just like scripts from the Web. Finally, Web scripts are by default untrusted and are very limited in what they can do. This is true whether they are contained in HTML, XML, or XUL files. Simply being in a XUL file should not give a script any special privileges, since privileges are based on where the script comes from, not what type of file it comes in.
  • Avoid using eval() whenever possible. Also avoid passing a string as the first argument to setTimeout() and setInterval(), as this causes an eval(). Eval, besides being slow, provides a good avenue for inserting and running malicious code. There is usually an alternative. If you must use eval(), be sure to verify that the string being passed to it contains an expected value.
  • In chrome JavaScript, be careful of calls into _content, because everything in _content is untrusted. Be careful what parameters you pass, and be wary of the results. Don't assume things in _content have the expected type.


    • Use ToString(obj) instead of obj.toString() if obj comes from content. The webpage may have redefined the toString function of obj.
    • Don't assume that an object from content is a string, even if you expect it to be.
    • If you call a function in content, remember that the web page can read the arguments you pass. This is rarely a problem, but be careful not to give any information to the page that you wouldn't want sent back over the net (the user's email address, for example).
  • If you write a string to a window (writing into either chrome or content using document.write, appending to innerHTML, or any other method), and the value of the string could have come from a web page, HTML-escape the string before writing it.
  • Any time you write a URL to a page or window as the target of a link, the location of an image or embed, etc., and the URL could have come from a web page, look at the URL and don't write it out if its protocol is javascript: or data:. Loading those types of URLs can cause a script to run. If you want to allow javascript: and data: URLs, you must ensure that any script that runs does not run with all the privileges of chrome scripts.

With javascript: URLs, be aware of both the return value (which is generally rendered as HTML/XML content) and of the side effects of the script execution.
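As an illustrative sketch (these helpers are not part of Mozilla; the names are invented for the example), chrome code could escape untrusted strings and reject script-running URL schemes like this:

```javascript
// Escape the HTML metacharacters in a string before writing it
// into chrome or content, so embedded <SCRIPT> tags become inert text.
function htmlEscape(s) {
  return String(s).replace(/&/g, "&amp;")
                  .replace(/</g, "&lt;")
                  .replace(/>/g, "&gt;")
                  .replace(/"/g, "&quot;")
                  .replace(/'/g, "&#39;");
}

// Refuse to write out a URL whose scheme can cause script to run.
// The blocked-scheme list here is illustrative, not exhaustive.
function isSafeLinkUrl(url) {
  var scheme = String(url).trim().toLowerCase().split(":")[0];
  return scheme !== "javascript" && scheme !== "data";
}
```

A chrome dialog would then write htmlEscape(title) rather than the raw title, and would refuse to set a link's target unless isSafeLinkUrl(url) returns true.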


C and C++

C and C++, while versatile, make security mistakes very easy. Certain functions in particular are quite risky and should be avoided or used very carefully. This section describes common security pitfalls to avoid when writing C and C++ code. Remember, mistakes like these anywhere in Mozilla could create a security vulnerability.

Buffer Overruns

A buffer is any contiguous block of memory. A buffer overrun means writing more data into a buffer than it can hold - the additional data overwrites other values in memory adjacent to the buffer. Depending on what those values are, overwriting them can allow an attacker to change the way the program operates, or even to execute arbitrary commands of the attacker's choosing. C and C++ provide no built-in protection against this. (JavaScript does, by dynamically growing buffers as needed to accommodate additional data.) Here's a bare-bones example:


    void dangerousFunction(char* input)
    {
        char buf[100];

        PL_strcpy(buf, input);

        // Additional tasks...
    }

This is a very dangerous situation. In general, wherever you see PL_strcpy (or the standard C library function strcpy) with no size checking, you should be very worried. In this example, buf is stored on the program's stack. Also on the stack are any other local variables, arguments, and the return address to which the program execution will jump when this function is finished. If an attacker can pass more than 100 characters as the input to this function, the PL_strcpy function will fill buf and then overwrite other values on the stack, including the return address. If an attacker can figure out where the return address is located relative to buf, he can craft the input to cause PL_strcpy to set the return address to a value of his choosing. A common technique is to fill buf with assembly code that gives the attacker additional access to the user's machine, then set the return address to the beginning of buf.

That's the simplest example - buffer overruns occur in many forms, some more subtle than this. For one thing, overruns can occur on the heap as well as the stack. That is, storage space allocated with malloc or new can be overflowed as well, and adjacent data modified. Overruns on the heap are more difficult to exploit for malicious gain, since it is harder to predict the layout of data adjacent to the buffer. However, exploits are still possible.
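The defense is the same on the heap as on the stack: derive the allocation size from the actual input and bound every copy to the allocation. A minimal sketch (the function name is illustrative):

```c
#include <stdlib.h>
#include <string.h>

/* Duplicate a string onto the heap safely: the allocation is sized
 * from the real input length, and the copy is bounded to it.
 * Returns a malloc'd copy the caller must free, or NULL on failure. */
static char* copyString(const char* input)
{
    size_t len = strlen(input);
    char* buf = (char*)malloc(len + 1);   /* room for the data plus '\0' */
    if (buf == NULL)
        return NULL;

    memcpy(buf, input, len);
    buf[len] = '\0';                      /* explicit null termination */
    return buf;
}
```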

The solution to most buffer overflow problems is to limit the amount of data that can be copied to the buffer. In the above example, the easiest way to do this is to replace PL_strcpy with PL_strncpy, the bounded version:

    void saferFunction(char* input)
    {
        char buf[100];

        PL_strncpy(buf, input, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';

        // Additional tasks...
    }


Note that I could have used PL_strncpy(buf, input, 99), but then if someone were to change the size of buf, they would have to remember to change the PL_strncpy call as well. It's much safer to use sizeof(buf), which is calculated at compile time and so has no performance impact. The other thing to note is that I explicitly set the last byte of buf to the null character. This is necessary because PL_strncpy (like the C library version, strncpy) is not guaranteed to null-terminate the buffer.

Other functions besides {PL_}strcpy are at risk for buffer overrun problems, such as {PL_}strcat, the sprintf family of calls, scanf, and gets. A complete list can be found in the table below.

Format Bugs

Functions such as printf(), fprintf(), sprintf(), and snprintf() are known as format functions. These functions take a format string argument which can contain directives like '%s'. When these markers are encountered, the function inserts data into the resulting string based on the arguments following the format string. For example,

    printf("Today is the %ith day of %s", 5, "May");    

prints the string "Today is the 5th day of May" to the console. The danger of format functions comes when an attacker can influence the contents of the format string. If a format string contains more '%' directives than there are arguments following it, the function will begin to read other function arguments or local variables from the stack and include them in the output string. This could allow an attacker to read data about the internal state of your function. Worse than that, the '%n' directive writes the number of bytes written so far into the corresponding argument, or, if there are more %-directives than arguments, into other locations on the stack. This creates a condition much like a buffer overrun, in that it allows an attacker to modify a running program or even execute arbitrary code, such as by overwriting the return address.

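As a minimal sketch of the safe pattern (the helper name is invented for illustration), untrusted text must travel through a %s directive, never as the format string itself:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: copy untrusted text into a message buffer.
 * The text is passed as DATA (via %s), never as the format string. */
static void formatSafely(char* out, size_t outLen, const char* untrusted)
{
    /* BAD:  snprintf(out, outLen, untrusted);  -- %-directives obeyed */
    /* GOOD: any %x or %n in the attacker's text is printed literally. */
    snprintf(out, outLen, "%s", untrusted);
}
```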

The solution to these problems is not to let untrusted Web content (or the user, for that matter) specify the format string. Ideally, the format string should be hardcoded. Instead of a simple printf call like this, where str may contain untrusted data:

    printf(str);    

always do this instead:

    printf("%s", str);    

"Locking" the format string in this way eliminates the vulnerability. If you can't hard-code the format string, and the data it contains may have come from an untrusted source, filter it beforehand. For example, if you expect a format string that will include no more than three other strings using %s directives, you could validate the input like so:

    void buildString(char* formatIn, char* data1, char* data2, char* data3)
    {
        PRInt32 percentCount = 0;
        for (PRUint32 j = 0; j < PL_strlen(formatIn); j++) {
            if (formatIn[j] == '%') {
                percentCount++; // Increment our count of % directives
                if (formatIn[j + 1] != 's') {
                    // We found a format directive other than %s,
                    // so abort the function with an error
                    return;
                }
            }
        }
        if (percentCount > 3) {
            // More than three % directives, so abort with an error
            return;
        }

        char buf[1000];
        snprintf(buf, sizeof(buf) - 1, formatIn, data1, data2, data3);
        buf[sizeof(buf) - 1] = '\0';
    }

Before calling snprintf, we check that the format string contains at most three %-directives, all of the %s variety. Note that we use snprintf with a sizeof() limit instead of sprintf to avoid buffer overruns, and we explicitly null-terminate buf.

Dangerous Functions

Dangerous C/C++ Functions

  • gets - Risk: Very High. Problem: no bounds checking. Do not use gets; use fgets instead.
  • strcpy - Risk: Very High. Problem: no bounds checking. strcpy is safe only if the source string is a constant and the destination is large enough to hold it. Otherwise, use strncpy.
  • sprintf - Risk: Very High. Problems: no bounds checking, format string attacks. sprintf is very hard to use safely; use snprintf instead.
  • scanf, sscanf - Risk: High. Problems: possibly no bounds checking, format string attacks. Make sure all %-directives match the corresponding argument types. Do not use '%s' directives with no bounds checking; use '%xs' where x is at least one less than the size of the corresponding buffer, leaving room for the null character. Do not use untrusted, un-validated data in the format string.
  • strcat - Risk: High. Problem: no bounds checking. If input sizes are not well-known and fixed, use strncat instead.
  • printf, fprintf, snprintf, vfprintf, vsprintf, syslog - Risk: High. Problem: format string attacks. Do not use untrusted, un-validated data in the format string. If the format string can be influenced by Web content or user input, validate it for the proper number and type of %-directives before calling these functions. Make sure destination size arguments are correct.
  • strncpy, fgets, strncat - Risk: Low. Problem: may not null-terminate. Always explicitly null-terminate the destination buffer. Make sure the size argument is correct. Be sure to leave room in the destination buffer to add the null character!

File Access Issues

Temp Files
Symlink attacks
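These two sections were left unfinished. As a sketch of the standard mitigation (assuming a POSIX system; the function name and path are illustrative): never open a temporary file at a predictable path, since an attacker who guesses the name can pre-create it or plant a symlink pointing at a file you'd rather not overwrite. mkstemp creates the file atomically under a unique name:

```c
#include <stdio.h>
#include <stdlib.h>

/* Create and open a temporary file safely (POSIX sketch).
 * mkstemp replaces the XXXXXX with a unique suffix and creates the
 * file with O_CREAT|O_EXCL, so a symlink planted at a guessed name
 * cannot redirect the write elsewhere. Returns the open fd, or -1. */
static int openTempFile(char* pathOut, size_t pathLen)
{
    if (pathLen < sizeof("/tmp/mozilla-XXXXXX"))
        return -1;                 /* caller's buffer is too small */
    snprintf(pathOut, pathLen, "/tmp/mozilla-XXXXXX");
    return mkstemp(pathOut);       /* fails rather than follow a planted link */
}
```

The crucial property is the atomic create-and-open: there is no window between checking that the name is unused and opening it, which is exactly the race a symlink attack exploits.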