
How to Measure and Analyze Startup Performance

Suresh Duddi
Cathleen Wang
Simon Fraser

Performance timeline

The timeline is a linearly growing series of timestamps recording what happens, and when, starting from launch. It also provides macros that can be used to time individual functions or blocks of code.
  1. Build setting
    Windows:   set MOZ_TIMELINE=1 [default for DEBUG builds]
    Unix:   ac_add_options --enable-timeline
    Mac:   put 'options timeline 1' in your build prefs file
  2. Rebuild
  3. Runtime setting: to turn ON timeline data output
    Windows:   set NS_TIMELINE_ENABLE=1
    Unix:   setenv NS_TIMELINE_ENABLE 1
    Mac:   set environment variables as described here
  4. Runtime setting: to log all timeline output to file instead of console
    Windows:   set NS_TIMELINE_LOG_FILE=filename
    Unix:   setenv NS_TIMELINE_LOG_FILE=filename
    Mac:   set environment variables as described here
This enables the performance timeline and the timing macros in your build. To measure how much time is spent in a function, you could do:
#include "nsITimelineService.h"

nsresult myfunc(char *args)
{
    NS_TIMELINE_START_TIMER("myfunc timer");

    // ... body of function or code to be timed ...

    NS_TIMELINE_STOP_TIMER("myfunc timer");

    // Print timing results
    NS_TIMELINE_MARK_TIMER("myfunc timer");

    return NS_OK;
}
This prints to the console, or to the file selected via NS_TIMELINE_LOG_FILE. Timeline must be enabled in your build for these macros to do anything; on release builds they compile to no-ops, so it is safe to check them into the tree.

If the function is called multiple times and only the cumulative time is interesting, move NS_TIMELINE_MARK_TIMER("myfunc timer") outside the function to a suitable point. These timers are stored by nsITimelineService and are accessible from anywhere in the code until XPCOM shutdown.

All developers are strongly encouraged to enable timeline. These macros do the right thing on all platforms: they use high-resolution timers and avoid being fooled by clock skew. Use them rather than rolling your own.

Startup Performance Measurement

Always follow these precautions when doing startup measurement:
  1. Always use either an optimized or MOZ_PROFILE build to measure startup
  2. Always use the "Default User" profile without any changes made to it
  3. Don't let any output go to the console. Always redirect output to a file
  4. Ignore the timing of the first run
  5. Average the timings of the next 3 runs, done in quick succession

Method 1 : Recommended for developers

If you enabled timeline, then
  1. Create quit.html as
    <body onLoad="window.close();">
  2. Create a default profile say "Default User"
  3. ./mozilla -P 'Default User' file:///c:/tmp/quit.html > out 2>&1 (windows)
    ./mozilla -P "Default User" file:///usr/tmp/quit.html >& out (unix csh/tcsh)
  4. Look for the timing of main1 in the output. That is roughly the startup time. Ignore the first run and take the average of 3 runs.
Here is a simple script that does all of the above:
cvs co mozilla/tools/performance/startup
cd mozilla/tools/performance/startup
perl ../../../dist/bin/mozilla

Method 2 : QA and tinderbox do this

  1. cvs co mozilla/tools/performance/startup
  2. cd mozilla/tools/performance/startup
  3. ../../../dist/bin/mozilla -P "Default User" (unix/win)
    ??? (mac)
This starts the browser with the timing shown in the content area. This is what the tinderboxes do.

Startup Performance Analysis

You can use many tools to analyze what happens during startup.
  • Windows - Quantify, VTune
  • Unix - Jprof, trace-malloc
  • Mac - ???
Refer for more information.

Last Modified: Oct 12 2001