


The Netlib Microkernel Architecture

This document is a collection of ideas about a new direction for breaking up netlib and allowing protocols to be built and plugged in with few requirements on the protocol writer.

The role of the new netlib core will be more clearly defined than in the past:
1. Provide a registration point for protocol implementations
2. Provide an entry point for running commands
3. Dispatch commands to the appropriate registered protocol
4. Create transport mechanisms for protocols upon request
5. Abstract transport i/o into a stream interface
6. Timeslice i/o without interfering with or requiring work from the UI thread
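
As a rough sketch, these responsibilities could be expressed as a single service interface along the following lines.  Apart from nsITransport, which is discussed below, every name and signature here is a placeholder rather than a settled API:

    // Placeholder declarations; only nsITransport appears elsewhere in this
    // document, and even its exact shape is still an open question.
    typedef unsigned int nsresult;   // stand-in for the usual XPCOM result code
    class nsITransport;              // container for an input stream and an output stream
    class nsIProtocolManager;        // per-scheme factory for Protocol Connections

    class nsINetService {
    public:
        // (1) Map a scheme ("http", "ftp", ...) onto a Protocol Manager.
        virtual nsresult RegisterProtocol(const char* aScheme,
                                          nsIProtocolManager* aManager) = 0;

        // (2) + (3) Entry point for running commands; the command is
        // dispatched to the Protocol Manager registered for the URL's scheme.
        virtual nsresult RunCommand(const char* aURL) = 0;

        // (4) + (5) Create a transport (socket, file, memory buffer, ...)
        // whose i/o is abstracted behind stream interfaces.
        virtual nsresult CreateTransport(const char* aHost, int aPort,
                                         nsITransport** aResult) = 0;

        // (6) Timeslicing of transport i/o happens on netlib's own thread,
        // so there is no "drive me" method that the UI thread must call.
        virtual ~nsINetService() {}
    };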

Protocol implementations are no longer tightly bound to the core netlib; they will communicate with netlib through abstract stream interfaces.

Netlib becomes a service provider for an application that wants to build on top of it.  Applications must provide their own protocol implementations which can be plugged into netlib.  Applications run network operations through netlib's command entry point (most likely through URLs).  Netlib will drive the protocols in response to those commands.
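
Using the placeholder names from the sketch above, the application side might then look roughly like this.  ImapProtocolManager is a purely hypothetical, application-supplied implementation of the manager interface, and the URL handling is an assumption:

    // Hypothetical application-side usage: plug a protocol in, then run a
    // command through netlib's entry point and let netlib drive it.
    void RunImapExample(nsINetService* aNetlib) {
        aNetlib->RegisterProtocol("imap", new ImapProtocolManager());

        // Netlib dispatches this to the manager registered for "imap" and
        // drives the resulting Protocol Connection in response.
        aNetlib->RunCommand("imap://mail.example.com/INBOX");
    }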

Protocol Managers and Protocol Connections

Core netlib will provide a way for an application to register a protocol implementation.  This registration point should map a protocol scheme ("http:", "ftp:", etc.) to a Protocol Manager for that protocol.  The main role of the Protocol Manager will be to create instances of Protocol Connections.  When a command is run through the netlib entry point, netlib will dispatch that command to the appropriate Protocol Manager for the given scheme.  In the simplest case (maybe we can provide a default implementation of this), the Protocol Manager then asks netlib to create a "Transport" (nsITransport?), which it hands off to a new instance of a protocol-specific Protocol Connection.  The nsITransport is basically a container for an input and output stream, and possibly a control interface for handling lower-level control issues between a protocol and the core netlib.  New Protocol Connections must be bound to an nsITransport that netlib has provided, since netlib will drive the Protocol Connection through the nsITransport interface.

A more complex Protocol Manager (for a protocol such as IMAP, which imposes restrictions on connections, or in which connections must be maintained between commands) can cache nsITransports after a given Protocol Connection has finished with them.  In that case, the socket connection would remain open and idle until the next time an IMAP command is run, at which point the Protocol Manager would NOT request a new nsITransport from the core netlib; it would simply reuse one from its protocol-specific connection cache.  The Protocol Manager may take on additional responsibilities as well, depending on the protocol, such as queuing commands.
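
A caching Protocol Manager of the kind described for IMAP might look something like the following sketch.  It reuses the placeholder declarations from the earlier sketch; the method names and cache policy are assumptions, and ImapProtocolConnection is a hypothetical class sketched in the next paragraph:

    #include <vector>

    // Hypothetical IMAP Protocol Manager (would implement nsIProtocolManager)
    // that keeps idle transports alive between commands instead of asking
    // netlib for a new one each time.
    class ImapProtocolManager {
    public:
        nsresult RunCommand(const char* aURL, nsINetService* aNetlib) {
            nsITransport* transport = 0;
            if (!mIdleTransports.empty()) {
                // Reuse an open, idle connection from the cache.
                transport = mIdleTransports.back();
                mIdleTransports.pop_back();
            } else {
                // Simplest case: ask core netlib for a fresh transport.
                aNetlib->CreateTransport("mail.example.com", 143, &transport);
            }
            // Bind a new Protocol Connection to the transport; netlib will
            // drive it through the nsITransport streams.
            return (new ImapProtocolConnection(transport, this))->Run(aURL);
        }

        // Called by a Protocol Connection when its command has finished,
        // leaving the socket open and idle for the next IMAP command.
        void RecycleTransport(nsITransport* aTransport) {
            mIdleTransports.push_back(aTransport);
        }

    private:
        std::vector<nsITransport*> mIdleTransports;
    };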

Protocol Connections, as stated previously, are created for each instance of an open, active connection of a given protocol.  Protocol Connections maintain connection-specific state, which should probably be represented as its own contained object so that it can be registered with the Protocol Manager if the Protocol Connection's nsITransport is later cached there.  Protocol Connections also contain the implementation of the protocol-specific parser, which reads and writes to/from the streams given in the nsITransport.  Protocol Connections are the objects which communicate with the application when protocol-specific events occur, such as new data becoming available for an application-specific consumer.  (See below for more on this.)
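
In code, a Protocol Connection of this shape might keep its state in a separate object so the state can survive alongside a cached transport.  The sketch below continues the hypothetical IMAP example; the parser hook and every name beyond nsITransport are assumptions:

    #include <string>

    // Hypothetical connection-specific state, kept as its own object so the
    // Protocol Manager could hold onto it when the transport is cached.
    struct ImapConnectionState {
        bool        selectedFolderKnown;
        std::string selectedFolder;
    };

    // Placeholder for the protocol-specific parser.
    void ParseImapResponse(const char* aBuffer, unsigned int aLength,
                           ImapConnectionState* aState);

    // Hypothetical Protocol Connection: owns the parser, reads and writes the
    // streams inside its nsITransport, and reports events to the application.
    class ImapProtocolConnection {
    public:
        ImapProtocolConnection(nsITransport* aTransport,
                               ImapProtocolManager* aManager)
            : mTransport(aTransport), mManager(aManager),
              mState(new ImapConnectionState) {}

        nsresult Run(const char* aURL);   // issue the command on the output stream

        // Reached (via the proxy listener described under "Threads") when the
        // netlib thread has new data on the transport's input stream.
        nsresult OnDataAvailable(const char* aBuffer, unsigned int aLength) {
            ParseImapResponse(aBuffer, aLength, mState);
            return 0;
        }

        void OnCommandComplete() {
            // Hand the still-open transport back to the manager; the state
            // object could be registered with the manager at the same time.
            mManager->RecycleTransport(mTransport);
        }

    private:
        nsITransport*        mTransport;
        ImapProtocolManager* mManager;
        ImapConnectionState* mState;
    };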
 

Threads

The only part of netlib running on its own thread will be the timeslicing mechanism.  This will cycle through open, active Transports and read/write data between their corresponding i/o mechanisms (the socket, the file, the memory buffer, etc.) and their streams, notifying their stream listeners when data is available.  This should be the only activity occurring in the netlib thread.  Protocol registration, the command entry point and dispatching, and the Protocol Managers and Protocol Connections should all run on the application thread (or a different thread).  The main role of the netlib thread is simply to isolate the process of timeslicing i/o from the application's main event loop, so that applications can be built without explicit knowledge of how to drive netlib.
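
In rough outline, the netlib thread's loop might look like the following.  The ActiveTransport hooks shown here are assumptions made for illustration; the real nsITransport would hide these details behind its stream interfaces:

    #include <vector>

    // Placeholder view of an active transport as seen by the netlib thread.
    class ActiveTransport {
    public:
        virtual int  ReadFromSource(char* aBuf, int aLen) = 0;   // socket/file/buffer -> bytes
        virtual void NotifyDataAvailable(const char* aBuf, int aLen) = 0;  // -> stream listener
        virtual void FlushPendingOutput() = 0;                   // output stream -> socket/file/buffer
        virtual ~ActiveTransport() {}
    };

    // Hypothetical body of the netlib thread: cycle through the open, active
    // transports and move data between their i/o mechanisms and their streams.
    void NetlibThreadLoop(std::vector<ActiveTransport*>& aTransports,
                          volatile bool& aShutdown) {
        char buffer[4096];
        while (!aShutdown) {
            for (size_t i = 0; i < aTransports.size(); ++i) {
                ActiveTransport* t = aTransports[i];
                int n = t->ReadFromSource(buffer, sizeof(buffer));
                if (n > 0) {
                    // Listener notification; crossing back into the application
                    // thread is handled by the proxy described below.
                    t->NotifyDataAvailable(buffer, n);
                }
                t->FlushPendingOutput();
            }
        }
    }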

To communicate from the netlib thread into the application thread when data is available on the socket, a proxy stream listener class will be used which encapsulates all threading issues.  In this case, neither the Transport implementation nor the protocol implementation (very important) needs to know that it is communicating with another thread.  A proxy stream listener class is already built and running in the Seamonkey tree; this is what is currently used to communicate from HTTP (which right now is in the netlib thread) to raptor.  This new design moves that proxy down one level, underneath the protocol implementation.  Work will have to be done to allow Protocol Connections to write back across the thread boundary in the other direction.
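
A proxy stream listener of the kind described might be sketched as follows.  The event-queue type, the simplified listener interface, and the posting mechanism are all stand-ins; they are not the actual Seamonkey proxy:

    #include <string>

    class DataEvent;

    // Placeholder for the application thread's event queue.
    class EventQueue {
    public:
        virtual void PostEvent(DataEvent* aEvent) = 0;
        virtual ~EventQueue() {}
    };

    // Simplified stand-in for nsIStreamListener.
    class StreamListener {
    public:
        virtual void OnDataAvailable(const char* aBuf, unsigned int aLen) = 0;
        virtual ~StreamListener() {}
    };

    // Event that carries the data across the thread boundary and replays the
    // notification on the application thread.
    class DataEvent {
    public:
        DataEvent(StreamListener* aTarget, const char* aBuf, unsigned int aLen)
            : mTarget(aTarget), mData(aBuf, aLen) {}
        void Run() {
            mTarget->OnDataAvailable(mData.data(), (unsigned int)mData.size());
        }
    private:
        StreamListener* mTarget;
        std::string     mData;
    };

    // The proxy looks like an ordinary stream listener to the transport, so
    // neither the Transport nor the protocol knows a thread boundary exists.
    class ProxyStreamListener : public StreamListener {
    public:
        ProxyStreamListener(StreamListener* aRealListener, EventQueue* aAppQueue)
            : mRealListener(aRealListener), mAppQueue(aAppQueue) {}

        void OnDataAvailable(const char* aBuf, unsigned int aLen) {
            // Called on the netlib thread: queue the notification rather than
            // calling the protocol's listener directly.
            mAppQueue->PostEvent(new DataEvent(mRealListener, aBuf, aLen));
        }
    private:
        StreamListener* mRealListener;
        EventQueue*     mAppQueue;
    };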
 

Communication with the App from the Protocol Connections

Protocol Connections communicate with the application through what we have been referring to as "event sinks": interfaces encapsulating protocol-specific events which can occur on a connection and which would be of interest to the application.  Something similar is already used today, in the sense that a protocol such as HTTP can stream data out to raptor using an nsIStreamListener interface to notify raptor that new stream data is available.  By making this mechanism more generic, the consumer is not required to be an nsIStreamListener; it can be an object implementing other protocol-specific interfaces which the Protocol Connection can query, and whose methods can be called to alert the application of other events.  Examples of such events are other forms of out-of-band data, such as the discovery of a new folder or the deletion of a message -- things that are hard to represent as streams.
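
In code, this generic consumer idea might look roughly like the following: the Protocol Connection queries its consumer for a protocol-specific sink interface and calls it for events that do not fit a stream.  The sink interface, its methods, and the QueryImapEventSink hook (a stand-in for an nsISupports-style QueryInterface) are all assumptions:

    // Hypothetical protocol-specific event sink implemented by the application
    // (e.g. the mail front end) for out-of-band IMAP events.
    class nsIImapEventSink {
    public:
        virtual void OnFolderDiscovered(const char* aFolderName) = 0;
        virtual void OnMessageDeleted(const char* aFolderName, int aMessageId) = 0;
        virtual ~nsIImapEventSink() {}
    };

    // Placeholder consumer: may be a stream listener, an event sink, both,
    // or something else entirely; the Protocol Connection asks.
    class Consumer {
    public:
        virtual nsIImapEventSink* QueryImapEventSink() = 0;
        virtual ~Consumer() {}
    };

    // Inside the IMAP Protocol Connection's parser: stream data still goes to
    // a stream listener, but events that are hard to represent as streams go
    // to the sink, if the consumer provides one.
    void ReportFolderDiscovered(Consumer* aConsumer, const char* aFolderName) {
        if (nsIImapEventSink* sink = aConsumer->QueryImapEventSink())
            sink->OnFolderDiscovered(aFolderName);
    }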

The Socket Stub

From the perspective of the mail/news team, we were looking for a way to plug our protocols into netlib and be able to communicate between the protocols and our application at runtime.  By laying the framework first with a prototype, we are hoping to be able to split off from the core netlib group and have the core group write the production-quality code that will be behind the scenes from our perspective.  We were hoping to start migrating our protocols from the 4.5 world into the 5.0 world in parallel with this, and for that we needed something up and running in the core.  The socket stub was a quick implementation of something that might provide basic testability for us, so that we can work in parallel while the core netlib group fleshes out the internals.

The socket stub simply uses the current MWContext/ActiveEntry design to timeslice between open socket connections.  In essence, it is a stub protocol that simply maintains an open socket connection and flushes data between the socket and the given streams.  Because it uses the current MWContext/ActiveEntry design, these connections can be intermingled with HTTP and other protocols which are still in the old (?) architecture, providing a migration path for new protocols to get going.  The socket stub was not written to be production-quality, but rather to provide something behind the interfaces that allows the mail/news team to get going.  In theory, though, it might turn out that an implementation similar to the socket stub could be used permanently, or at least temporarily and then replaced in a later release.