Transaction Manager Tech Spec
author: jgaunt date: Sept 25, 2002
Summary
The transaction manager is the core piece of the profile-sharing architecture described in the Shared Profile Tech Spec (profile sharing tech doc homepage). It will be in charge of communication between Gecko processes. This document describes the architecture to be used for the manager itself.
Requirements
- enable multiple processes to notify each other
- serve as a messaging system between gecko processes
- allow processes to specify the particular types of messages they are interested in
- allow processes to register to get messages
- provide a way for 2 processes to communicate directly with each other (match-maker service)?
Design
- The transaction manager will run out of process and communicate via an IPC mechanism. It will coordinate the messaging aspect of profile sharing: it will receive from a process a message describing a change to the profile data, and that message will then be relayed to all processes that have registered as listeners for the profile subsystem being modified. Only a system interested in the specific change will be notified of it; processes registered to listen for preference changes will not get messages about cookie changes.
- It makes sense to separate the different levels of support into base classes to allow for reuse. In particular, a system that only wants to set up a pipe to a single other system shouldn't incur the costs inherent in a broadcast model.
 - A top-level base class may only mediate the exchange between 2 processes, setting up a channel between them and then stepping out of the way to let them talk directly to each other.
 - A second level might hold handles to processes that have registered as a certain type of app ( i.e. mail, IM... ); processes that need to talk to a specific type of app can get a connection made through this type of service.
 - A third could implement the broadcast model described in this document.
 - Yet another could be more specific to profile data and include some knowledge of the type of data being transferred, perhaps by further subdividing the types of changes a process can register interest in (if that makes sense), or by controlling access to files and handing out file handles, etc...
- Transaction Service Public Interface - The important thing to remember is that this is a messaging service, like the post office (except without the losing or mangling of mail or long lines at lunchtime). It should have little (if any) knowledge of what it is passing back and forth. The public API is really the format of the messages and the types of things you can register for. Messages must be able to:
- post a message to a particular queue
- see if a queue is in use (people are registered)
- get all messages for a queue
- indicate a process is starting up and needs messages from a certain queue (the TM needs to deny any posts to that queue until all current messages are delivered to the new process)
- indicate a process is shutting down and needs to update before writing to disk (same problem as above)
- send a message to a particular type of listener (in the case of wanting to communicate with the mail program etc)
- register as a particular type of listener (as in above, register as a mail program)
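To make that vocabulary concrete, the registration and messaging operations above could be modeled as a small set of message types. A rough sketch in Python, purely for illustration (the names MsgType and make_message are invented here, not a committed API):

```python
from enum import Enum, auto

class MsgType(Enum):
    # the operations the public message format must be able to express
    POST = auto()              # post a message to a particular queue
    QUEUE_IN_USE = auto()      # ask whether anyone is registered on a queue
    GET_MESSAGES = auto()      # fetch all messages for a queue
    STARTUP = auto()           # new process: hold posts until it is caught up
    SHUTDOWN = auto()          # process flushing to disk: same hold applies
    SEND_TO_TYPE = auto()      # deliver to a type of listener (e.g. mail)
    REGISTER_TYPE = auto()     # register as a particular type of listener

def make_message(msg_type, queue, payload=b""):
    # one possible wire shape: (type, queue, opaque payload);
    # the TM never interprets the payload, only the type and queue
    return (msg_type, queue, payload)
```

The point of the tuple shape is that only the first two fields mean anything to the TM; everything after them is opaque freight, per the post-office analogy above.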
- TM internal (private) methods:
- Init()
- AddTransactionListener(QueueID, PID(?), nsISupports*) - takes the queue to add the process to and a pointer back to the object that cares (this is who will get called with the notification that a message is available)
- RemoveTransactionListener(QueueID, PID(?), nsISupports*)
- PostTransaction(TransType, Message)
- ReceiveTransaction(TransType, Message)
- Shutdown()
- IsQueueEmpty()
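A toy in-process model of the listener bookkeeping and queue-filtered broadcast described above (illustrative only: the real TM lives out of process, the callback here stands in for the IPC notification, and all names are invented for the sketch):

```python
from collections import defaultdict

class TransactionManager:
    """Toy model of per-queue listener registration and broadcast."""

    def __init__(self):
        # queue name -> list of (pid, callback) pairs
        self.listeners = defaultdict(list)

    def add_transaction_listener(self, queue_id, pid, callback):
        self.listeners[queue_id].append((pid, callback))

    def remove_transaction_listener(self, queue_id, pid):
        self.listeners[queue_id] = [
            (p, cb) for p, cb in self.listeners[queue_id] if p != pid
        ]

    def is_queue_in_use(self, queue_id):
        # "see if a queue is in use (people are registered)"
        return bool(self.listeners[queue_id])

    def post_transaction(self, queue_id, sender_pid, message):
        # relay only to processes registered for this queue, and
        # never echo the change back to its originator
        for pid, callback in self.listeners[queue_id]:
            if pid != sender_pid:
                callback(message)
```

Note how a process registered only for "prefs" never hears about a post to "cookies", which is exactly the filtering behavior the Design section calls for.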
- The TM will have to monitor several different IPC connections at the same time and must be able to synchronize the messages received from different sources. An alternative would be to have all incoming traffic arrive on a single IPC connection, so that requests for info or posts of changes are streamed in one at a time (effectively making the receive() method synchronized). Depending on the IPC mechanism used this may or may not be possible. We are looking into using a new IPC mechanism written by Netscapers for use with a new PSM module. We are not sure when that will land in Mozilla, or whether it will meet our needs. jgaunt is going to look at the IPC code and rpotts is checking into the landing plans for the PSM module.
- Internal data structure - the TM will need to keep track of:
- what processes have started up and attached to the TM as possible listeners to things
- what processes are actually registered listeners
- what queue(s) each process is listening to
- what queues are in existence and how many processes are attached to each
- particular process types ( mail, browser, aim, addressbook etc )
- IPC connections to each process
- messages in each queue
- For each process, a top of queue index for each queue it is registered for
- name? version? type (pref/cookie/etc)?
- Each message will need to keep track of ( or the queue will need to for each message )
- owning process
- message type (a pref queue can have addpref, removepref, resetpref ... )
- message data
- ref count
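One possible shape for the bookkeeping just listed, sketched in Python (Transaction, Queue, and the field names are assumptions for illustration, not a spec):

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    owner_pid: int    # owning process
    msg_type: str     # e.g. "addpref", "removepref", "resetpref"
    data: bytes       # opaque message data
    refcount: int     # listeners still to consume; -1 = owner-only, unflushed

@dataclass
class Queue:
    name: str                                # e.g. "prefs", "cookies"
    transactions: list = field(default_factory=list)
    # per-process top-of-queue index: the next transaction
    # each registered PID has yet to see
    toqi: dict = field(default_factory=dict)  # pid -> index

    def attach(self, pid):
        # a newly attached process starts at the current end of the
        # queue, so it only sees transactions posted after it joined
        self.toqi[pid] = len(self.transactions)
```

Keeping a top-of-queue index per process, rather than one cursor per queue, is what lets listeners consume at different rates without losing messages.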
Use Cases
These use cases are going to take into account a slightly wider range of topics than covered in the Shared Profile Tech Doc. They deal with the specific cases of the programmatic use of the TM. These use cases should flush out any issues or expose dangerous areas touched by the TM. Specifically we are paying attention to the interaction between processes, with startup and shutdown being the most dangerous times for data corruption.
- Startup - From the TM point of view, this step is very simple. When a process that is set up to use shared profile data begins, it will launch the TM. The TM will merely initialize itself and listen for a request from a process wanting to add itself to a transaction list.
- Shutdown - During the removal of a process from the listener list, a check will be made to see if the process is going away, and then a check needs to be made to see whether any processes remain "attached" to the TM without being on a list. If no processes are even "attached", the TM will shut down, cleaning up after itself, of course.
- Initialization of TM - Basically just setup of any data structures to be prepared to receive messages and then blocking (or something) on a socket or waiting for whatever IPC mechanism we go with.
- Adding message listeners - This will be initiated by a message via IPC. Enough info will need to be in the message to be able to return a message to the process interested, either a handle to the pipe/socket or perhaps a PID.
- Removing message listeners - Again, initiated by a message via IPC. The listener specified will be removed from the list of interested processes for a specified queue.
- receiving messages - if we can have all processes attach to a single pipe into the TM, that would be ideal. This may be the final step of initialization, or the next step following initialization (it probably doesn't matter). When a message comes in, the type of message needs to be determined (add/remove listener, post message to queue, startup, shutdown, get messages from queue... ). The type will be stripped from the message and the rest of the information will be passed along, intact (not parsed yet), to the corresponding helper function.
- if a message is posted to a list with only the owner subscribed, a refcount of -1 will be set, to indicate that the message has been posted and not flushed to disk yet, but no other processes were around when it was posted. Any processes that start up, or attach to the queue before the change has been written out to disk will pick up this message, but not delete it. Messages with refcount of -1 will be removed only once written to disk.
- If there is more than just the owner watching the queue the ref count will be set to the number of processes - 1 ( for the owner). When the ref count reaches 0 the process that set the ref count to 0 will be responsible for removing the transaction from the queue and adjusting any remaining transactions.
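The strip-the-type-and-hand-off step described under "receiving messages" might look something like this (the framing, the numeric type codes, and the dispatch helper are all hypothetical):

```python
# hypothetical framing: a 1-byte message type followed by an opaque body
ADD_LISTENER, REMOVE_LISTENER, POST, STARTUP, SHUTDOWN, GET_MESSAGES = range(6)

def dispatch(raw, handlers):
    """Strip the type byte and hand the rest, unparsed, to a helper.

    `handlers` maps a type code to the helper function for that
    message type; the helper is responsible for parsing the body.
    """
    msg_type = raw[0]
    body = raw[1:]   # passed along intact (not parsed yet)
    return handlers[msg_type](body)
```

The TM itself stays ignorant of the body's contents; only the helper registered for each type code ever parses it, which keeps the post-office property intact.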
- broadcasting messages - when a message is posted to a queue, all the listeners for that queue need to be notified. Using the IPC we will either send the message itself to each process, or send a notification that new messages are available and allow each process to fetch them at its leisure.
- clean queue - after a process has written its state to the disk, any lingering refcount -1 messages need to be removed from the queue. The trigger for this may be a special "I'm going to write out" retrieve-messages message, or it may be a separate message itself.
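Putting the three ref-count rules above together (-1 for owner-only posts, listener count minus one otherwise, and cleanup of -1 entries once flushed to disk), a sketch under those assumptions (the function names and dict layout are made up):

```python
def post(queue_transactions, listener_count, owner_pid, msg_type, data):
    # refcount = listeners - 1 (the owner doesn't count itself);
    # -1 means only the owner was subscribed and the change has not
    # yet been flushed to disk
    ref = listener_count - 1 if listener_count > 1 else -1
    queue_transactions.append({"owner": owner_pid, "type": msg_type,
                               "data": data, "ref": ref})

def consume(queue_transactions, txn):
    # a reader picked up the message; whoever drops the count to 0
    # is responsible for removing the transaction from the queue
    if txn["ref"] > 0:
        txn["ref"] -= 1
        if txn["ref"] == 0:
            queue_transactions.remove(txn)
    # ref == -1: readers pick it up but never delete it

def clean_queue(queue_transactions):
    # after the owner's state is written to disk, drop the lingering
    # refcount -1 entries
    queue_transactions[:] = [t for t in queue_transactions if t["ref"] != -1]
```

The asymmetry is deliberate: -1 entries survive any number of readers and die only at flush time, while positive-refcount entries die the moment the last registered reader consumes them.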
- There needs to be a way for a process to get the notification of messages, and it will depend on the IPC mechanism. We may have to have a thread block on the socket, we may have to poll, or we may need a platform-specific way of doing it to avoid all of the above. Something that automagically posts events to the main GUI thread is, I think, preferable. Again, this all depends on the IPC system.
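If the IPC mechanism only gives us a blocking socket, the thread-that-blocks option could look like this (a sketch under that assumption; start_listener and the event-queue hand-off are invented here, and the real thing would post to the GUI event loop rather than a plain queue):

```python
import queue
import socket
import threading

def start_listener(sock, event_queue):
    """Block on the IPC socket in a helper thread and re-post
    incoming messages to the main thread's event queue."""
    def pump():
        while True:
            data = sock.recv(4096)
            if not data:       # peer closed the connection
                break
            event_queue.put(data)
    t = threading.Thread(target=pump, daemon=True)
    t.start()
    return t
```

With socketpair standing in for the real IPC channel: create the pair, start the listener on one end, send on the other, and the message shows up on the event queue without the main thread ever blocking on the socket.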
THIS DESIGN IS COPIED FROM THE OLD PREFERENCES TECH SPEC AND IS IN THE PROCESS OF BEING UPDATED -jgaunt 9/26
- A transaction queue will be used to notify other processes of changes in shared prefs.
- The new system will not use shared memory at all.
- the new system might use semaphores, but it remains to be seen if they will be needed. The hope is that they will NOT be needed, but reality may smack us down on that one.
- There are several patterns of access for the shared memory:
- Program startup sequence - for all processes starting up regardless of any existing processes
- Set[Int|Bool|Char|Complex]Pref
- Notification of Transaction in PTQ - when another process changes the shared memory
- Preference flushing - very similar to startup; the key differences are the transactions with -1 ref count, and moving the write to after the transaction processing. We need to make sure our copy is up to date with all the transactions before we write it to the disk.
- Program shutdown - Update our state, write out our state. Need to make sure if we are deleting the shm/sem that there aren't any other processes coming up and holding on to them while we are going away.
- Runtime addition of shared data listener - for instance, some piece of a program doesn't get created until a user needs it, but it uses a shared resource - WARNING, the *TQ may not be up and running yet!!
- Runtime removal of shared data listener - a component of a program shuts down and no longer needs the transaction notifications. It does not bring its current state up to date with what is in the transaction queue. It writes its state out to disk JUST LIKE SHUTDOWN. A question remains in my mind about writing to disk when we are just closing down a portion of the program that was interested in transactions. XXX discuss this
- Lock pref
- Unlock pref
- Clear User Pref
- Reset Branch
- Delete branch
- Summary chart: X means we definitely do it, x means we might if conditions are right.

Action                      | Lock File | GHS | MQLS | *TQS | Write Trans | Write SHM | Write File
----------------------------+-----------+-----+------+------+-------------+-----------+-----------
Startup - init              |           |  X  |  X   |      |             |     X     |
Startup - pref loading      |     X     |     |  X   |  X   |             |     X     |
Shutdown                    |     X     |     |  X   |  X   |             |     X     |     X
Set*Pref                    |           |     |      |  X   |      X      |     X     |
Notification of Pref Change |           |     |      |  X   |             |     X     |
Flush Preferences           |     X     |     |      |  X   |             |     X     |     X
Add Listener at Runtime     |           |  x  |  X   |  X   |             |     X     |
Remove Listener at Runtime  |     X     |     |  X   |  X   |             |     X     |     X
Lock Pref                   |           |     |      |  X   |      X      |     X     |
Unlock Pref                 |           |     |      |  X   |      X      |     X     |
Reset Branch                |           |     |      |  X   |      X      |     X     |
Delete Branch               |           |     |      |  X   |      X      |     X     |
Reset User Pref             |           |     |      |  X   |      X      |     X     |
Launch the Transaction Service (TS) {
    Open and attach to the Global Header Semaphore (GHS)
    Enter the GHS
    Open and attach to the Global Header (GH)
    Check for the address of the Master Queue List (MQL) in the GH
    Open and attach to the MQL Semaphore (MQLS)
    If the MQL address is not there {
        Enter the MQLS
        Create the MQL
    }
    Else {
        Enter the MQLS
    }
    Open and attach to the MQL
    Increment the count of attached processes in the MQL
    Exit the MQLS
    Exit the GHS
}
General instructions for components interested in shared data -- call the TS {
    Enter the MQLS
    Check the MQL for the name of the Transaction Queue (*TQ) desired
    Open and attach to the proper *TQ Semaphore (*TQS)
    If there is no entry for the *TQ {
        Enter the *TQS
        Create the *TQ
        Set up the *TQ with the proper header data
    }
    Else {
        Enter the *TQS
        Open and attach to the *TQ
    }
    Add our PID to the list of listening processes
    Adjust the MTOQI
    If there are outstanding transactions {
        push them down and modify any Top of Queue Indexes (TOQI) of existing PID blocks
    }
    Set our TOQI
    Increment the count of processes attached
    Increment the ref count of any outstanding transactions
    Exit the *TQS
    Exit the MQLS
}
During pref loading {
    Acquire File Lock (prefs.js, user.js etc... )
    Read the data from the file (the prefs)
    Notify the TS that we are interested in shared prefs (executing the code above)
    Enter the MQLS (shortcut this if the queue is brand new)
    Enter the *TQS for prefs (PTQS)
    Retrieve all the outstanding Transactions
    Decrement the ref count for each transaction
    If the ref count on any transaction drops to 0 {
        Remove the transaction
        Shuffle any remaining transactions upward
        Modify any TOQIs that pointed below the old transaction
    }
    Exit the PTQS
    Exit the MQLS
    Release File Lock
}
Set*Pref {
    Change local copy of the pref
    Create a transaction
    Enter the PTQS
    Place Transaction in the preference queue
    Exit the PTQS
    continue
}
Notification of Transaction in PTQ {
    Enter the PTQS
    Get any transactions below our TOQI
    Decrement the ref count for each
    If the ref count drops to 0 {
        Remove the transaction
        Shuffle any remaining transactions upward
        Modify any TOQIs that pointed below the old transaction
    }
    Exit the PTQS
}
Flush Preferences {
    Acquire File Lock (prefs.js, user.js etc...)
    Enter the PTQS
    Retrieve all the outstanding Transactions
    Decrement the ref count for each transaction
    If the ref count on any transaction drops to 0 OR equals -1 {
        Remove the transaction
        Shuffle any remaining transactions upward
        Modify any TOQIs that pointed below the old transaction
    }
    Exit the PTQS
    Write the data to the file (the prefs)
    Release File Lock
}
Shutdown {
    Acquire File Lock (prefs.js, user.js etc...)
    Enter the MQLS
    Enter the PTQS
    Retrieve all the outstanding Transactions
    Decrement the ref count for each transaction
    If the ref count on any transaction drops to 0 OR equals -1 {
        Remove the transaction
        Shuffle any remaining transactions upward
        Modify any TOQIs that pointed below the old transaction
    }
    Decrement (and capture) the process count
    Remove our PID from the PTQ
    Adjust the MTOQI
    Detach and close the PTQ for our process
    If the process count dropped to 0 {
        Delete/Destroy the PTQ from the system
    }
    Exit the PTQS
    Drop the PTQS
    If we destroyed the PTQ {
        Destroy the PTQS
    }
    Write the data to the file (prefs.js, user.js...)
    Exit the MQLS
    Release File Lock
}
Add Listener at Runtime {
    If TS has not been started already {
        Start TS
    }
    Enter the MQLS
    Check the MQL for the name of the Transaction Queue (*TQ) desired
    Open and attach to the proper *TQ Semaphore (*TQS)
    If there is no entry for the *TQ {
        Enter the *TQS
        Create and attach to the *TQ
        Set up the *TQ with the proper header data
    }
    Else {
        Enter the *TQS
        Open and attach to the *TQ
    }
    Add ourself to the *TQ {
        Add our PID to the list of listening processes
        Increment the count
        Adjust the MTOQI
        If there are outstanding transactions {
            push them down
            Increment their ref count
            Modify any Top of Queue Indexes (TOQI) of existing PID blocks
        }
        Set our TOQI
    }
    Retrieve all the outstanding Transactions
    Decrement the ref count for each transaction
    If the ref count on any transaction drops to 0 ( should not be any, right? ) {
        Remove the transaction
        Shuffle any remaining transactions upward
        Modify any TOQIs that pointed below the old transaction
    }
    Exit the *TQS
    Exit the MQLS
}
Remove Listener at Runtime {
    Acquire File Lock
    Enter the MQLS
    Enter the PTQS
    Retrieve all the outstanding Transactions
    Decrement the ref count for each transaction
    If the ref count on any transaction drops to 0 OR equals -1 {
        Remove the transaction
        Shuffle any remaining transactions upward
        Modify any TOQIs that pointed below the old transaction
    }
    Decrement (and capture) the process count
    Remove our PID from the PTQ
    Adjust the MTOQI
    Detach and close the PTQ for our process
    If the process count dropped to 0 {
        Delete/Destroy the PTQ from the system
    }
    Exit the PTQS
    Drop the PTQS
    If we destroyed the PTQ {
        Destroy the PTQS
    }
    Write the data to the file (prefs.js, user.js...)
    Exit the MQLS
    Release File Lock
}
Lock Pref / Unlock Pref / Reset Branch / Delete Branch / Reset User Pref (the same sequence applies to each) {
    Change local copy of the pref
    Create a transaction
    Enter the PTQS
    Place Transaction in the preference queue
    Exit the PTQS
    continue
}