On Jan 26, 2007, at 3:07 AM, Tom Samplonius wrote:
> ----- Janne Peltonen <janne.peltonen@xxxxxxxxxxx> wrote:
>> As a part of our clustering Cyrus system, we are considering using
>> replication to prevent a catastrophe in case the volume used by the
>> cluster gets corrupted. (We'll have n nodes each accessing the same
>> GFS, and yes, it can be done; see previous threads on the subject.)
> I really doubt this. Even if GFS works the way it says it does, Cyrus
> does not expect to see other instances modifying the same message, and
> does not lock against itself.
Yes it does. How else do you suppose two users reading the same
shared mailbox might work? They aren't all running through one imapd.
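
To be concrete: the mailbox metadata files (cyrus.index and friends)
are guarded with ordinary advisory locks, so any number of imapds on
the same store can serialize access. Here's a minimal sketch in the
style of the fcntl() lock backend -- illustrative only, not actual
Cyrus source:

    /* Illustrative only -- advisory locking in the style of Cyrus's
     * lib/lock code, not actual Cyrus source. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Block until we hold an exclusive advisory lock on fd. */
    static int lock_exclusive(int fd)
    {
        struct flock fl;
        fl.l_type = F_WRLCK;    /* exclusive (write) lock */
        fl.l_whence = SEEK_SET;
        fl.l_start = 0;
        fl.l_len = 0;           /* length 0 = whole file */
        return fcntl(fd, F_SETLKW, &fl);  /* SETLKW = wait for it */
    }

    int main(void)
    {
        int fd = open("cyrus.index", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }
        if (lock_exclusive(fd) < 0) { perror("fcntl"); return 1; }
        /* read/modify the index; other processes block here */
        close(fd);              /* closing releases the lock */
        return 0;
    }

On GFS the interesting question is whether those fcntl locks are
honored across nodes; my understanding is that GFS forwards POSIX
locks through its distributed lock manager, which is what the earlier
threads relied on.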
>> Now the workings of the replication code aren't completely clear to
>> me. It can do things like collapse multiple mailbox changes into one
>> and so on. But is it in some way dependent on there being just one
>> cyrus-master controlled group of imapd processes to ensure that all
>> changes to the spool (and meta) get replicated? Or does the
>> replication code infer the synchronization commands from changes it
>> sees on the spool, independent of the ongoing imap connections? That
>> is, do I have to have n replica nodes, one for each cluster node? Or
>> don't I?
> The Cyrus master builds a replication log as changes are made by
> imapd, pop3d, and lmtpd. The log contents are pushed to the replica.
> The master and replica both have copies of all data, within
> independent message stores.
Close. imapd, pop3d, lmtpd, and other processes write to the log. The
log is read by sync_client; it merely tells sync_client what has
(probably) changed. sync_client rolls up certain log items, e.g., it
may decide to compare a whole user's state rather than looking at
multiple mailboxes individually. Once it decides what to compare, it
retrieves IMAP-like state information from sync_server (running on the
replica) and pushes whatever changes are necessary.
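
To make that concrete: from memory of the 2.3 replication setup, the
pieces wire together roughly like this. Treat the hostnames and
credentials as placeholders, and check doc/install-replication.html
for the authoritative option list:

    # master imapd.conf -- enable the sync log, name the replica
    sync_log: 1
    sync_host: replica.example.com   # placeholder
    sync_authname: repluser          # placeholder
    sync_password: secret            # placeholder

    # master cyrus.conf, START section -- one rolling sync_client
    syncclient      cmd="sync_client -r"

    # replica cyrus.conf, SERVICES section -- sync_server listens
    syncserver      cmd="sync_server" listen="csync"

The rolling log itself lands under {configdirectory}/sync/.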
For your situation, Janne, you might want to explore sharing the sync
directory; a sketch follows. sync_client and sync_server have interlock
code, though I haven't reviewed it for this specific scenario.
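
Speculatively (I'd test the interlock before trusting it): since every
imapd/lmtpd on every node appends to the same {configdirectory}/sync/log
when the config directory lives on the shared GFS, a single rolling
sync_client on one node should see all changes, and you shouldn't need
n replicas, e.g.:

    /gfs/imap/               <- shared {configdirectory} (hypothetical path)
    /gfs/imap/sync/log       <- all nodes' imapd/lmtpd append here;
                                one node runs sync_client -r against it

As I understand it, sync_client rotates the log out of the way before
processing it, which is exactly where that interlock code matters.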
:wes