Re: High availability email server...

Hi Scott!

Your statements cannot be correct, for simple logical reasons.

While you are fully right at the file locking level, Cyrus heavily depends on critical database access, where application-level database locking is required.

Since only one master process can lock the database, a second one either fails to acquire the lock or corrupts the database with simultaneous write access. I haven't tried it myself, for obvious reasons...

If that never happened to you, then you have had incredible luck that no situation arose in which both processes wanted to change the same db file simultaneously.
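
To make this concrete, here is a minimal sketch (assuming the Berkeley DB 4.x C API; the path is illustrative, not Cyrus's actual code) of where that locking lives. The lock region created by DB_INIT_LOCK is built on shared-memory mutexes that only coordinate processes on a single machine, so a second host opening the same environment gets locks the first host never sees enforced:

    /* Minimal illustrative sketch, NOT Cyrus code: a Berkeley DB
     * environment whose lock region is a per-machine resource. */
    #include <db.h>
    #include <stdio.h>

    int main(void)
    {
        DB_ENV *env;
        int ret;

        if ((ret = db_env_create(&env, 0)) != 0) {
            fprintf(stderr, "db_env_create: %s\n", db_strerror(ret));
            return 1;
        }

        /* DB_INIT_LOCK creates the lock region, DB_INIT_MPOOL the shared
         * cache.  Both rely on host-local shared memory, even if the
         * directory below sits on a cluster filesystem. */
        ret = env->open(env, "/var/lib/imap/db",
                        DB_CREATE | DB_INIT_LOCK | DB_INIT_MPOOL, 0);
        if (ret != 0) {
            fprintf(stderr, "env->open: %s\n", db_strerror(ret));
            env->close(env, 0);
            return 1;
        }

        /* ... open databases, acquire page locks, read/write ... */

        env->close(env, 0);
        return 0;
    }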

Best,
Daniel

Scott Adkins wrote:
Okay, okay, I just can't *NOT* say something here :)

First, I disagree with all the statements below.  Cyrus CAN run in an
Active/Active mode, you CAN have multiple servers reading and writing to
the same files, and clustering IS a good way to achieve HA/DR/BC in a
Cyrus environment.  Why do I say that?  Because we have been doing exactly
that for many, many years.

The key is to have a GOOD clustered filesystem technology that does proper
file locking while still providing good performance.  For years, we have
been running our Cyrus IMAP system on a Tru64 Alpha TruCluster system.
Our cluster has 4 members in it, 2 of which serve up Cyrus IMAP/IMSP,
and the other 2 of which accept and deliver mail via Sendmail.  All of
the servers have access to the full set of files across the cluster and
everything "just works".

I would like to address a few specific comments listed below:

 > GFS, Lustre, and other cluster filesystems do file-level locking; in
 > order to properly read and write to the BDB backend, you'd need DB-level
 > locking, which is not possible from a filesystem.

I wrote a lengthy response to this, but when I got to the conclusion, it all
came down to a really simple point.  How is having multiple servers any
different than a single server?  You still have tons of different processes
all trying to acquire read/write locks on the same files.  There is no one
process in Cyrus that opens the database and shares it with all the other
processes running under Cyrus.  How is an IMAP process running on one server
different from an IMAP process running on a different server?  It isn't.
The end result is that file locking is the most important feature Cyrus has
to rely upon... and what if you are using the flat file format for your
mailboxes.db file?  At that point, file locking is the ONLY thing you can
rely upon...
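
To make the flat-file case concrete, here is a minimal sketch of the fcntl() locking discipline involved (the path and the whole-file write lock are illustrative only, not Cyrus's actual code):

    /* Illustrative sketch, NOT Cyrus code: take an exclusive fcntl()
     * lock on a shared file before rewriting it.  The kernel -- or, on
     * a lock-aware cluster filesystem, its distributed lock manager --
     * arbitrates, and it makes no difference whether the competing
     * process is local or on another node. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/var/lib/imap/mailboxes.db", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct flock fl = {
            .l_type   = F_WRLCK,    /* exclusive write lock     */
            .l_whence = SEEK_SET,
            .l_start  = 0,
            .l_len    = 0,          /* 0 = lock the whole file  */
        };

        /* F_SETLKW blocks until every other locker has let go. */
        if (fcntl(fd, F_SETLKW, &fl) < 0) { perror("fcntl"); return 1; }

        /* ... rewrite the file safely ... */

        fl.l_type = F_UNLCK;        /* release the lock */
        fcntl(fd, F_SETLK, &fl);
        close(fd);
        return 0;
    }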

 > IMAP is also a stateful connection; depending on how you set up your
 > cluster, some clients might not handle it gracefully (e.g., Pine).

True, true... stateful it is... but at the same time, what kind of problems
do you see?  When an IMAP connection is opened, the client auths and starts
doing stuff with it.  When it closes the connection, it is done with it.
That is it.  If another connection is opened, presumably, it is because of
a user initiating a new action, and the client will simply do the same thing
again... auth and work.  Most clients keep at least one connection open to
the server at all times.  Even if the client has more than one connection,
and one connection is on one server and another connection is on another
server, there still shouldn't be any problems... the data is the same on
the other end.  Incidentally, we have Pine users in our environment who do
not have problems with our multi-server clustered Cyrus environment.  In
fact, we have not seen any client have problems with it.

Webmail-based clients are a different animal.  It isn't because we are
running multiple servers in the environment, it is because of the
non-stateful nature of the client.  Users don't have problems with anything
from a data consistency standpoint; it is simply a matter of performance.
It is the same issue faced in a single server environment.  Using some kind
of middleware piece to cache IMAP connections is usually how this problem is
solved.

 > As already said in this thread: Cyrus cannot share its spool.
 > No 2 cyrus instances can use the same spool, databases and lockfiles.

That simply isn't true.  However, I must say, there is a reason why NFS
shouldn't be used... it doesn't do proper file locking (though, I am going
to watch for the responses on the NFSv4 thread that somebody asked about).
Without proper file locking, even a single Cyrus server on the backend is
jeopardized by multiple IMAP processes wanting to write to a single DB at
the same time.
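
If you want to check whether a given filesystem actually enforces those locks between nodes, a quick probe along these lines will tell you (again, illustrative only, not a Cyrus tool; run it on one node to hold the lock, then run it again from a second node against the same shared path):

    /* Illustrative probe: hold a write lock, then run a second copy
     * from another node.  If the second copy sees the holder,
     * cross-node locking works; if it thinks the file is free, the
     * filesystem is not safe for a shared Cyrus spool. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        const char *path = argc > 1 ? argv[1] : "/shared/locktest";
        int fd = open(path, O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };

        if (fcntl(fd, F_SETLK, &fl) == 0) {
            printf("lock acquired; holding 60s -- try another node\n");
            sleep(60);
        } else {
            fl.l_type = F_WRLCK;        /* ask who holds the lock */
            fcntl(fd, F_GETLK, &fl);
            if (fl.l_type == F_UNLCK)
                printf("lock refused, yet no holder visible: broken\n");
            else
                printf("lock held elsewhere (pid %ld): locking works\n",
                       (long)fl.l_pid);
        }
        close(fd);
        return 0;
    }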

 > Clustered filesystems don't make any sense for Cyrus, since the
 > application itself doesn't allow simultaneous read/write.

I completely disagree... Clustered filesystems (if they implement proper
file locking techniques) actually SIMPLIFY your setup significantly.  You
don't have to have a complex Murder/Perdition environment with replication,
failover, etc.  You simply run 2 or more servers running on the clustering
filesystem and run things as you would normally expect.  Surprisingly, it
runs quite well.

And finally:

 > Anyway, it has nothing to do with Cyrus, but if anyone does have
 > another application that wants lots of small files on a clustered FS:
 >
 > <http://web.caspur.it/Files/2005/01/10/1105354214692.pdf>
 > <http://polyserve.com/pdf/Caspur_CS.pdf>

Kinda surprising, but it DOES have something to do with Cyrus.  Caspur
did their case study on cluster filesystems with their e-mail environment.
It used Cyrus IMAP and some kind of SMTP server (I think it was Postfix or
something like that), since that was the e-mail environment they had.
Polyserve came out as the big winner.  It is very good reading.  It is a
good case where a clustering filesystem was specifically chosen to handle
their Cyrus e-mail environment.

Incidentally, for those who care, we are planning on migrating our own
Cyrus environment out of Tru64 into RedHat running on Polyserve by the end
of the year (hopefully).

Scott

--On Monday, July 31, 2006 8:05 AM -0500 "Chris St. Pierre" <stpierre@xxxxxxxxxxxxxxxx> wrote:

One of the major problems you'd run into is /var/lib/imap, the config
directory.  It contains, among other things, a Berkeley DB of
information about the mail store.  GFS, Lustre, and other cluster
filesystems do file-level locking; in order to properly read and write
to the BDB backend, you'd need DB-level locking, which is not possible
from a filesystem.  If you tried putting /var/lib/imap on shared
storage, you'd have data corruption and loss in no time.

IMAP is also a stateful connection; depending on how you set up your
cluster, some clients might not handle it gracefully (e.g., Pine).


--On Saturday, July 29, 2006 12:40 PM +0200 Daniel Eckl <deckl@xxxxxxxx> wrote:

As already said in this thread: Cyrus cannot share its spool.
No 2 cyrus instances can use the same spool, databases and lockfiles.

For load balancing you can use a murder setup and for HA you can use
replication.


--On Friday, July 28, 2006 3:52 PM -0500 Rich Graves <rgraves@xxxxxxxxxxxx> wrote:

Clustered filesystems don't make any sense for Cyrus, since the application itself doesn't allow simultaneous read/write. Just use a normal journaling filesystem and fail over by mounting the FS on the backup server. Consider
replication such as DRBD or proprietary SAN replication if you feel you
must physically mirror the storage.

Anyway, it has nothing to do with Cyrus, but if anyone does have another
application that wants lots of small files on a clustered FS:

<http://web.caspur.it/Files/2005/01/10/1105354214692.pdf>
<http://polyserve.com/pdf/Caspur_CS.pdf>



----
Cyrus Home Page: http://asg.web.cmu.edu/cyrus
Cyrus Wiki/FAQ: http://cyruswiki.andrew.cmu.edu
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html