Re: Re: E-Mail Cluster

Jan-Frode Myklebust wrote:
On 2006-08-02, Nicholas Anderson <nicholas@xxxxxxxxxx> wrote:
I'm searching on Google for how to convert from mbox to maildir using sendmail/procmail ....

At my previous job we moved from exim/uw-imap on mbox to exim/dovecot on maildir a couple of years ago. We didn't use a cluster FS, only SCSI-based disk failover, for about 500 users.

Right now I'm setting up a solution similar to yours, trying to support up to 200,000 users on a 5-node cluster using IBM GPFS.

If sendmail is using procmail for final mailbox delivery, I think the configuration change is mainly appending a '/' to the mailbox path, which instructs procmail to do maildir-style delivery. At least that's how I've been doing it in my ~/.procmailrc. See 'man procmailrc'.
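As a minimal ~/.procmailrc sketch (the ~/Maildir path is an assumption; the key point is the trailing '/', which selects maildir-style delivery instead of mbox appending):

```
# Deliver all mail into ~/Maildir/ in maildir format.
# The trailing '/' is what tells procmail to use maildir-style
# delivery (one file per message in tmp/ -> new/) rather than
# appending to an mbox file.
DEFAULT=$HOME/Maildir/

:0
$DEFAULT
```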

I have 3000+ users and something like 70 GB of email, and I'll have to test it very well before doing it on the production server ....

Sure. There are a few mbox-to-maildir converters. You should probably try several of them and verify that they all give the same result.
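For example, a quick post-conversion sanity check might just compare message counts (a sketch only; the '^From ' count is approximate, since unescaped "From " lines in message bodies can inflate it, so treat a mismatch as a prompt to look closer, not proof of corruption):

```shell
# check_conversion MBOX MAILDIR
# Rough sanity check for an mbox -> maildir conversion run:
# compare the mbox "From " separator count against the number of
# message files in the maildir's new/ and cur/ subdirectories.
check_conversion() {
    mbox=$1
    maildir=$2
    # Approximate mbox message count: one "From " separator per message.
    mbox_count=$(grep -c '^From ' "$mbox")
    # A maildir stores one file per message in new/ and cur/.
    maildir_count=$(find "$maildir/new" "$maildir/cur" -type f | wc -l | tr -d ' ')
    echo "mbox=$mbox_count maildir=$maildir_count"
    [ "$mbox_count" -eq "$maildir_count" ]
}
```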

Another thing to check is that your cluster FS handles your load well. My main concern would be how well GPFS performs on maildir-style folders, as most cluster filesystems I've seen are optimized for streaming I/O on large files. If possible, try to keep a lot of file metadata in cache so that you don't have to go to disk every time someone checks their maildir for new messages.
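On Linux, one knob for that (a sketch, not a tuned recommendation; the value is an illustration) is vfs_cache_pressure, which biases the kernel toward keeping dentry/inode caches resident:

```
# /etc/sysctl.conf fragment (assumption: Linux 2.6 mail/storage nodes).
# Values below 100 make the kernel reclaim dentry/inode caches less
# aggressively, so maildir scans hit cached metadata more often.
vm.vfs_cache_pressure = 50
```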


We are running 700,000 users on a 2.5 GFS, 4 nodes, with POP, IMAP (direct access and SquirrelMail) and SMTP. To make things worse, we use NFS between our GFS nodes and our mail servers.

We initially had huge performance problems in our setup, which I wrote about in this message:
http://www.redhat.com/archives/linux-cluster/2006-July/msg00136.html

We ended up bumping the spindle count from 36 to 60 and then to 114, without it making a noticeable difference.

Our main killer was SquirrelMail over IMAP (the solution is primarily a webmail-based one). Our performance problems were solved by the following:
- Removing the folder-size plugin (built-in) and the mail quota plugin (3rd party), which reduced the traffic between the IMAP servers and the storage backend by 40%.
- Implementing an IMAP proxy (www.imapproxy.org). This is giving us a 1 to 14 hit ratio. The storage, which could not keep up previously, is now humming along fine.
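For anyone setting up the same thing, the relevant knobs in up-imapproxy's imapproxy.conf look roughly like this (a sketch; the backend hostname is hypothetical and the cache numbers are illustrative, not our production values):

```
## imapproxy.conf fragment (sketch)
# Backend IMAP server the proxy connects through to:
server_hostname mailstore.example.com
# Port the proxy listens on, and the backend IMAP port:
listen_port 143
server_port 143
# Connection cache: how many cached connections to keep, and how
# long (seconds) an idle cached connection stays reusable. These
# drive the login-reuse hit ratio for webmail clients like
# SquirrelMail, which otherwise log in on every page load.
cache_size 3072
cache_expiration_time 300
```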

Our initial mistake was to try to optimise at the FS layer (there weren't any real performance optimizations to be made in our setup) and throw hardware at the problem, instead of suspecting and optimizing our application. Despite GFS not being designed for lots of small files, and not being recommended for use with NFS, with the above changes it performs more than adequately. We hope to see another performance gain once we get rid of the NFS layer and have our mail servers access the GFS directly.

Riaan
begin:vcard
fn:Riaan van Niekerk
n:van Niekerk;Riaan
org:Obsidian Systems;Obsidian Red Hat Consulting
email;internet:riaan@xxxxxxxxxxxxxx
title:Systems Architect
tel;work:+27 11 792 6500
tel;fax:+27 11 792 6522
tel;cell:+27 82 921 8768
x-mozilla-html:FALSE
url:http://www.obsidian.co.za
version:2.1
end:vcard

--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
