Re: Cyrus vs Dovecot


 



Mathieu Kretchner <mathieu.kretchner@xxxxxxxxxxxxxxx> wrote:
> Ian G Batten wrote:
>> We have mailboxes.db and the metapartitions on ZFS, along with the zone
>> itself.  The pool is drawn from space on four 10000rpm SAS drives
>> internal to the machine:
To give a (hopefully) comparable comparison:
We also have our meta files and spool files on ZFS, on mirrored pools:
# zpool status
  pool: cyrus
 state: ONLINE
 scrub: resilver completed with 0 errors on Sun May 25 12:17:46 2008
config:

        NAME                                       STATE     READ WRITE CKSUM
        cyrus                                      ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c6t600D0230006B66680C50AB4F92F61000d0  ONLINE       0     0     0
            c6t600D0230006C1C4C0C50BE4DFE511B00d0  ONLINE       0     0     0

errors: No known data errors
  pool: mail
 state: ONLINE
 scrub: resilver completed with 0 errors on Sun May 25 01:05:02 2008
config:

        NAME                                       STATE     READ WRITE CKSUM
        mail                                       ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c6t600D0230006B66680C50AB0F36ADF100d0  ONLINE       0     0     0
            c6t600D0230006C1C4C0C50BE57396E9F00d0  ONLINE       0     0     0
          mirror                                   ONLINE       0     0     0
            c6t600D0230006B66680C50AB5675F91300d0  ONLINE       0     0     0
            c6t600D0230006C1C4C0C50BE16FF1FE200d0  ONLINE       0     0     0

errors: No known data errors

"cyrus" is our log pool, "mail" our imap spool pool.

IO is mostly write:
# zpool iostat mail 2
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
mail        2.08T  6.02T    226    163  1.36M  1.67M
mail        2.08T  6.02T    358     10  1.35M  94.4K
mail        2.08T  6.02T    234    599  1.08M  10.0M
mail        2.08T  6.02T     77      0   425K  3.98K
mail        2.08T  6.02T     85    306   484K  3.39M
mail        2.08T  6.02T     95      8   405K  75.6K
mail        2.08T  6.02T    107      6   798K  47.8K
mail        2.08T  6.02T     73    232   281K  2.30M
mail        2.08T  6.02T     77      2   304K  9.95K
mail        2.08T  6.02T     66    469   254K  5.84M
mail        2.08T  6.02T     83      4   409K  17.9K

As with Ian's setup, most read requests are serviced from the ARC. We keep BOTH kinds of data (meta and spool) on this ZFS pool; however, we defined an extra ZFS filesystem for the metadata so we can take distinct snapshots. cyrus.header remains on the imap spool partition.
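As a rough sketch of that layout (the pool and filesystem names here are hypothetical, not taken from our configuration), a child filesystem for the Cyrus metadata can be created and snapshotted independently of the spool:

```shell
# Hypothetical names; adjust to your own pool layout.
# Create a separate filesystem for Cyrus metadata inside the "mail" pool:
zfs create mail/meta

# After pointing the Cyrus metapartition settings at it, the metadata
# can be snapshotted on its own schedule, independent of the spool:
zfs snapshot mail/meta@daily
zfs list -t snapshot -r mail/meta
```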

Raw disk I/O is different, as ZFS pulls up to "recordsize" bytes from disk per request (128K by default).
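For reference (a sketch; the filesystem name is hypothetical), the recordsize property can be inspected and tuned per filesystem:

```shell
# Show the current recordsize of a filesystem (128K unless changed):
zfs get recordsize mail/spool

# A smaller recordsize can reduce read amplification for small-message
# workloads; note it only affects files written after the change:
zfs set recordsize=32K mail/spool
```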
Load is 0.47 at the moment, with 1355 imapd processes, 10 lmtpd processes (limited by the delivering gateway), and 34 pop3d processes. The machine is a two-processor dual-core Opteron, so 4 cores are available. It has 20 GB RAM, and the ZFS ARC uses:
# kstat zfs:0:arcstats:size
module: zfs                             instance: 0
name:   arcstats                        class:    misc
        size                            9308832256
That is roughly 9 GB of ZFS file cache.
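The size kstat reports is in bytes; as a quick sanity check of the conversion (the value is copied from the output above):

```shell
# Convert the arcstats size value from bytes to decimal gigabytes.
awk 'BEGIN { printf "%.1f GB\n", 9308832256 / 1e9 }'
# prints: 9.3 GB
```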
Hope this helps you a little bit.
Pascal
----
Cyrus Home Page: http://cyrusimap.web.cmu.edu/
Cyrus Wiki/FAQ: http://cyrusimap.web.cmu.edu/twiki
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html


