On Mon, October 3, 2011 11:13 pm, Pascal Gienger wrote:
> On 03/10/2011 23:09, Vincent Fox wrote:
>
>> On 10/03/2011 12:58 PM, Josef Karliak wrote:
>>
>>> Hi there,
>>> what filesystem type do you use for Cyrus imapd? I use SLES11 x64 (or
>>> openSUSE 11.4) with ReiserFS 3.6, so far so good. But couldn't it be
>>> better? :)
>>>
>>
>> ZFS, which unfortunately is not much of an option for you Linux folks,
>> I think. ZFS works great with thousands of users: no worries about
>> getting "inodes" or partitions right, and snapshots make keeping weeks
>> of recovery points online in the pool trivial and cheap.
>
> I second this.
> Roughly 51,000,000 files on one (mirrored) multipathed Fibre Channel SAN
> volume with no performance bottlenecks. 64 GB RAM per node, approx. 40 GB
> ARC (ZFS cache). Solaris 10u9, kernel 147441-03, 64-bit x64.

I third this.

45 million messages (down from 49M+ after a summer cleanup) in nine
filesystems spread over four ZFS pools, attached to a single Solaris 10
server (dual quad-core Intel 2.8 GHz, although psrinfo reports 16 virtual
processors; 72 GB RAM).

We keep in-spool daily snapshots for 120 days (which adds roughly 50% to
the used space).

One small remark: on occasion we have run into what looks like a ZFS bug
and have been discussing it in private communication with an Oracle/Sun
performance specialist.

Details for Pascal and Vincent: at times (under moderate to heavy I/O
load) our filesystem performance drops to a very low level. Immediate
relief can be had by deleting a few ZFS snapshots and/or breaking one or
more of the mirrors. It sounds, smells and feels like an allocation map
issue, which is hinted at in posts on a couple of OpenSolaris discussion
forums.


Eric Luyten, Computing Centre VUB/ULB.

----
Cyrus Home Page: http://www.cyrusimap.org/
List Archives/Info: http://lists.andrew.cmu.edu/pipermail/info-cyrus/
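
For anyone wanting to script the snapshot retention and pruning Eric
describes, here is a minimal sketch. It is not Eric's actual tooling: the
dataset name tank/mail, the daily-YYYY-MM-DD snapshot naming convention
and the prune() helper are all illustrative assumptions. It relies only
on the stock zfs(1M) subcommands "zfs list" and "zfs destroy".

# Hypothetical sketch: destroy dated snapshots older than 120 days.
# Assumes daily snapshots are named like tank/mail@daily-2011-10-03
# (an assumed convention, not necessarily Eric's) and that the zfs
# binary is on PATH.
import datetime
import subprocess

KEEP_DAYS = 120      # retention window mentioned in the post
PREFIX = "daily-"    # assumed snapshot name prefix

def prune(dataset):
    """Destroy snapshots of `dataset` older than KEEP_DAYS days."""
    cutoff = datetime.date.today() - datetime.timedelta(days=KEEP_DAYS)
    # List all snapshot names under the dataset, one per line, no header.
    out = subprocess.check_output(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
         "-r", dataset]).decode()
    for name in out.splitlines():
        if "@" not in name:
            continue
        snap = name.split("@", 1)[1]
        if not snap.startswith(PREFIX):
            continue
        stamp = datetime.datetime.strptime(
            snap[len(PREFIX):], "%Y-%m-%d").date()
        if stamp < cutoff:
            # Destroying a snapshot frees the blocks unique to it.
            subprocess.check_call(["zfs", "destroy", name])

if __name__ == "__main__":
    prune("tank/mail")   # hypothetical pool/filesystem name

Running the same loop with a shorter KEEP_DAYS would reproduce, in an
automated way, the "delete a few ZFS snapshots" relief Eric reports
applying by hand.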