Michael Loftis wrote:

> --On April 19, 2006 12:47:26 PM -0500 John Wade <jwade@xxxxxxxxxx> wrote:
>
>> We run cyrus on RedHat on a CX600. For our administrative users, we put
>> the whole thing on a nine-disk fibre channel RAID group, split into 6
>> luns: 5 for mail spool partitions and 1 for all the other metadata.
>> Total mail spool size is 300GB. We average about 300 concurrent users.
>> The CX600 is heavily used by other applications. I/O performance has
>> never been an issue as far as I can tell. The large write caches in the
>> CX600 solve the write problem, and the host memory caching seems to
>> solve the read problem. Your mileage may vary. We couldn't do a direct
>> cyrus SAN performance comparison because when we moved to this box, we
>> migrated to the SAN and new server hardware at the same time, but tests
>> on other systems where we just moved storage onto the array saw a huge
>> performance boost vs. locally attached SCSI RAID 5 storage with no
>> write cache enabled.

If you're really building 9-disk RAID 5 RAID groups, you might see a
pretty big performance hit on a drive failure -- until the data has been
rebuilt on the hot spare, of course. You might want to test this before
going this way.

--
Ben Carter
University of Pittsburgh/CSSD
bhc@xxxxxxxx
412-624-6470

----
Cyrus Home Page: http://asg.web.cmu.edu/cyrus
Cyrus Wiki/FAQ: http://cyruswiki.andrew.cmu.edu
List Archives/Info: http://asg.web.cmu.edu/cyrus/mailing-list.html
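The degraded-mode penalty Ben warns about falls out of RAID 5's parity math: a block on the failed disk can only be served by reading every surviving block in the stripe and XOR-ing them together, so in a 9-disk group one lost read turns into eight. A minimal sketch of that reconstruction (disk count and block size here are illustrative assumptions, not CX600 or Cyrus specifics):

```python
# Sketch of a degraded RAID 5 read: rebuilding a failed disk's block
# by XOR-ing all surviving blocks in the stripe. Illustrative only --
# 9 disks (8 data + 1 parity per stripe) and a tiny block size are
# assumptions for the demo, not anything from the CX600 setup above.

from functools import reduce

NUM_DISKS = 9  # assumed group size, matching the thread's example
BLOCK = 4      # tiny block size for the demo

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together, byte by byte."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# One stripe: 8 data blocks plus their XOR parity block.
data = [bytes([i] * BLOCK) for i in range(NUM_DISKS - 1)]
stripe = data + [xor_blocks(data)]

# Simulate disk 3 failing: rebuild its block from the other 8 disks.
failed = 3
survivors = [blk for i, blk in enumerate(stripe) if i != failed]
rebuilt = xor_blocks(survivors)

assert rebuilt == stripe[failed]
print("reads needed to serve one degraded block:", len(survivors))
```

The same arithmetic is why rebuild onto the hot spare is slow and I/O-hungry: every stripe on the replacement disk costs a full-width read of the other eight members, on top of normal traffic.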