[off list]

>>>> Thanks for your email, Ross. So, reading all the stuff here, I'm
>>>> really concerned about moving all our data to such a system. The
>>>> reason we're moving is mainly, but not only, the longish fsck that
>>>> UFS (FreeBSD) needs after a crash. XFS seemed to fit perfectly, as
>>>> I've never had fsck issues with it. However, this discussion seems
>>>> to be changing my mind. So, what would be an alternative (if
>>>> possible without hardware RAID controllers, as already mentioned)?
>>>> ext3 is not; we have long fsck runs there, too. Even ext4 doesn't
>>>> seem too good in this area...
>>> I thought 3ware would have been good. Their cards have been praised
>>> for quite some time... have things changed? What about Adaptec?
>> Well, the recommended LSI is fine with me, as it's my favorite
>> vendor, too. I abandoned Adaptec quite a while ago, and my opinion
>> was confirmed when the OpenBSD vs. Adaptec discussion came up.
>> However, the question of the hardware RAID vendor is completely
>> independent of the file system discussion.
>
> Oh yeah, it is. If you use hardware RAID, you do not need barriers
> and can afford to turn them off for better performance, or use LVM
> for that matter.

Hi, this is off list: could you please explain the LVM vs. barrier
thing to me? AFAIU, one should turn off the write caches on the disks
(in any case) and, if there is a BBU-backed RAID controller, use its
cache but turn off barriers (roughly what I sketch at the bottom of
this mail). When does LVM come into play here?

Thanks in advance! :)

>> I re-read the XFS FAQ on these issues; it seems we will have to set
>> up two machines in the lab, one purely software RAID driven and one
>> with a hardware RAID controller configured as JBOD, and then
>> benchmark and stress-test the setup.
>
> JBOD? You plan to use software RAID with that? Why?!

Mainly due to better manageability and monitoring (see the second
sketch at the bottom). Honestly, the proprietary tools are not the
best.

Timo
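
P.S. To make the cache/barrier part concrete, here is a rough,
untested sketch of my understanding; device names and the mount point
are made up, and it assumes XFS sitting on a BBU-backed controller:

    # Turn off the on-disk write caches (hypothetical SATA devices);
    # for SAS disks, "sdparm --clear WCE" does the same job.
    hdparm -W0 /dev/sda
    hdparm -W0 /dev/sdb

    # With the controller's BBU-backed cache doing the buffering,
    # mount XFS without barriers via its 'nobarrier' mount option.
    mount -o noatime,nobarrier /dev/sdc1 /data

    # Or persistently in /etc/fstab:
    # /dev/sdc1  /data  xfs  noatime,nobarrier  0 0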
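
P.P.S. And this is the kind of manageability/monitoring I mean with
software RAID on JBOD; again only a sketch, with made-up device names:

    # Build a RAID5 array from four disks the controller exports as JBOD.
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]1

    # Quick health checks from the shell, no vendor tool needed.
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # Let mdadm watch all arrays and mail root on failures or
    # degraded arrays.
    mdadm --monitor --scan --daemonise --mail=root@localhost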