On Dec 28, 2009, at 12:07 PM, Tom Bishop <bishoptf@xxxxxxxxx> wrote:
Barriers expose the poor performance of cheap hard drives. They provide assurance that all the data leading up to the barrier, and the barrier I/O itself, is committed to media. This means the barrier does a disk cache flush first; if the drive supports FUA (Force Unit Access, i.e., bypass the drive's cache), it then issues the I/O request with FUA set, and if the drive doesn't support FUA it issues another cache flush. It's this double flush that hurts performance the most. A typical fsync() call only assures that data is flushed from memory to the drive; it makes no assurance that the drive itself has flushed its cache to disk, which is where the concern lies.

Currently in RHEL/CentOS the LVM (device-mapper) layer doesn't know how to propagate barriers to the underlying devices, so it filters them out; as a result, barriers are only supported on whole drives or raw partitions. This is fixed in current mainline kernels but has yet to be backported to the RHEL kernels.

There are a couple of ways to avoid the barrier penalty. One is to have an NVRAM-backed write cache, either on the controller or as a separate pass-through device. The other is to use a separate log device on an SSD that has an NVRAM cache (newer ones have capacitor-backed cache) or on a standalone NVRAM drive.

-Ross
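[Editor's note: a minimal C sketch, not from Ross's post, illustrating the fsync() point above. The file path is made up; the behavior comments reflect the barrier discussion in the post, not a guarantee of any particular filesystem configuration.]

/* Write a record and fsync() it.  fsync() only pushes dirty pages out of
 * the kernel page cache to the drive; whether the drive's volatile write
 * cache is then flushed to the platters depends on the filesystem issuing
 * a barrier/flush (e.g. ext3/ext4 with barrier=1) and on that barrier
 * actually reaching the device -- which it does not when the device-mapper
 * layer filters barriers out, as described above. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/var/tmp/important.dat";   /* hypothetical path */
    const char buf[] = "critical record\n";

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    if (write(fd, buf, sizeof(buf) - 1) != (ssize_t)(sizeof(buf) - 1)) {
        perror("write");
        close(fd);
        return EXIT_FAILURE;
    }

    /* Data is now handed to the drive, but possibly only to its cache. */
    if (fsync(fd) < 0) { perror("fsync"); close(fd); return EXIT_FAILURE; }

    close(fd);
    return EXIT_SUCCESS;
}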