ext4, barrier, md/RAID1 and write cache

I've been having some NFS performance issues, and have been
experimenting with the server filesystem (ext4) to see if that is a factor.

The setup is like this:

(Debian 6, kernel 2.6.39)
2x SATA drive (NCQ, 32MB cache, no hardware RAID)
md RAID1
LVM
ext4
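For reference, the state of this stack can be inspected like so (the device, VG, and LV names below are placeholders I've made up; substitute your own):

```shell
# Confirm the RAID1 array is healthy and both mirrors are active
cat /proc/mdstat

# Confirm the logical volume backing the export (hypothetical vg0/lv_export)
lvs vg0

# Show the ext4 mount options actually in effect
mount | grep ext4

# Report the current write-cache setting on each drive
hdparm -W /dev/sda /dev/sdb
```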

a) If I use data=ordered,barrier=1 and `hdparm -W 1' on the drive, I
observe write performance over NFS of 1MB/sec (unpacking a big source
tarball)

b) If I use data=writeback,barrier=0 and `hdparm -W 1' on the drive, I
observe write performance over NFS of 10MB/sec

c) If I just use the async option on NFS, I observe up to 30MB/sec
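For the record, the three configurations were set roughly as follows (mount point, devices, and export line are placeholders; note that on some kernels the data= mode cannot be changed via remount, so a full umount/mount may be needed):

```shell
# Config (a): ordered journaling, barriers on, drive write cache on
mount -o remount,data=ordered,barrier=1 /export
hdparm -W1 /dev/sda /dev/sdb

# Config (b): writeback journaling, barriers off -- fast, but unsafe
# against corruption on power loss with volatile drive caches
mount -o remount,data=writeback,barrier=0 /export

# Config (c): async NFS export -- the server acknowledges writes before
# they reach stable storage (hypothetical /etc/exports entry):
#   /export  192.168.0.0/24(rw,async,no_subtree_check)
exportfs -ra
```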

I believe (b) and (c) are not considered safe against filesystem
corruption, so I can't use them in practice.

Can anyone suggest where I should direct my efforts to lift performance?
E.g.:

- do SCSI drives handle barriers/cache flushes better, i.e. would buying
SCSI drives solve the problem while keeping config (a)?

- should I do away with md RAID and consider btrfs which does RAID1
within the filesystem itself?

- or must I just use option (b) and make it safer with a battery-backed
write cache?

- or is there any md or lvm issue that can be tuned or fixed by
upgrading the kernel?
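On the last point, one thing worth checking is whether the flush/barrier
requests actually survive the md and LVM layers, since older device-mapper
versions silently dropped them. A rough way to look (log messages vary by
kernel version, so treat the grep pattern as a starting point):

```shell
# The kernel usually logs at mount time if barriers were disabled or
# are unsupported somewhere in the block stack
dmesg | grep -iE 'barrier|flush'

# Double-check the mount options ext4 is really running with
grep ' /export ' /proc/mounts   # /export is a placeholder mount point
```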
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

