On Monday 08 December 2008 Martin Steigerwald wrote:
> Hi!
>
> I got curious about the recent discussions about the write barrier
> feature[1][2].
>
> Thus I did my own benchmark this evening. Since I use XFS, I tested
> with XFS only for now: write barrier + write cache, no barrier +
> write cache, no write barrier + no write cache. I just did tar -xf
> linux-2.6.27.tar.bz2 and rm -rf linux-2.6.27.

I disabled write cache and barriers for the XFS filesystems on my TP 42,
shambhala. While doing that I ran some further tests. This time I also
include the IO scheduler settings; they are the defaults and were the
same during my previous tests.

The write cache gives about 13 seconds of benefit on the barrier-enabled
XFS for the tar -xf linux-2.6.27.tar.bz2 workload (1:51,58 versus
1:38,84 total for tar), but no benefit at all for the rm -rf
linux-2.6.27 workload! And again nobarrier with no write cache wins.

This is on a heavily used /home XFS - I know it should have more free
space:

martin@shambhala:~ -> LANG=C df -hT /home/
Filesystem    Type    Size  Used Avail Use% Mounted on
/dev/sda5      xfs    112G  107G  5.1G  96% /home

It has been grown two times - hence 6 AGs instead of the default of 4:

shambhala:~> xfs_info /home
meta-data=/dev/sda5              isize=256    agcount=6, agsize=4883256 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=29299536, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

IO scheduler settings:

martin@shambhala:~> cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]
martin@shambhala:~> grep "" /sys/block/sda/queue/iosched/*
/sys/block/sda/queue/iosched/back_seek_max:16384
/sys/block/sda/queue/iosched/back_seek_penalty:2
/sys/block/sda/queue/iosched/fifo_expire_async:250
/sys/block/sda/queue/iosched/fifo_expire_sync:123
/sys/block/sda/queue/iosched/quantum:4
/sys/block/sda/queue/iosched/slice_async:40
/sys/block/sda/queue/iosched/slice_async_rq:2
/sys/block/sda/queue/iosched/slice_idle:6
/sys/block/sda/queue/iosched/slice_sync:100

Write barriers and no write cache:

shambhala:~> hdparm -W /dev/sda

/dev/sda:
 write-caching =  0 (off)

shambhala:> cat /proc/mounts | egrep "(/ |home)"
rootfs / rootfs rw 0 0
/dev/disk/by-uuid/7fcd9766-bf1a-426a-8a07-2c3e0c510898 / xfs rw,relatime,attr2,noquota 0 0
/dev/sda5 /home xfs rw,relatime,attr2,logbufs=8,logbsize=256k,noquota 0 0

martin@shambhala:~/Zeit> sync; time tar -xf ~/Linux/Kernel/Mainline/linux-2.6.27.tar.bz2 ; time sync
tar -xf ~/Linux/Kernel/Mainline/linux-2.6.27.tar.bz2  36,71s user 4,22s system 36% cpu 1:51,58 total
sync  0,00s user 0,05s system 0% cpu 12,689 total

martin@shambhala:~/Zeit> sync; time rm -rf linux-2.6.27 ; time sync
rm -rf linux-2.6.27  0,06s user 3,90s system 17% cpu 22,906 total
sync  0,00s user 0,01s system 6% cpu 0,103 total

Write barriers and write cache:

shambhala:~> hdparm -W1 /dev/sda

/dev/sda:
 setting drive write-caching to 1 (on)
 write-caching =  1 (on)

martin@shambhala:~/Zeit> sync; time tar -xf ~/Linux/Kernel/Mainline/linux-2.6.27.tar.bz2 ; time sync
tar -xf ~/Linux/Kernel/Mainline/linux-2.6.27.tar.bz2  34,38s user 3,91s system 38% cpu 1:38,84 total
sync  0,00s user 0,04s system 0% cpu 10,493 total

martin@shambhala:~/Zeit> sync; time rm -rf linux-2.6.27 ; time sync
rm -rf linux-2.6.27  0,07s user 3,98s system 17% cpu 23,511 total
sync  0,00s user 0,01s system 5% cpu 0,126 total

No write barriers and no write cache:

shambhala:> vim fstab
shambhala:> mount -o remount /
shambhala:> mount -o remount /home
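The fstab itself is not quoted here; the two relevant entries should
look roughly like this (just a sketch - the mount option strings are
copied from the /proc/mounts output below, the remaining fields are
assumed):

# /etc/fstab (sketch): nobarrier added to both XFS filesystems; dump/pass fields assumed
/dev/disk/by-uuid/7fcd9766-bf1a-426a-8a07-2c3e0c510898 / xfs rw,relatime,attr2,nobarrier,noquota 0 1
/dev/sda5 /home xfs rw,relatime,attr2,nobarrier,logbufs=8,logbsize=256k,noquota 0 2

Compared to the barrier runs above, the only change is the added
nobarrier option on both filesystems.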
|home)" rootfs / rootfs rw 0 0 /dev/disk/by-uuid/7fcd9766-bf1a-426a-8a07-2c3e0c510898 / xfs rw,relatime,attr2,nobarrier,noquota 0 0 /dev/sda5 /home xfs rw,relatime,attr2,nobarrier,logbufs=8,logbsize=256k,noquota 0 0 martin@shambhala:~/Zeit> sync; time tar -xf ~/Linux/Kernel/Mainline/linux-2.6.27.tar.bz2 ; time sync tar -xf ~/Linux/Kernel/Mainline/linux-2.6.27.tar.bz2 30,36s user 3,31s system 48% cpu 1:08,94 total sync 0,00s user 0,08s system 0% cpu 17,462 total martin@shambhala:~/Zeit> sync; time rm -rf linux-2.6.27 ; time sync rm -rf linux-2.6.27 0,07s user 3,87s system 20% cpu 19,172 total sync 0,00s user 0,01s system 4% cpu 0,142 total martin@shambhala:~> date Mo 8. Dez 22:38:06 CET 2008 Ciao, -- Martin 'Helios' Steigerwald - http://www.Lichtvoll.de GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7