On Friday, 20 December 2013, Kevin Richter wrote:
> >> $ cat /sys/block/md2/md/stripe_cache_size
> >> 256
> >
> > 256 is the default and it is way too low. This is limiting your write
> > throughput. Increase this to a minimum of 1024, which will give you a
> > 20MB stripe cache buffer. This should become active immediately. Add
> > it to a startup script to make it permanent.
> >
> > $ echo 256 > /sys/block/md2/md/stripe_cache_size
> > $ time cp -a /olddisk/testfolder /6tb/foo1/
> > real 25m38.925s
> > user 0m0.595s
> > sys 1m23.182s
> >
> > $ echo 1024 > /sys/block/md2/md/stripe_cache_size
> > $ time cp -a /olddisk/testfolder /raid/foo2/
> > real 7m32.824s
> > user 0m0.438s
> > sys 1m6.759s
> >
> > $ echo 2048 > /sys/block/md2/md/stripe_cache_size
> > $ time cp -a /olddisk/testfolder /raid/foo3/
> > real 5m32.847s
> > user 0m0.418s
> > sys 1m5.671s
> >
> > $ echo 4096 > /sys/block/md2/md/stripe_cache_size
> > $ time cp -a /olddisk/testfolder /raid/foo4/
> > real 5m54.554s
> > user 0m0.437s
> > sys 1m6.268s
>
> The difference is really amazing! So 2048 seems to be the best choice.
> 60GB in 5.5 minutes is about 180MB/s. That sounds a bit high, doesn't it?
> The RAID only consists of 5 SATA disks with 7200rpm.

I wonder why the kernel ships defaults that everyone repeatedly recommends
changing or increasing. Has anyone tried to file a bug report for the
stripe_cache_size case?

-- 
Arkadiusz Miśkiewicz, arekm / maven.pl
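For reference, the 20MB figure quoted above is consistent with the usual rule
of thumb that the md stripe cache costs roughly stripe_cache_size x 4 KiB (one
page) per member disk: with 5 disks, 1024 entries come to about 20MB and 2048
to about 40MB.

As for making the setting permanent, a rough sketch of the "startup script"
idea mentioned above (assuming md2 is the array and 2048 the value chosen from
the timings; the file names here are only examples, untested):

    # e.g. appended to /etc/rc.local or an equivalent local startup script
    # (md2 and the value 2048 are taken from the tests quoted above)
    echo 2048 > /sys/block/md2/md/stripe_cache_size

    # or as a udev rule, e.g. /etc/udev/rules.d/60-md-stripe-cache.rules,
    # so the value is reapplied whenever an md array appears
    SUBSYSTEM=="block", KERNEL=="md*", ACTION=="add|change", TEST=="md/stripe_cache_size", ATTR{md/stripe_cache_size}="2048"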