Re: XFS tune to adaptec ASR71605 [SOLVED]

On Tue, 6 May 2014 15:40:37 +0100 (BST)
Steve Brooks <steveb@xxxxxxxxxxxxxxxx> wrote:

Yes, the write speed is very poor. I am running "bonnie++" again with
your params but am not sure how long it will take.

 bonnie++ -f -d ./ -n 50

Won't this be different for our two machines, as it seems to generate
other params depending on the size of the RAM? All ours have 64G RAM.


You MUST test with a dataset bigger than RAM, else you're mostly testing
your RAM speed :) If you've got 64 GB, by default bonnie will test with
128 GB of data. The small size probably explains the very fast seek
speed... You're seeking in the RAM cache :)
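If you'd rather not rely on the autodetection, you can pin the dataset
size yourself with -s; a sketch for a 64 GB box (the mount point here is
just an example):

 bonnie++ -f -d /mnt/raid -s 128g -n 50 -u root

That guarantees a twice-RAM dataset whatever bonnie++ detects.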


I disabled the write cache on the controller as there is no ZMM flash
backup module, and that seems to be the advised setup. I could enable it
and run another test to see whether it is contributing to the poor
write performance. I do have "arcconf" on all our Adaptec RAID
machines.

Modern RAIDs need write cache or they perform abysmally. Do yourself a
favour and buy a ZMM. Without write cache it will be so slow as to be
nearly unusable, really. Did you see the numbers? Your RAID is more
than 12x slower than mine... actually slower than a single disk! You'll
simply never manage to fill it up at these speeds.
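If you want to try with the cache enabled in the meantime, arcconf can
show and flip the logical drive cache mode; something along these lines
(the controller and logical drive numbers here are just examples, check
the GETCONFIG output first):

 arcconf GETCONFIG 1 LD
 arcconf SETCACHE 1 LOGICALDRIVE 0 WB

WB forces write-back (unsafe on power loss without a ZMM); WBB only
enables write-back when a battery or flash module is present.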

I have not tuned the I/O scheduler, read-ahead, or queue length. I
guess I can try this:

echo noop > /sys/block/sda/queue/scheduler
echo 1024 > /sys/block/sda/queue/nr_requests
echo 65536 > /sys/block/sda/queue/read_ahead_kb

Will this need to be done after every reboot? I guess it could go in
"/etc/rc.local" ?


Yep. You can tweak the settings and try various configurations.
However these work fine for me in most cases (particularly the noop
scheduler). Of course replace sda with the RAID array device or you may
end up tuning your boot drive instead :)
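For instance (with sdb standing in for your actual array device):

 # appended to /etc/rc.local: re-apply block layer tuning at boot
 echo noop > /sys/block/sdb/queue/scheduler
 echo 1024 > /sys/block/sdb/queue/nr_requests
 echo 65536 > /sys/block/sdb/queue/read_ahead_kb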


Thanks loads! As you stated, it seems to have all been down to having the write cache disabled. The new results from "bonnie++" show that sequential writes with the write cache enabled have massively improved, to
1642324 K/s (roughly 1.6 GB/s)...


[root@sraid2v tmp]# bonnie++ -f -d ./ -n 50 -u root
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine  Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
sraid2v  126G           1642324  96 704219  51           2005711  81 529.8  36
Latency                 29978us     131ms             93836us   47600us
Version  1.96 ------Sequential Create------ --------Random Create--------
sraid2v       -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
        files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
           50 45494  75 +++++ +++ 58252  84 51635  80 +++++ +++ 53103  81
Latency       15964us     191us      98us      80us      14us      60us
1.96,1.96,sraid2v,1,1399398521,126G,,,,1642324,96,704219,51,,,2005711,81,529.8,36,50,,,,,45494,75,+++++,+++,58252,84,51635,80,+++++,+++,53103,81,,29978us,131ms,,93836us,47600us,15964us,191us,98us,80us,14us,60us


Thanks again, Emmanuel, for your help!


Steve
_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
