Hi,
thanks for your suggestions.
> My other advice: generally, don't use the cfq scheduler with a RAID
> controller, as it defeats the whole purpose of the RAID cache and its
> command reordering abilities. Use noop instead, and deepen the queue:
>
> echo "noop"> /sys/block/sdc/queue/scheduler
> echo 512> /sys/block/sdc/queue/nr_requests
Is there a way to make these settings persistent for this disk?
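Something like a udev rule, maybe? Just a guess on my side (file name and
device match made up, untested):

# e.g. /etc/udev/rules.d/60-raid-queue.rules
ACTION=="add|change", KERNEL=="sdc", ATTR{queue/scheduler}="noop", ATTR{queue/nr_requests}="512"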
> Don't hesitate to enlarge the read-ahead cache tremendously, too. I
> generally use about 512 to 1024 sectors per drive as a rule of
> thumb, so a 4-drive array will use 2048 to 4096:
>
> blockdev --setra 4096 /dev/sdc
Is there also a way to make this persistent?
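If the udev idea above works, I guess the read-ahead could be pinned the
same way, since 4096 sectors of 512 bytes should be 2048 kB:

ACTION=="add|change", KERNEL=="sdc", ATTR{queue/read_ahead_kb}="2048"

Otherwise I'd just re-run blockdev --setra 4096 /dev/sdc from a boot
script such as /etc/rc.local.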
> You should see a 100% write/read speed improvement with these
> parameters.
That would be great.
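I'll measure before and after with a quick dd run (rough test only, the
file path is just an example from my setup):

# sequential write, forcing data to disk at the end
dd if=/dev/zero of=/mnt/data/ddtest bs=1M count=4096 conv=fdatasync
# drop caches, then read back sequentially
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/data/ddtest of=/dev/null bs=1M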
Thanks,
Stefan
On 26.10.2010 13:25, Emmanuel Florac wrote:
> On Tue, 26 Oct 2010 13:03:07 +0200,
> Stefan Priebe - Profihost AG <s.priebe@xxxxxxxxxxxx> wrote:
>
>> It is a 9650SE-8LPML with firmware FE9X 3.06.00.003.
>
> Augh, this firmware is antique :) The latest is 4.10.xx.xx. Anyway,
> don't use firmware older than 3.08.xx.xx; there were some nasty bugs.
>
>> So you mean I should upgrade to the 4.x firmware?
>
> Definitely. It will greatly improve performance, too. Simply download
> the firmware file, extract it, and flash the controller with tw_cli:
>
> tw_cli /cXX update fw=prom0006.img
>
> (the firmware is always in the prom00xx.img file).
>
>> Do I then have to do a filesystem repair? Or just wait and see if the
>> error occurs again?
>
> No, the filesystem will be fine. However, you should start a RAID
> scrub with
>
> tw_cli /cXX/uXX start verify
>
> This will rebuild the parity in the new 4.x format (faster writes)
> and help detect any hardware fault.
>
> My other advice: generally, don't use the cfq scheduler with a RAID
> controller, as it defeats the whole purpose of the RAID cache and its
> command reordering abilities. Use noop instead, and deepen the queue:
>
> echo "noop" > /sys/block/sdc/queue/scheduler
> echo 512 > /sys/block/sdc/queue/nr_requests
>
> Don't hesitate to enlarge the read-ahead cache tremendously, too. I
> generally use about 512 to 1024 sectors per drive as a rule of
> thumb, so a 4-drive array will use 2048 to 4096:
>
> blockdev --setra 4096 /dev/sdc
>
> You should see a 100% write/read speed improvement with these
> parameters.