rkj@xxxxxxxxxxxx wrote:
I am working with hardware RAID0 using an LSI 9271-8i + 8 SSDs...
... With Direct IO, the sequential writes are around
3.5 GB/s but I noticed a drop-off in sequential reads for smaller record
sizes.
----
1) Are you using the realtime subvolume?
2) Are you using an empty filesystem with the maximum log size?
3) What are you using to do the writes, "dd"?
4) Have you tried pre-allocating the space and running xfs_fsr on the
file, to get it down to a single extent,
then running dd with the conv=nocreat,notrunc options?
Those are some things to try for maximum throughput.
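A minimal sketch of item 4, assuming GNU dd and the xfsprogs tools are installed; the file path and size here are placeholders (a real benchmark would use a multi-GB file on the RAID-backed XFS mount):

```shell
# TESTFILE and SIZE_MB are placeholders; point TESTFILE at your XFS
# mount and use tens of GB for a real run.
TESTFILE="${TESTFILE:-./ddtest.bin}"
SIZE_MB=64

# Pre-allocate the full size up front (fall back to truncate if
# fallocate is unsupported on this filesystem).
fallocate -l "${SIZE_MB}M" "$TESTFILE" 2>/dev/null ||
    truncate -s "${SIZE_MB}M" "$TESTFILE"

# Defragment down to (ideally) a single extent -- XFS only, so skip
# silently on other filesystems.
xfs_fsr -v "$TESTFILE" 2>/dev/null || true

# Rewrite in place: nocreat/notrunc preserve the existing allocation,
# oflag=direct bypasses the page cache (retry without it if O_DIRECT
# is unsupported where you test).
dd if=/dev/zero of="$TESTFILE" bs=1M count="$SIZE_MB" \
   conv=nocreat,notrunc oflag=direct 2>/dev/null ||
dd if=/dev/zero of="$TESTFILE" bs=1M count="$SIZE_MB" \
   conv=nocreat,notrunc 2>/dev/null
```

You can confirm the file really is one extent with xfs_bmap before timing the rewrite.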
I notice a ~30% hit if I go through the Linux buffer cache as well.
Have you tried different write sizes?
e.g. 32M, 64M, 128M, 256M... I notice there is an optimal size, after
which things fall off...
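One way to sweep for that knee, as a sketch (the output path is a placeholder; conv=fdatasync makes each run include the final flush so cached writes don't inflate the numbers, and the tiny count should be scaled up for a real test):

```shell
# OUT is a placeholder path; point it at the filesystem under test.
OUT="${OUT:-./bs_sweep.bin}"

# Sweep the dd block size; dd's final status line on stderr reports
# the measured throughput for each size.
for bs in 1M 4M 16M 32M; do
    printf '%-4s ' "$bs"
    dd if=/dev/zero of="$OUT" bs="$bs" count=8 conv=fdatasync 2>&1 |
        tail -n 1
done
rm -f "$OUT"
```

Add oflag=direct to each dd invocation to compare the same sweep without the page cache.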
Do you run 'time' on "dd" as well to see how much CPU it is using?
If you use bash, I find setting TIMEFORMAT to
"%2Rsec %2Uusr %2Ssys (%P%% cpu)"
a useful format (one line, and it shows CPU % usage)...
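For example (the dd transfer is just a stand-in workload; TIMEFORMAT is a bash-only feature, hence the explicit bash -c):

```shell
# TIMEFORMAT is bash-specific, so run the timed command under bash.
bash -c '
  TIMEFORMAT="%2Rsec %2Uusr %2Ssys (%P%% cpu)"
  # time prints its one-line report to stderr; dd chatter is discarded.
  time dd if=/dev/zero of=/dev/null bs=1M count=256 2>/dev/null
'
```

%R/%U/%S are elapsed/user/system seconds, %P is the CPU percentage, and the doubled %% emits a literal percent sign.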
just some random thoughts...
_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs