On Thu, Nov 08, 2012 at 11:10:26PM -0800, Linda Walsh wrote:
> Dave Chinner wrote:
> >On Thu, Nov 08, 2012 at 12:30:11PM -0800, Linda Walsh wrote:
> >>FWIW, the benefit probably comes from the read file, as the written file
> >>is written with DIRECT I/O and I can't see that it should make a difference
> >>there.
> >
> >Hmmm, so it does. I think that's probably the bug that needs to be
> >fixed, not so much using posix_fadvise....
> ---
> Well... using direct I/O might be another way of fixing it...
> but I notice that neither the reads nor the writes seem to use the optimal
> I/O size that takes RAID alignment into consideration. It aligns for memory
> alignment and aligns for a 2-4k device alignment, but doesn't seem to take
> into consideration minor things like a 64k stripe unit x 12-wide data width
> (768k)... if you do direct I/O, you might want to be sure to RAID align it...

Sure, you can get that information from the fs geometry ioctl.

> Doing <64k at a time would cause heinous perf...

while fsr is using:

#define BUFFER_MAX (1<<24)
....
	blksz_dio = min(dio.d_maxiosz, BUFFER_MAX - pagesize);

	if (argv_blksz_dio != 0)
		blksz_dio = min(argv_blksz_dio, blksz_dio);
	blksz_dio = (min(statp->bs_size, blksz_dio) / dio_min) * dio_min;

so the buffer size starts at 16MB and ends up as the minimum of the
buffer size and the file size, as can be seen here:

/mnt/test/foo extents=6 can_save=1 tmp=/mnt/test/.fsr4188
DEBUG: fsize=17825792 blsz_dio=16773120 d_min=512 d_max=2147483136 pgsz=4096

So, really, if you want to change that to be stripe width aligned, you
could quite easily do that...

However, if you really wanted to increase fsr throughput, using AIO and
keeping multiple IOs in flight at once would be a much better option, as
it would avoid the serialised read-write-read-write-... pattern that
limits the throughput now...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
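
A rough sketch of the geometry ioctl mentioned above: XFS_IOC_FSGEOMETRY
reports the stripe unit and stripe width in filesystem blocks, which is
enough to round a direct I/O buffer size down to a stripe width multiple.
This is illustrative only, not fsr code - the header path, the variable
names and the fixed 16MB starting size are assumptions:

/*
 * Sketch only: query the XFS geometry for a file/dir given on the command
 * line and round a 16MB direct I/O buffer size down to a stripe width
 * multiple.  Assumes the xfsprogs headers are installed.
 */
#include <xfs/xfs.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	struct xfs_fsop_geom geo;
	unsigned long long swidth_bytes, blksz_dio = 1 << 24; /* BUFFER_MAX */
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <file-on-xfs>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, XFS_IOC_FSGEOMETRY, &geo) < 0) {
		perror("XFS_IOC_FSGEOMETRY");
		return 1;
	}

	/* sunit and swidth are reported in filesystem blocks */
	swidth_bytes = (unsigned long long)geo.swidth * geo.blocksize;
	if (swidth_bytes)
		blksz_dio -= blksz_dio % swidth_bytes; /* round down to stripe width */

	printf("sunit=%u swidth=%u blocks, stripe-aligned dio size=%llu bytes\n",
		geo.sunit, geo.swidth, blksz_dio);
	close(fd);
	return 0;
}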
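
And a rough sketch of the AIO suggestion, keeping several reads in flight
with POSIX AIO instead of a serialised read/write loop. The queue depth,
request size and command line handling are arbitrary, and a real fsr-style
tool would overlap the direct I/O writes as well:

/*
 * Sketch only: keep NQUEUED reads in flight with POSIX AIO.  As each read
 * completes it is reissued at the next offset, so the queue stays full.
 * Build with: cc -o aioread aioread.c -lrt
 */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NQUEUED	4		/* IOs kept in flight */
#define REQSZ	(1 << 20)	/* 1MB per request */

int main(int argc, char **argv)
{
	struct aiocb cb[NQUEUED];
	const struct aiocb *list[NQUEUED];
	off_t off = 0;
	int fd;

	if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
		fprintf(stderr, "usage: aioread <file>\n");
		return 1;
	}

	/* submit the initial batch of reads */
	for (int i = 0; i < NQUEUED; i++) {
		memset(&cb[i], 0, sizeof(cb[i]));
		cb[i].aio_fildes = fd;
		cb[i].aio_buf = malloc(REQSZ);
		cb[i].aio_nbytes = REQSZ;
		cb[i].aio_offset = off;
		off += REQSZ;
		aio_read(&cb[i]);
		list[i] = &cb[i];
	}

	/* as each read completes, consume it and immediately reissue it at
	 * the next offset so NQUEUED IOs stay outstanding */
	for (;;) {
		aio_suspend(list, NQUEUED, NULL);
		for (int i = 0; i < NQUEUED; i++) {
			if (aio_error(&cb[i]) == EINPROGRESS)
				continue;
			if (aio_return(&cb[i]) <= 0)
				goto done;	/* EOF or error */
			/* ... hand cb[i].aio_buf to the writer here ... */
			cb[i].aio_offset = off;
			off += REQSZ;
			aio_read(&cb[i]);
		}
	}
done:
	close(fd);
	return 0;
}

Note this only covers the read side; for the O_DIRECT write side, the
Linux-native io_submit() interface (libaio) is the more usual choice.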