From: Dave Chinner <dchinner@xxxxxxxxxx>

When discontiguous directory buffer support was fixed in xfs_repair
(dd9093d xfs_repair: fix discontiguous directory block support), it
changed to using libxfs_getbuf_map() to support mapping discontiguous
blocks, and the prefetch code special cased such discontiguous buffers.

The issue is that libxfs_getbuf_map() marks all buffers, even
contiguous ones, as LIBXFS_B_DISCONTIG, and so the prefetch code was
treating every buffer as discontiguous. This caused the prefetch code
to completely bypass the large IO optimisations for dense areas of
metadata. Because there was no obvious change in performance or IO
patterns, this wasn't noticed during performance testing.

However, this change mysteriously fixed a regression in xfs/033 in the
v3.2.0-alpha release, and the change in behaviour was discovered as
part of triaging why it "fixed" the regression.

Anyway, restoring the large IO prefetch optimisation results in the
runtime of a repair of a 10 million inode filesystem dropping from
197s to 173s, and the peak IOPS rate in phase 3 dropping from 25,000
to roughly 2,000 by trading off a bandwidth increase of roughly 100%
(i.e. 200MB/s to 400MB/s). Phase 4 saw similar changes in IO profile
and speed increases.

This, however, re-introduces the regression in xfs/033, which will now
be fixed in a separate patch.
Reported-by: Eric Sandeen <esandeen@xxxxxxxxxx>
Signed-off-by: Dave Chinner <dchinner@xxxxxxxxxx>
---
 libxfs/rdwr.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/libxfs/rdwr.c b/libxfs/rdwr.c
index ac7739f..78a9b37 100644
--- a/libxfs/rdwr.c
+++ b/libxfs/rdwr.c
@@ -590,6 +590,10 @@ libxfs_getbuf_map(struct xfs_buftarg *btp, struct xfs_buf_map *map,
 	struct xfs_bufkey key = {0};
 	int i;
 
+	if (nmaps == 1)
+		return libxfs_getbuf_flags(btp, map[0].bm_bn, map[0].bm_len,
+					   flags);
+
 	key.buftarg = btp;
 	key.blkno = map[0].bm_bn;
 	for (i = 0; i < nmaps; i++) {
-- 
1.8.4.rc3