On 4/17/17 3:57 PM, Brian Foster wrote:
> On Thu, Apr 13, 2017 at 01:45:43PM -0500, Eric Sandeen wrote:
>> Carlos had a case where "find" seemed to start spinning
>> forever and never return.
>>
>> This was on a filesystem with non-default multi-fsb (8k)
>> directory blocks, and a fragmented directory with extents
>> like this:
>>
>> 0:[0,133646,2,0]
>> 1:[2,195888,1,0]
>> 2:[3,195890,1,0]
>> 3:[4,195892,1,0]
>> 4:[5,195894,1,0]
>> 5:[6,195896,1,0]
>> 6:[7,195898,1,0]
>> 7:[8,195900,1,0]
>> 8:[9,195902,1,0]
>> 9:[10,195908,1,0]
>> 10:[11,195910,1,0]
>> 11:[12,195912,1,0]
>> 12:[13,195914,1,0]
>> ...
>>
>
> This fix seems fine to me, but I'm wondering if this code may have
> issues with other kinds of misalignment between the directory blocks and
> underlying bmap extents as well. For example, what happens if we end up
> with something like the following on an 8k dir fsb fs?
>
> 0:[0,xxx,3,0]
> 1:[3,xxx,1,0]
>
> ... or ...
>
> 0:[0,xxx,3,0]
> 1:[3,xxx,3,0]

Well, as far as that goes it won't be an issue; for 8k dir block sizes
we will allocate an extent map with room for 10 extents, so we'll go
well beyond the above extents which cross directory block boundaries.

> ...
> N:[...]
>
> Am I following correctly that we may end up assuming the wrong mapping
> for the second dir fsb and/or possibly skipping blocks?

As far as I can tell, this code is only managing the read-ahead state by
looking at these cached extents. We keep track of our position within
that allocated array of mappings - this bug just stepped off the end
while doing so. Stopping at the correct point should keep all of the
state consistent and correct.

But yeah, it's kind of hairy & hard to read, IMHO.

Also, as far as I can tell, we handle such discontiguities correctly,
other than the bug I found. If you see something that looks suspicious,
I'm sure I could tweak my test case to craft a specific situation if
there's something you'd like to see tested...
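To make the geometry concrete, here's a small Python sketch (not the kernel code; all names are made up) of how each 8k directory block, i.e. two 4k fs blocks, maps onto the fragmented extents from the dump above. It shows why the readahead walk has to track its position against the cached extent array and stop at the array's end: most dir blocks here span two separate one-block extents.

```python
# Hypothetical illustration only -- the extent tuples are
# (startoff, startblock, blockcount), as in the xfs_bmap-style dump above.
extents = [
    (0, 133646, 2),
    (2, 195888, 1), (3, 195890, 1), (4, 195892, 1),
    (5, 195894, 1), (6, 195896, 1), (7, 195898, 1),
    (8, 195900, 1), (9, 195902, 1),
]

FSBS_PER_DIRBLK = 2  # 8k directory block / 4k fs block

def dirblock_extents(extents, fsbs_per_dirblk):
    """Yield (dirblock_index, extent_indices) for each fully mapped dir block."""
    total_fsbs = sum(cnt for _, _, cnt in extents)
    ndirblocks = total_fsbs // fsbs_per_dirblk  # ignore any trailing partial block
    for d in range(ndirblocks):
        lo = d * fsbs_per_dirblk      # first fs block of this dir block
        hi = lo + fsbs_per_dirblk     # one past the last fs block
        # every cached extent that overlaps [lo, hi)
        idxs = [i for i, (off, _, cnt) in enumerate(extents)
                if off < hi and off + cnt > lo]
        yield d, idxs

for d, idxs in dirblock_extents(extents, FSBS_PER_DIRBLK):
    print("dir block", d, "-> extents", idxs)
# dir block 0 -> extents [0]
# dir block 1 -> extents [1, 2]
# dir block 2 -> extents [3, 4]
# dir block 3 -> extents [5, 6]
# dir block 4 -> extents [7, 8]
```

Swapping in Brian's hypothetical layout, e.g. `extents = [(0, 1, 3), (3, 2, 1)]`, shows the other misalignment case: extent 0 crosses the dir block boundary, so dir block 1 spans both extents.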
-Eric

> Brian