On 8/23/23 7:03 PM, Qu Wenruo wrote:
>
>
> On 2023/8/24 06:27, Darrick J. Wong wrote:
>> On Wed, Aug 23, 2023 at 05:18:02PM -0500, Bill O'Donnell wrote:
>>> On Wed, Aug 23, 2023 at 09:46:41AM -0700, Darrick J. Wong wrote:
>>>> On Wed, Aug 23, 2023 at 10:43:50AM -0500, Bill O'Donnell wrote:
>>>>> xfs_io pwrite issues a series of block size writes, but there is
>>>>> no guarantee that the resulting extent(s) will be singular or
>>>>> contiguous.
>
> However this doesn't make much difference, at least for btrfs.
>
> Btrfs would do the merging emitting the fiemap entry, thus even if the
> write didn't result a singular extent, as long as they are contiguous
> (under most cases they are) the fiemap result would still be a single
> one.

btrfs may merge physically adjacent extents in fiemap output, but if
the write happened to create physically discontiguous extents for the
logical range of the single pwrite, I think you would run into the same
problem Bill described (the golden output expects a single extent from
fiemap), yes?

"under most cases" is the problem here, and results in a flaky test. :)

>>>>> This behavior is acceptable, but the test is flawed in that it
>>>>> expects a single extent for a pwrite.
>
> I'm more interested in if you're hitting any test failure?

As Bill noted, yes - under some circumstances the pwrite on XFS creates
physically discontiguous extents for the range. (Why it does so is also
an interesting question, for later...)

<snip>

>> Qu: Question for you:
>
> Thanks a lot for referring it to me.
>
>>
>>>> I don't know why $blocksize is set to 128k above.
>>>> If this test needs to guarantee that there would only be *one*
>>>> extent (and the golden output implies this as you note), then it
>>>> should have been written to say:
>>>>
>>>> blocksize=$(_get_file_block_size $SCRATCH_MNT)
>>>>
>>>> But I don't know if the "btrfs soft lock up and return wrong shared
>>>> flag" behavior required sharing a (probably multi-block) 128k range,
>>>> or if that was simply what the author selected because it reproduced
>>>> the problem.
>
> It's quite some time ago, thus my memory may not be reliable, but IIRC
> the blocksize has no specific requirement other than allowing all
> possible blocksizes (4K to 64K).
>
> And at that time, at least I was preferring to use golden output to
> detect errors, thus I chose a larger blocksize to allow all blocksizes
> to work.

Ok, if it was simply "accommodate the maximum blocksize" then I like
Darrick's suggestion to simply get the actual filesystem block size,
and use that.

>>
>> Any thoughts?
>>
>> --D
>>
>>>>
>>>>>
>>>>> # success, all done
>>>>> status=0
>>>>> diff --git a/tests/generic/352.out b/tests/generic/352.out
>>>>> index 4ff66c21..ad90ae0d 100644
>>>>> --- a/tests/generic/352.out
>>>>> +++ b/tests/generic/352.out
>>>>> @@ -1,5 +1,3 @@
>>>>>  QA output created by 352
>>>>>  wrote 131072/131072 bytes at offset 0
>>>>>  XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
>>>>> -0: [0..2097151]: shared
>>>>> -1: [2097152..2097407]: shared|last
>>>>
>>>> Also I suspect from the test description that the goal here was to
>>>> detect the golden output failing because the shared flag does not
>>>> get reported correctly.
>
> Could you explain more on why the shared flag detection is not correct
> here?
>
> If a file extent is shared, no matter if it's shared by another inode
> or not, shouldn't it be marked with SHARED flag?
I think that what Darrick meant was that Bill's patch removed "shared"
from the golden output, and if the entire point of the test was to
detect proper sharing:

# This test case will check if reserved extent map searching go
# without problem and return correct SHARED flag.
# Which btrfs will soft lock up and return wrong shared flag.

then it was no longer testing as originally designed.

-Eric

> Thanks,
> Qu
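[Editor's note: a minimal standalone sketch of Darrick's suggestion
above -- sizing the write from the real filesystem block size instead of
a hardcoded 128k. `_get_file_block_size` is the real xfstests helper
from common/rc, but since it needs a mounted scratch filesystem it is
stubbed here so the fragment runs on its own; the multiplier of 32
blocks is an arbitrary illustrative choice, not taken from the test.]

```shell
# Sketch only, not a drop-in patch for generic/352.
# Stub for the xfstests helper; the harness version queries the fs.
_get_file_block_size() { echo 4096; }

blocksize=$(_get_file_block_size "$SCRATCH_MNT")
filesize=$((blocksize * 32))   # derive size from blocksize, not fixed 128k
echo "blocksize=$blocksize filesize=$filesize"
```

With the stubbed 4k blocksize this prints "blocksize=4096
filesize=131072"; on a 64k-blocksize filesystem the same expression
would scale accordingly, which is the point of the suggested change.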
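[Editor's note: a hypothetical sketch of a less flaky check than exact
golden-output matching -- count the fiemap records and verify that every
one carries the "shared" flag, rather than requiring a single extent
with fixed boundaries. The here-quoted sample stands in for real
xfs_io fiemap output (it mirrors the two-extent shape from the 352.out
hunk quoted above); the variable names are illustrative, not from the
test.]

```shell
# Sample fiemap output standing in for: xfs_io -c "fiemap -v" $testfile
fiemap_out='0: [0..2097151]: shared
1: [2097152..2097407]: shared|last'

# Count extent records, and count any record missing the "shared" flag.
extents=$(printf '%s\n' "$fiemap_out" | wc -l)
not_shared=$(printf '%s\n' "$fiemap_out" | grep -cv 'shared')

echo "extents=$extents not_shared=$not_shared"
```

A test built this way would pass whether pwrite produced one extent or
several, while still failing if btrfs (or any filesystem) dropped the
SHARED flag -- which is what the original test was designed to catch.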