Re: [RFC v2 02/11] fs/buffer: add a for_each_bh() for block_read_full_folio()

On Sat, Dec 14, 2024 at 04:02:53AM +0000, Matthew Wilcox wrote:
> On Fri, Dec 13, 2024 at 07:10:40PM -0800, Luis Chamberlain wrote:
> > -	do {
> > +	for_each_bh(bh, head) {
> >  		if (buffer_uptodate(bh))
> >  			continue;
> >  
> > @@ -2454,7 +2464,9 @@ int block_read_full_folio(struct folio *folio, get_block_t *get_block)
> >  				continue;
> >  		}
> >  		arr[nr++] = bh;
> > -	} while (i++, iblock++, (bh = bh->b_this_page) != head);
> > +		i++;
> > +		iblock++;
> > +	}
> 
> This is non-equivalent.  That 'continue' you can see would increment i
> and iblock.  Now it doesn't.

Thanks, not sure how I missed that! With that fix in place I ran a full
baseline against ext4 and all XFS profiles.
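
As an aside, a minimal user-space sketch of the pitfall (illustrative
only, not the actual fs/buffer.c code): in the old do-while, the
"i++, iblock++" comma expressions live in the loop condition, which a
continue still evaluates; once the increments move to the bottom of a
for-style body, continue jumps past them.

#include <stdio.h>

int main(void)
{
	int i, n;

	/*
	 * do-while form: continue jumps to the condition, so the
	 * comma-operator i++ still runs on every pass.
	 */
	i = 0;
	n = 0;
	do {
		if (n % 2)
			continue;
	} while (i++, ++n < 8);
	printf("do-while: i = %d\n", i);	/* prints 8 */

	/*
	 * Converted form: continue skips the increment at the bottom
	 * of the body, so i only advances for even n.
	 */
	i = 0;
	for (n = 0; n < 8; n++) {
		if (n % 2)
			continue;
		i++;
	}
	printf("for loop: i = %d\n", i);	/* prints 4 */

	return 0;
}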

For ext4 the new failures I see are just:

  * generic/044
  * generic/045
  * generic/046

These cover cases where we race writing a file against truncating it,
and then check that if the file has a non-zero size it also has extents.
I'll bisect to find which commit regresses this.

For XFS I've tested 20 XFS profiles (non-LBS) and 4 LBS profiles, using
the latest kdevops-results-archive [0] test results for
"fixes-6.13_2024-12-11" as the baseline and these patches plus the loop
fix you mentioned as the test. I mostly see these, which I still need
to look into:

  * xfs/009
  * xfs/059
  * xfs/155
  * xfs/168
  * xfs/185
  * xfs/301
  * generic/753

I'm not sure yet if these are flaky or real. The LBS profiles are using 4k sector sizes.

Also, when testing with the XFS 32k sector size profile, generic/470
reveals that device mapper needs to be updated to reject sector sizes
larger than it supports, as we do in the nvme block driver.
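
Roughly the kind of guard I mean, as a sketch only:
dm_validate_sector_size() is a made-up name and the hook point is left
out; the idea just mirrors how nvme refuses LBA sizes it cannot handle.

#include <linux/blkdev.h>

/*
 * Sketch only: dm_validate_sector_size() is not an existing dm API.
 * Until dm grows LBS support, stacking on a device whose logical
 * block size exceeds PAGE_SIZE should be rejected up front.
 */
static int dm_validate_sector_size(struct block_device *bdev)
{
	unsigned int lbs = bdev_logical_block_size(bdev);

	if (lbs > PAGE_SIZE)
		return -EINVAL;

	return 0;
}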

The full set of failures for XFS with 32k sector sizes:

generic/054 generic/055 generic/081 generic/102 generic/172 generic/223
generic/347 generic/405 generic/455 generic/457 generic/482 generic/500
generic/741 xfs/014 xfs/020 xfs/032 xfs/049 xfs/078 xfs/129 xfs/144
xfs/149 xfs/164 xfs/165 xfs/170 xfs/174 xfs/188 xfs/206 xfs/216 xfs/234
xfs/250 xfs/253 xfs/284 xfs/289 xfs/292 xfs/294 xfs/503 xfs/514 xfs/522
xfs/524 xfs/543 xfs/597 xfs/598 xfs/604 xfs/605 xfs/606 xfs/614 xfs/631
xfs/806

The full output I get by comparing the test results from
fixes-6.13_2024-12-11 and the run I just did, inside
kdevops-results-archive:

./bin/compare-results-fstests.py d48182fc621f87bc941ef4445e4585a3891923e9 cd7aa6fc6e46733a5dcf6a10b89566cabe0beaf

Comparing commits:
Baseline:      d48182fc621f | linux-xfs-kpd: Merge tag 'fixes-6.13_2024-12-11' of https://git.kernel.org/pub/scm/linux/kernel/git/djwong/xfs-linux into next-rc
Test:          cd7aa6fc6e46 | linux-xfs-kpd: loop fix noted by willy

Baseline Kernel: 6.13.0-rc2+
Test Kernel:     6.13.0-rc2+

Test Results Comparison:
================================================================================

Profile: xfs_crc
  New Failures:
    + xfs/059

Profile: xfs_crc_rtdev_extsize_28k
  New Failures:
    + xfs/301
  Resolved Failures:
    - xfs/185

Profile: xfs_crc_rtdev_extsize_64k
  New Failures:
    + xfs/155
    + xfs/301
  Resolved Failures:
    - xfs/629

Profile: xfs_nocrc
  New Failures:
    + generic/753

Profile: xfs_nocrc_2k
  New Failures:
    + xfs/009

Profile: xfs_nocrc_4k
  New Failures:
    + xfs/301

Profile: xfs_reflink_1024
  New Failures:
    + xfs/168
  Resolved Failures:
    - xfs/033

Profile: xfs_reflink_16k_4ks
  New Failures:
    + xfs/059

Profile: xfs_reflink_8k_4ks
  New Failures:
    + xfs/301

Profile: xfs_reflink_dir_bsize_8k
  New Failures:
    + xfs/301

Profile: xfs_reflink_stripe_len
  New Failures:
    + xfs/301

[0] https://github.com/linux-kdevops/kdevops-results-archive
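
For context, the comparison boils down to set differences per profile:
a test is a "new failure" if it fails in the test run but not the
baseline, and "resolved" in the opposite case. A toy sketch with
made-up failure sets (not the actual compare-results-fstests.py code):

#include <stdio.h>
#include <string.h>

static int in_set(const char *name, const char **set, int n)
{
	for (int i = 0; i < n; i++)
		if (!strcmp(name, set[i]))
			return 1;
	return 0;
}

int main(void)
{
	/* Made-up example sets for one profile. */
	const char *baseline[] = { "xfs/629", "generic/475" };
	const char *test[]     = { "xfs/155", "xfs/301", "generic/475" };
	int nb = 2, nt = 3;

	for (int i = 0; i < nt; i++)
		if (!in_set(test[i], baseline, nb))
			printf("  + %s\n", test[i]);	/* new failure */
	for (int i = 0; i < nb; i++)
		if (!in_set(baseline[i], test, nt))
			printf("  - %s\n", baseline[i]);	/* resolved */
	return 0;
}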

  Luis



