>> The issue is 'xfs_wb*iomap_invalid' not getting triggered when we have larger
>> bs. I basically increased the blksz in the test based on the underlying bs.
>> Maybe there is a better solution than what I proposed, but it fixes the test.
>
> The only improvement I can think of would be to force-disable large
> folios on the file being tested. Large folios mess with testing because
> the race depends on write and writeback needing to walk multiple pages.
> Right now the pagecache only institutes large folios if the IO patterns
> are large IOs, but in theory that could change some day.
>

Hmm, so do we create a debug parameter to disable large folios while the
file is being tested? The only issue is that the LBS work needs large
folios to be enabled. So perhaps the solution is to add a debug parameter
that disables large folios for normal block sizes (bs <= ps) while the
test runs, and to disable this test altogether for LBS (bs > ps)?

> I suspect that the iomap tracepoint data and possibly
> trace_mm_filemap_add_to_page_cache might help figure out what size
> folios are actually in use during the invalidation test.
>

Cool! I will see if I can do some analysis by enabling
trace_mm_filemap_add_to_page_cache while running the test.

> (Perhaps it's time for me to add a 64k bs VM to the test fleet.)
>

I confirmed with Chandan that Oracle OCI with Ampere supports a 64k page
size. We (Luis and I) are also looking into running kdevops on XFS with a
64k page size and block size, as that might be useful for cross-verifying
the failures in the LBS work.
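For reference, the bs <= ps vs. bs > ps split could be sketched as a small guard along these lines. This is only a sketch: it uses plain coreutils stat/getconf instead of the xfstests helpers, and "/" stands in for the real test mount point.

```shell
#!/bin/sh
# Sketch: decide whether the invalidation test can run, based on whether
# the filesystem block size exceeds the page size (the LBS case).
# Assumes coreutils stat; "/" is a placeholder for the test mount point.

page_size=$(getconf PAGE_SIZE)
# %S = fundamental filesystem block size
fs_block_size=$(stat -f -c %S /)

if [ "$fs_block_size" -gt "$page_size" ]; then
    # bs > ps: LBS requires large folios, so skip rather than disable them
    echo "notrun: bs ($fs_block_size) > ps ($page_size)"
else
    # bs <= ps: safe to disable large folios for the duration of the test
    echo "run: bs ($fs_block_size) <= ps ($page_size)"
fi
```

In real xfstests code this would sit behind the usual _notrun machinery rather than a bare echo.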
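For the tracepoint analysis, mm_filemap_add_to_page_cache can be enabled via tracefs while the test runs; a rough sketch (assuming tracefs is mounted at /sys/kernel/tracing and the script runs as root; the timeout and head limits are arbitrary):

```shell
#!/bin/sh
# Sketch: enable the mm_filemap_add_to_page_cache tracepoint and watch
# what lands in the page cache while the test runs.
# Needs root and a mounted tracefs; falls back to a message otherwise.
TRACEFS=/sys/kernel/tracing
EVENT=$TRACEFS/events/filemap/mm_filemap_add_to_page_cache

if [ -w "$EVENT/enable" ]; then
    echo 1 > "$EVENT/enable"
    # Newer kernels report a folio order in this event's output, which is
    # exactly the "what size folios are in use" question (worth verifying
    # on the kernel under test).
    timeout 5 cat "$TRACEFS/trace_pipe" | head -20
    echo 0 > "$EVENT/enable"
else
    echo "tracefs event not writable; run as root with tracefs mounted"
fi
```

Correlating these entries with the iomap tracepoints should show whether the write/writeback race is actually walking multiple pages or a single large folio.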