When testing with an SSD that has a 4k logical sector size, I ran into
a number of test failures caused by the assumption that the storage
device can support 1k block sizes. Fix this by skipping those tests,
or, in the case of generic/563, by making sure the loop device has the
same block size as the backing scratch device. (Arguably losetup
should do that, but at least today, it doesn't.)

This test series was tested using:

    gce-xfstests --local-ssd-nvme -c ext4,xfs,btrfs -g auto

and comparing the results against the same set of tests run without
the --local-ssd-nvme option, which introduces the use of a 4k sector
storage device.

With these patches applied, there is one remaining failure, in
xfs/157, but I'm not sure how to deal with it: Google searches for the
failure message simply say "Don't try to use a regular file as a
logdev", which isn't particularly helpful here. Maybe the right thing
is to just hard-code a _notrun if "blockdev --getss $SCRATCH_DEV" is
not 512? It's not clear to me, so I've left it alone.

I also noted two new failures with btrfs, generic/175 and
generic/251. Why these tests fail with a 4k sector device is not
obvious to me. But things are certainly much better with this patch
series, and perhaps the btrfs and xfs developers can address these
last new test failures if they care about this particular test
scenario.

Theodore Ts'o (2):
  common: check if the scratch device can support 1024 block sizes
  generic/563: create the loop dev with the same block size as the
    scratch dev

 common/rc         | 22 ++++++++++++++++++++--
 tests/ext4/055    |  1 +
 tests/generic/563 |  2 +-
 tests/xfs/205     |  1 +
 tests/xfs/432     |  1 +
 tests/xfs/516     |  1 +
 6 files changed, 25 insertions(+), 3 deletions(-)

-- 
2.31.0
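The check the series adds to common/rc boils down to comparing the
block size a test wants against the device's logical sector size. A
minimal sketch of that logic (the function name and interface here are
illustrative, not the actual common/rc helper, which probes
$SCRATCH_DEV itself):

```shell
# Illustrative sketch only: the logical sector size is passed in as a
# parameter so the logic can be exercised without a real block device.
_scratch_supports_block_size()
{
	local sector_size="$1"	# e.g. $(blockdev --getss "$SCRATCH_DEV")
	local block_size="$2"	# block size the test wants, e.g. 1024

	# A filesystem block can never be smaller than the device's
	# logical sector size.
	test "$block_size" -ge "$sector_size"
}
```

A test that needs 1k blocks would then _notrun when this check fails,
which is what happens on a 4k logical sector SSD.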
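For generic/563, the idea is to pass the scratch device's logical
sector size through to losetup when the loop device is created. A
hedged sketch, assuming a util-linux new enough to support losetup
--sector-size (the helper below is hypothetical and builds the command
as a string, rather than being the actual generic/563 change):

```shell
# Hypothetical helper: compose the losetup invocation that creates a
# loop device with the same logical sector size as the backing scratch
# device. Returned as a string so it can be inspected before running.
_loop_setup_cmd()
{
	local backing_file="$1"
	local sector_size="$2"	# e.g. $(blockdev --getss "$SCRATCH_DEV")

	# --sector-size requires a losetup that supports setting the
	# loop device's logical block size.
	echo "losetup --find --show --sector-size $sector_size $backing_file"
}
```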
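The possible xfs/157 workaround mentioned above could look roughly
like the following. This is untested, and whether it is the right fix
is exactly the open question; the sector size is a parameter here so
the predicate can be exercised without a device:

```shell
# Sketch of a hard-coded guard for xfs/157: skip unless the scratch
# device uses 512-byte logical sectors.
_sector_size_is_512()
{
	test "$1" -eq 512
}

# In the test itself this would be used roughly as:
#   _sector_size_is_512 "$(blockdev --getss "$SCRATCH_DEV")" || \
#       _notrun "this test requires a 512-byte sector scratch device"
```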