This set of patches fixes a few false positives I encountered when testing DAX on ppc64le (which has a 64k page size).

Patch 1 is actually not specific to non-4k page sizes. Right now we only test for dax incompatibility in the dm flakey target. This means that tests that use dm-thin or the snapshot target will still try to run. Moving the check to _require_dm_target fixes that problem.

Patches 2 and 3 get rid of hard-coded block/page sizes in the tests. They run just fine on 64k pages and 64k block sizes.

Even after these patches, there are many more tests that fail in the following configuration:

MKFS_OPTIONS="-b size=65536 -m reflink=0"
MOUNT_OPTIONS="-o dax"

One class of failures is tests that create a really small file system. Some of those tests seem to require the very small size, but others look like they could live with a slightly bigger size that would then fit the log (the typical failure is a mkfs failure due to not enough blocks for the log). For the former case, I'm tempted to send patches to _notrun those tests, and for the latter, I'd like to bump the file system sizes up. 300MB seems to be large enough to accommodate the log. Would folks be opposed to those approaches?

Another class of failures is tests that either hard-code a block size to trigger a specific error case, or that test a multitude of block sizes. I'd like to send a patch to _notrun those tests if there is a user-specified block size. That will require parsing the MKFS_OPTIONS based on the fs type, of course. Is that something that seems reasonable?

I will follow up with a series of patches to implement those changes if there is consensus on the approach. These first three seemed straightforward to me, so that's where I'm starting.

Thanks!
Jeff

[PATCH 1/3] dax/dm: disable testing on devices that don't support dax
[PATCH 2/3] t_mmap_collision: fix hard-coded page size
[PATCH 3/3] xfs/300: modify test to work on any fs block size
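For reference, the dax check that patch 1 moves into _require_dm_target could look roughly like the sketch below. This is a simplified standalone illustration, not the actual fstests code: the helper name matches, but the _notrun behavior is stubbed out with an echo, and MOUNT_OPTIONS is hard-wired to the dax configuration above.

```shell
#!/bin/sh
# Simplified sketch of the patch-1 idea: skip the test for *any* dm target
# (flakey, thin-pool, snapshot, ...) when dax is in use, since none of the
# dm targets support dax.
MOUNT_OPTIONS="-o dax"

_require_dm_target()
{
	target=$1
	# dm devices don't support dax, so the test must not run
	if echo "$MOUNT_OPTIONS" | grep -qw dax; then
		echo "notrun: Cannot run tests with dax on $target devices"
		return 1
	fi
}

_require_dm_target thin-pool
```

With dax in MOUNT_OPTIONS, the check fires for every dm target rather than only flakey.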
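The MKFS_OPTIONS parsing proposed above could be sketched as follows for the XFS case. This is an assumption about one possible implementation, not an existing fstests helper: it pulls a user-specified "-b size=" value out of MKFS_OPTIONS so a test that needs to control the block size itself could _notrun when one is set.

```shell
#!/bin/sh
# Sketch: detect a user-specified XFS block size in MKFS_OPTIONS.
# (For other fs types the option syntax differs, so the real helper
# would have to dispatch on FSTYP.)
MKFS_OPTIONS="-b size=65536 -m reflink=0"

# Extract the numeric value following "-b size=", if any.
blocksize=$(echo "$MKFS_OPTIONS" | sed -n 's/.*-b[ ]*size=\([0-9]*\).*/\1/p')

if [ -n "$blocksize" ]; then
	echo "user-specified block size: $blocksize"
fi
```

A test that hard-codes a block size would then _notrun whenever $blocksize is non-empty.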