On Wed, Aug 26, 2020 at 08:34:32PM +0200, Alberto Garcia wrote:
> On Tue 25 Aug 2020 09:47:24 PM CEST, Brian Foster <bfoster@xxxxxxxxxx> wrote:
> > My fio fallocates the entire file by default with this command. Is
> > that the intent of this particular test? I added --fallocate=none to
> > my test runs to incorporate the allocation cost in the I/Os.
>
> That wasn't intentional, you're right, it should use --fallocate=none
> (I don't see a big difference in my test anyway).
>
> >> The Linux version is 4.19.132-1 from Debian.
> >
> > Thanks. I don't have LUKS in the mix on my box, but I was running on
> > a more recent kernel (Fedora 5.7.15-100). I threw v4.19 on the box
> > and saw a bit more of a delta between XFS (~14k iops) and ext4
> > (~24k). The same test shows ~17k iops for XFS and ~19k iops for ext4
> > on v5.7. If I increase the size of the LVM volume from 126G to >1TB,
> > ext4 runs at roughly the same rate and XFS closes the gap to around
> > ~19k iops as well. I'm not sure what might have changed since v4.19,
> > but care to see if this is still an issue on a more recent kernel?
>
> Ok, I gave 5.7.10-1 a try but I still get similar numbers.
>

Strange.

> Perhaps with a larger filesystem there would be a difference? I don't
> know.
>

Perhaps. I believe Dave mentioned earlier how log size might affect
things. I created a 125GB lvm volume and see slight deltas in iops
going from testing directly on the block device, to a fully allocated
file on XFS/ext4 and then to a preallocated file on XFS/ext4. In both
cases the numbers are comparable between XFS and ext4. On XFS, I can
reproduce a serious drop in iops if I reduce the default ~64MB log down
to 8MB. Perhaps you could try increasing your log ('-lsize=...' at mkfs
time) and see if that changes anything?

Beyond that, I'd probably try to normalize and simplify your storage
stack if you wanted to narrow it down further. E.g., clean format the
same bdev for XFS and ext4 and pull out things like LUKS just to rule
out any poor interactions.

Brian

> Berto
>
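
To be concrete, the kind of reformat-and-retest run I mean is roughly
the following; the device path, mount point and fio job parameters are
just placeholders, adjust them to your setup:

  # recreate the filesystem with an explicit, larger log (~64MB is the
  # usual default for a volume this size)
  mkfs.xfs -f -l size=64m /dev/mapper/vg-lv
  mount /dev/mapper/vg-lv /mnt/test

  # random write test, letting the writes do the allocation instead of
  # preallocating the file up front
  fio --name=test --filename=/mnt/test/file --size=20G \
      --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
      --iodepth=32 --fallocate=none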