On Sun, 16 Feb 2020 20:18:05 -0700 Chris Murphy <lists@xxxxxxxxxxxxxxxxx> wrote:

> I don't think file system overhead accounts for much more than a couple
> percent of this, so I'm curious where the slow down might be
> happening? The "hosting" Btrfs file system is not busy at all at the
> time of the loop mounted filesystem's scrub. I did issue 'echo 3 >
> /proc/sys/vm/drop_caches' before scrubbing the loop mount image,
> otherwise I get ~1.72GiB/s scrubs, which exceeds the performance of the
> SSD (which is in the realm of 550MiB/s max).

Try comparing a simple dd read of that FS image with a dd read from the
underlying device of the host filesystem. With scrubs you might be
measuring the same thing, but they are a rather elaborate way to do so
-- and a plain dd also excludes any influence from the loop device
driver, or at least helps figure out the extent of it.

For me on 5.4.20:

dd if=zerofile iflag=direct of=/dev/null bs=1M
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 3.68213 s, 583 MB/s

dd if=/dev/mapper/cryptohome iflag=direct of=/dev/null bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 3.12917 s, 686 MB/s

Personally I am not really surprised by this difference; going through a
filesystem is of course going to introduce overhead compared to reading
directly from the block device it sits on. Briefly testing the same on
XFS, though, it seems to have less of it: about 6% instead of the 15% I
see here.
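If you want to separate out the loop driver specifically, something
along these lines should do it. The paths and device names below are
only placeholders for your setup -- substitute your actual image file
and the block device the hosting filesystem sits on:

# read the image file through the hosting Btrfs filesystem
dd if=/path/to/image.img iflag=direct of=/dev/null bs=1M count=2048

# read the same image through a loop device, to see what the loop driver adds
losetup --find --show /path/to/image.img   # prints the loop device, e.g. /dev/loop0
dd if=/dev/loop0 iflag=direct of=/dev/null bs=1M count=2048
losetup -d /dev/loop0

# read the underlying block device of the hosting filesystem directly
# (/dev/nvme0n1p3 here stands in for whatever your host FS is on)
dd if=/dev/nvme0n1p3 iflag=direct of=/dev/null bs=1M count=2048

Comparing the first two numbers shows what the loop device itself
costs, and the last one gives you the raw device baseline.

--
With respect,
Roman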