dev loop ~23% slower?

Hi,
I see an approximately 23% reduction in scrub performance through a
loop device. Is this expected? This is kernel 5.5.3.

The setup is an SSD, plain partitions, no LVM or encryption, Btrfs.
This scrub performance is typical:

$ sudo btrfs scrub status /
UUID:             b8e290d5-1dc5-429f-8201-10ca5b2c0b95
Scrub started:    Sun Feb 16 19:39:01 2020
Status:           finished
Duration:         0:00:54
Total to scrub:   28.00GiB
Rate:             531.06MiB/s
Error summary:    no errors found
[chris@fmac ~]$

On this file system there is a sparse file with chattr +C set; it's
attached to /dev/loop0 and mounted at /mnt.
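
For reference, the setup is along these lines (file name and size are
illustrative, not the exact ones used):

$ touch /var/tmp/loop.img               # create the file empty so +C can apply
$ chattr +C /var/tmp/loop.img           # disable CoW on the backing file
$ truncate -s 20G /var/tmp/loop.img     # extend it as a sparse file
$ sudo losetup /dev/loop0 /var/tmp/loop.img
$ sudo mkfs.btrfs /dev/loop0
$ sudo mount /dev/loop0 /mnt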

$ sudo btrfs scrub status /mnt
UUID:             63a7e2b9-6a5e-4e94-9cc9-f90d01de7541
Scrub started:    Sun Feb 16 20:06:51 2020
Status:           finished
Duration:         0:00:13
Total to scrub:   5.15GiB
Rate:             405.79MiB/s
Error summary:    no errors found
$
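
(For the arithmetic: 1 - 405.79/531.06 ≈ 0.236, i.e. the loop-backed
scrub runs about 23.6% slower, hence the ~23% in the subject.)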

I don't think file system overhead accounts for much more than a
couple percent of this, so I'm curious where the slowdown might be
happening. The "hosting" Btrfs file system is not busy at all at the
time of the loop-mounted file system's scrub. I did issue 'echo 3 >
/proc/sys/vm/drop_caches' before scrubbing the loop mount image;
otherwise I get ~1.72GiB/s scrubs, which exceeds the performance of
the SSD (in the realm of 550MiB/s max), i.e. those reads are coming
from the page cache.
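
In case anyone wants to reproduce the cold-cache numbers, the sequence
is roughly (-B just keeps the scrub in the foreground):

$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'   # force reads to hit the SSD
$ sudo btrfs scrub start -B /mnt                   # scrub the loop-backed fs
$ sudo btrfs scrub status /mnt                     # report the rate, as above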


Thanks,

-- 
Chris Murphy


