Re: max_sectors_kb limitations with VDO and dm-thin

On Fri, Apr 19 2019 at 10:40am -0400,
Ryan Norwood <ryan.p.norwood@xxxxxxxxx> wrote:

>    We have been using dm-thin layered above VDO and have noticed that our
>    performance is not optimal for large sequential writes, as max_sectors_kb
>    and max_hw_sectors_kb for all thin devices are set to 4k due to the VDO
>    layer beneath.
>    This effectively eliminates the sequential-write optimization that skips
>    both zeroing and COW overhead when a write fully overlaps a thin chunk,
>    because all bios are split into 4k pieces, which will always be smaller
>    than the 64k thin-chunk minimum.
>    Is this known behavior? Is there any way around this issue?

Are you creating the thin-pool to use a 4K thinp blocksize?  If not,
I'll have to look to see why the block core's block limits stacking
would impose these limits from the underlying data device.
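To make the symptom concrete, here is a minimal sketch of the check the
reporter is describing (the numbers are illustrative, not read from a real
device): a full-chunk overwrite can only skip zeroing/COW if a single bio
can cover a whole thin chunk, so a 4k bio limit defeats it for any chunk
size at or above the 64k minimum.

```shell
# Hypothetical values: on a real system, read max_sectors_kb from
# /sys/block/dm-X/queue/max_sectors_kb for the thin device.
max_sectors_kb=4      # VDO-limited bio size, per the report
thin_chunk_kb=64      # thinp minimum chunk size

if [ "$max_sectors_kb" -lt "$thin_chunk_kb" ]; then
    # Every write is split into bios smaller than a chunk, so no write
    # can ever fully overlap a chunk in one bio.
    echo "bios split below chunk size: full-chunk overwrite path unused"
fi
```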

>    We are using RHEL 7.5 with kernel 3.10.0-862.20.2.el7.x86_64.

OK, please let me know what the thin-pool's blocksize is.
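For reference, one way to answer this is from the pool's dmsetup table: the
thin-pool target line is `start length thin-pool <metadata_dev> <data_dev>
<data_block_size> <low_water_mark> ...`, where the data block size (chunk
size) is given in 512-byte sectors. A small conversion sketch, with an
example value standing in for the field you would read from `dmsetup table
<pool>`:

```shell
# Example: a data_block_size of 128 sectors taken from a dmsetup table
# line; substitute the value from your own pool.
chunk_sectors=128
# Convert 512-byte sectors to KiB for comparison against max_sectors_kb.
chunk_kb=$((chunk_sectors * 512 / 1024))
echo "${chunk_kb}"    # 128 sectors -> 64 KiB
```

`lvs -o +chunk_size` on the pool LV reports the same value directly if the
pool is managed by lvm2.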

Thanks,
Mike

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
