Re: Why is the performance of my lvmthin snapshot so poor

On Thu, Jun 16, 2022 at 03:22:09PM +0200, Gionatan Danti wrote:
> On 2022-06-16 09:53, Demi Marie Obenour wrote:
> > That seems reasonable.  My conclusion is that dm-thin (which is what LVM
> > uses) is not a good fit for workloads with a lot of small random writes
> > and frequent snapshots, due to the 64k minimum chunk size.  This also
> > explains why dm-thin does not allow smaller blocks: not only would it
> > only support very small thin pools, it would also have massive metadata
> > write overhead.  Hopefully dm-thin v2 will improve the situation.
> 
> I think that, in this case, no free lunch really exists. I tried the
> following thin provisioning methods, each with its strong & weak points:
> 
> lvmthin: probably the most flexible of the mainline kernel options. You pay
> the read/modify/write (r/m/w) penalty only when a small block (say 4K) is
> first written after taking a snapshot. It is fast and well integrated with
> the lvm command line. Con: bad behavior on out-of-space conditions

Also, the LVM command line is slow, and there is very large write
amplification with lots of random writes immediately after taking a
snapshot.  Furthermore, because of the mismatch between the dm-thin
block size and the filesystem block size, fstrim might not reclaim as
much space in the pool as one would expect.
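
To put rough numbers on that: with the 64k minimum chunk size, a 4k random
write into a freshly snapshotted region still copies a full 64k chunk, so the
worst case is roughly 16x write amplification.  A quick way to check and set
the chunk size (volume group and pool names below are made up):

    # report the chunk size of an existing thin pool
    lvs -o lv_name,chunk_size vg0/pool0

    # create a pool with an explicit 64k chunk size
    lvcreate --type thin-pool --chunksize 64k -L 100G -n pool0 vg0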

> xfs + reflink: a great, simple-to-use tool when applicable. It has a very
> small granularity (4K) with no r/m/w. Cons: requires fine tuning for good
> performance when reflinking big files; IO freezes during metadata copy for
> reflink; a very small granularity means sequential IO is going to suffer
> heavily (see here for more details:
> https://marc.info/?l=linux-xfs&m=157891132109888&w=2)

Also, heavy fragmentation can make journal replay very slow, to the point
of taking days on spinning hard drives.  Dave Chinner explains this here:
https://lore.kernel.org/linux-xfs/20220509230918.GP1098723@xxxxxxxxxxxxxxxxxxx/.
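
For anyone who wants to experiment, reflink copies can be made with plain
coreutils, and the resulting fragmentation can be inspected with filefrag or
xfs_bmap before drawing conclusions about sequential I/O (paths below are
just placeholders):

    # share extents instead of copying data; fails if the fs lacks reflink
    cp --reflink=always /data/vm.img /data/vm-clone.img

    # list the extents of the original after it has been overwritten in place
    filefrag -v /data/vm.img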

> btrfs: very small granularity (4K) and many integrated features. Cons: bad
> performance overall, especially on mechanical HDDs

Also poor out-of-space handling and unbounded worst-case latency.

> vdo: provides small-granularity (4K) thin provisioning, compression and
> deduplication. Cons: (still) out-of-tree; requires a power-loss-protected
> writeback cache to maintain good performance; no snapshot capability
> 
> zfs: designed from the ground up for pervasive CoW, with many features and
> ARC/L2ARC. Cons: out-of-tree; using a small granularity (4K) means bad
> overall performance; using a big granularity (128K by default) is a
> necessary compromise for most HDD pools.

Is this still a problem on NVMe storage?  HDDs will not really be fast
no matter what one does, at least unless there is a write-back cache
that can convert random I/O to sequential I/O.  Even that only helps
if your working set fits in the cache or your workload is write-mostly.
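
For reference, the granularity being discussed is the per-dataset recordsize
property, which can be tuned without recreating the pool; note that it only
applies to newly written data (dataset name below is made up):

    # use a smaller record size for random-write-heavy datasets
    zfs set recordsize=16K tank/vmstore
    zfs get recordsize tank/vmstore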

> For what it is worth, I settled on ZFS when using out-of-tree modules is not
> an issue and lvmthin otherwise (but I plan to use xfs + reflink more in the
> future).
> 
> Do you have any information to share about dm-thin v2? I heard about it some
> years ago, but I found no recent info.

It does not exist yet.  Joe Thornber would be the person to ask
regarding any plans to create it.

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab


