On 2022-06-16 18:19, Demi Marie Obenour wrote:
> Also heavy fragmentation can make journal replay very slow, to the
> point of taking days on spinning hard drives. Dave Chinner explains
> this here:
> https://lore.kernel.org/linux-xfs/20220509230918.GP1098723@xxxxxxxxxxxxxxxxxxx/
Thanks, the linked thread was very interesting.

> Also poor out-of-space handling and unbounded worst-case latency.

Very true.

> Is this still a problem on NVMe storage? HDDs will not really be fast
> no matter what one does, at least unless there is a write-back cache
> that can convert random I/O to sequential I/O. Even that only helps
> much if your working set fits in cache, or if your workload is
> write-mostly.

One of the key features of ZFS is that it transforms random writes into
sequential ones. With the right recordsize, and coupled with prefetch,
compressed ARC and L2ARC, even an HDD pool can be surprisingly usable.
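
As a concrete sketch (the pool/dataset names "tank" and "tank/data"
and the cache device are hypothetical, and lz4 is just one sensible
default), the tunables above map to commands like:

  zfs set recordsize=128K tank/data   # the default; raise for streaming,
                                      # lower for small random I/O
  zfs set compression=lz4 tank/data   # compressed blocks also stay
                                      # compressed in ARC
  zpool add tank cache nvme0n1        # attach an NVMe device as L2ARC
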
For NVMe pools you should use a much smaller recordsize to avoid
read/write amplification, but not smaller than 16K, so as not to impair
compression efficiency (unless you are storing mostly incompressible
data). That said, for pure NVMe storage (no compression or other data
transformations) I think XFS, possibly with direct I/O, is the fastest
choice, often by a factor of 2x.
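
For comparison, a minimal fio run exercising XFS with direct I/O could
look like the following (file path, size, block size and queue depth
are placeholder values to adapt to your workload):

  fio --name=randread --filename=/mnt/xfs/testfile --size=4G \
      --direct=1 --rw=randread --bs=4k --iodepth=32 \
      --ioengine=libaio --runtime=60 --time_based
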

> It does not exist yet. Joe Thornber would be the person to ask
> regarding any plans to create it.

Ok, I was hoping I had missed something, but that is not the case.
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8
_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/