Il 2020-09-09 21:41 Roy Sigurd Karlsbakk ha scritto:
First, file-level is usually useless. Say you have 50 VMs with Windows
Server something: a lot of them are bound to have a ton of equal data.
If you look at IOPS instead of just sequential speed, you'll see the
difference. A set of 10 drives in a RAID-6 will perhaps, maybe, give
you 1k IOPS, while a single SSD might give you 50k IOPS or even more.
This makes a huge impact.
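Taking the numbers above at face value, the gap in effective random throughput is easy to put into perspective (the 4 KiB request size is an assumption for illustration, not from the thread):

```shell
# ~1000 random IOPS for a 10-drive RAID-6 vs ~50000 for a single SSD,
# both serving 4 KiB random reads (figures quoted above)
hdd_iops=1000
ssd_iops=50000
echo "RAID-6 4KiB random throughput: $(( hdd_iops * 4 / 1024 )) MiB/s"
echo "SSD    4KiB random throughput: $(( ssd_iops * 4 / 1024 )) MiB/s"
```

So under a purely random workload the spinning array delivers only a few MiB/s, while the single SSD sustains a couple of orders of magnitude more.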
IOPS are already well served by LVM cache. So, I genuinely ask: what
would be the tiering advantage here? I'd love to hear a reasonable use
case.
LVMcache only helps if the cache is there in the first place, and IIRC
it's cleared after a reboot.
I seem to remember that the cache is persistent, but a writeback cache
must be flushed to the underlying disk after an unclean shutdown. This,
however, should not empty the cache itself.
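For reference, a writeback lvmcache can be set up roughly as follows; the VG, LV, and device names and the pool size are placeholders, not taken from the thread:

```shell
# Add the fast device (e.g. an NVMe SSD) to the VG holding the slow LV
vgextend vg0 /dev/nvme0n1

# Create a cache pool on the fast device, then attach it to the slow LV
# in writeback mode; the cache metadata is kept on disk, so the cache
# survives a clean reboot
lvcreate --type cache-pool -L 100G -n fastpool vg0 /dev/nvme0n1
lvconvert --type cache --cachepool vg0/fastpool \
          --cachemode writeback vg0/slowlv

# To detach the cache later, flushing any dirty writeback blocks
# back to the origin LV first:
lvconvert --splitcache vg0/slowlv
```

These commands need root and real block devices, so treat this as a sketch rather than a tested recipe.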
It won't help that much over time with large storage. It also wastes
space.
I tend to disagree: with large storage, you really want a hotspot cache
rather than a tiered approach, unless:
a) the storage tiers are comparable in size, which is quite rare;
b) the slow storage does some sort of offline compression/deduplication,
with the faster layer being a landing zone for newly ingested data.
Can you describe a reasonable real-world setup where plain LVM tiering
would be useful? Again, this is a genuine question: I am interested in
storage setups different from mine.
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8
_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/