>>>>> "Gionatan" == Gionatan Danti <g.danti@xxxxxxxxxx> writes:

Gionatan> On 2020-09-09 20:47 John Stoffel wrote:
>> This assumes you're tiering whole files, not at the per-block level
>> though, right?

Gionatan> The tiered approach I developed and maintained in the past,
Gionatan> yes. For any LVM-based tiering, we are speaking about
Gionatan> block-level tiering (as LVM itself has no "files" concept).

>> Do you have numbers?  I'm using dm-cache on my home NAS server box,
>> and it *does* seem to help, but only in certain cases.  I've got a
>> 750 GB home directory LV with an 80 GB lv_cache writethrough cache
>> setup.  So it's not great on write-heavy loads, but it's good in
>> read-heavy ones, such as kernel compiles, where it does make a
>> difference.

Gionatan> Numbers for available space for tiering vs cache can vary
Gionatan> based on your setup. However, storage tiers generally are at
Gionatan> least 5-10X apart from each other (ie: 1 TB SSD for 10 TB
Gionatan> HDD). Hence my gut feeling that tiering is not drastically
Gionatan> better than lvm cache. But hey - I reserve the right to be
Gionatan> totally wrong ;)

Very true: numbers talk, anecdotes walk...

>> So it's not only the caching being per-file or per-block, but how
>> the actual cache is done?  Writeback is faster, but less reliable if
>> you crash.  Writethrough is slower, but much more reliable.

Gionatan> writeback cache surely is more prone to failure vs
Gionatan> writethrough cache. The golden rule is that writeback cache
Gionatan> should use a mirrored device (with device-level powerloss
Gionatan> protected writeback cache if sync write speed is important).

Even in my case I use mirrored SSDs for my cache LVs.  It's the only
sane thing to do, IMHO.
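For anyone wanting to reproduce a setup along these lines, here is a rough sketch using stock LVM commands.  The device paths (/dev/nvme0n1, /dev/nvme1n1), VG name (vg0), LV name (home), and sizes are all made up for illustration -- substitute your own:

```shell
# Mirrored cache-data LV across two SSDs (raid1, one mirror copy)
lvcreate --type raid1 -m 1 -L 80G -n home_cache vg0 /dev/nvme0n1 /dev/nvme1n1

# Small mirrored metadata LV on the same SSDs
lvcreate --type raid1 -m 1 -L 1G -n home_cache_meta vg0 /dev/nvme0n1 /dev/nvme1n1

# Combine the two into a cache pool
lvconvert --type cache-pool --poolmetadata vg0/home_cache_meta vg0/home_cache

# Attach the pool to the slow LV in writethrough mode
lvconvert --type cache --cachepool vg0/home_cache \
    --cachemode writethrough vg0/home
```

With writethrough, losing the cache SSDs costs you performance but not data; mirroring the pool anyway means a single SSD failure doesn't force a cache rebuild.  See lvmcache(7) for the details.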
Gionatan> But this is somewhat orthogonal to the original question:
Gionatan> block-level tiering itself increases the chances of data
Gionatan> loss (ie: losing the SSD component will ruin the entire
Gionatan> filesystem), so you should use mirrored (or parity) devices
Gionatan> for tiering also.

It does.  You really need a solid setup in terms of hardware, with
known failure modes you can handle, before you start trying to tier
blocks.

Though maybe a writethrough block cache would be OK, since the cache
would only accelerate reads, not writes.  Which could help if you have
a bunch of VMs (or containers, or whatever) with a lot of duplicated
data that are all hitting the disk system at once.

_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/