Re: Looking ahead - tiering with LVM?

On 09. 09. 20 at 20:47, John Stoffel wrote:
"Gionatan" == Gionatan Danti <g.danti@xxxxxxxxxx> writes:

Gionatan> On 2020-09-09 17:01, Roy Sigurd Karlsbakk wrote:
First, file-level tiering is usually useless. Say you have 50 VMs with
Windows Server something. A lot of them are bound to have a ton of
identical data in the same areas, but the file size and content will
vary over time. With block-level tiering, that could work better.

Gionatan> It really depends on the use case. I applied it to a
Gionatan> fileserver, so working at file level was the right
Gionatan> choice. For VMs (or big files) it is useless, I agree.

This assumes you're tiering whole files, not at the per-block level
though, right?

This is all known.

Gionatan> But the only reason to want tiering vs cache is the
Gionatan> additional space the former provides. If this additional
Gionatan> space is so small (compared to the combined, total volume
Gionatan> space), tiering's advantage shrinks to (almost) nothing.

Do you have numbers?  I'm using dm-cache on my home NAS server box,
and it *does* seem to help, but only in certain cases.  I've got a
750GB home directory LV with an 80GB writethrough cache LV set up.
So it's not great on write-heavy loads, but it's good in read-heavy
ones, such as kernel compiles, where it does make a difference.

So it's not only whether the caching is per-file or per-block, but
also how the actual cache is done?  Writeback is faster, but less
reliable if you crash.  Writethrough is slower, but much more reliable.

Hi

dm-cache (--type cache) is a hotspot cache - it keeps the most used
areas of the device on the fast media.
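
For example, attaching one with lvconvert looks roughly like this (a
minimal sketch - the VG "vg", the LV "home", the fast PV and the size
are all hypothetical names made up for illustration):

  # create an LV on the fast PV to hold the cache
  lvcreate -n home_cache -L 80G vg /dev/nvme0n1p1

  # attach it to the slow LV as a hotspot cache; writethrough is the
  # safer mode from the discussion above, writeback the faster one
  lvconvert --type cache --cachevol home_cache \
            --cachemode writethrough vg/home

The mode can also be flipped later with
"lvconvert --cachemode writeback vg/home".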

dm-writecache (--type writecache) is great with a write-intensive load
(it somewhat extends your page cache onto your NVMe/SSD/persistent memory).
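
Attaching one is similar (again just a sketch with the same hypothetical
names; dm-writecache only caches writes, so there is no cache mode to pick):

  # create the cache LV on fast media and attach it as a writecache
  lvcreate -n home_wcache -L 80G vg /dev/nvme0n1p1
  lvconvert --type writecache --cachevol home_wcache vg/home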

We were thinking about layering caches on top of each other - but so far
there has been no big demand, and the complexity of the problem rises
greatly. There is no problem letting users stack one cache on top of
another cache on top of a third cache - but what should happen when one
of them starts failing...

AFAIK no one is writing a driver yet for combining e.g. SSD + HDD into
a single drive that would relocate blocks (so you get a total size that
is approximately the sum of both devices) - but there is dm-zoned, which
solves a somewhat similar problem - though I've no experience with that...
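
If someone wants to experiment, dm-zoned-tools ships dmzadm - roughly
like this (untested by me; /dev/sdb stands for a hypothetical
host-managed zoned drive):

  # write dm-zoned metadata onto the zoned device, then activate it
  dmzadm --format /dev/sdb
  dmzadm --start /dev/sdb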

Zdenek

_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



