Re: Reserve space for specific thin logical volumes

There could be a simple answer and a complex one :)

I'd start with the simple one - already presented here -

when you write to an INDIVIDUAL thin volume target - the respective dm
thin target DOES manipulate a single btree set - it does NOT care that
there are other snapshots and it never influences them -

You are asking here to heavily 'change' the thin-pool logic - so that
writing to THIN volume A can remove/influence volume B - and this is very
problematic for many reasons.

We can go into the details of btree updates (that should really be
discussed with its authors on the dm channel ;)) - but I think the key
element is capturing the idea that the usage of thinLV A does not change
thinLV B.
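
To make this concrete - a quick illustrative example (my own sketch,
untested, assuming a VG named 'vg' with enough free space) showing that
a snapshot is just another thinLV mapping its own set of pool chunks:

  # create a thin-pool and one thin volume in it
  lvcreate --type thin-pool -L 10G -n pool vg
  lvcreate --thin -V 20G -n volA vg/pool

  # take a snapshot - in the kernel this is just another thinLV
  # that references the same set of thin-pool chunks
  lvcreate -s -n snapA vg/volA

  # each thinLV reports its own usage; writing to volA only adds
  # new chunks to volA's mapping, snapA keeps referencing the old ones
  lvs -a -o lv_name,pool_lv,data_percent vg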


----


Now on to your free 'reserved' space fiction :)
There is NO way to decide WHO deserves to use the reserve :)

Every thin volume is equal - (the fact that we call some thin LV a
snapshot is a user-land fiction - in the kernel all thinLVs are just
equal - every thinLV references a set of thin-pool chunks) -

(for late-night thinking - what would be a snapshot of a snapshot which
is fully overwritten ;))

So now that you see that all thinLVs just map sets of chunks,
and all thinLVs can be active and running concurrently - how do you
want to use reserves in the thin-pool :) ?
When do you decide it ?  (you need to see this is total race-land)
How do you actually orchestrate locking around this single point of failure ;) ?
You will surely come up with the idea of having a separate reserve for every thinLV ?
How big should it actually be ?
Are you going to 'refill' those reserves when the thin-pool gets emptier ?
How do you decide which thinLV deserves bigger reserves ;) ??

I assume you can start to SEE the whole point of this misery....

So instead - you can start with a normal thin-pool - keep it simple in the kernel,
and solve the complexity in user-space.

There you can decide - if you want to extend the thin-pool...
You may drop some snapshot...
You may fstrim mounted thinLVs...
You can kill volumes way before the situation becomes unmaintainable....
(a rough sketch of such a policy follows below)
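
Just to illustrate (a minimal sketch of my own, not a tested script - it
assumes a pool named vg/pool, a droppable snapshot vg/snapA and a thinLV
mounted at /mnt/thinvol), such a user-space policy can be a trivial cron job:

  #!/bin/sh
  # rough sketch: act once the thin-pool data usage crosses 90%
  USED=$(lvs --noheadings -o data_percent vg/pool | tr -d ' ')

  if [ "${USED%.*}" -ge 90 ]; then
      # pick whichever action matches your policy:
      lvextend -L +10G vg/pool          # grow the pool, or
      # lvremove -y vg/snapA            # drop a snapshot, or
      # fstrim /mnt/thinvol             # return unused filesystem blocks
  fi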

Ok, this is an answer I totally accept: if enabling per-LV used and reserved space is so difficult in the current thinp framework, don't do it.

Thanks for taking the time to explain (late at night ;))

All you need to accept is - you will kill them at 95% -
in your world with reserves it would already be reported as 100% full,
with a totally unknown size of reserves :)

Minor nitpicking: I am not speaking about "reserves" to be used when free space is low, but about "reserved space" - i.e. per-volume space which cannot be used by any other object.

One question: in a previous email you showed how a threshold can be set to deny new volume/snapshot creation. How can I do that? What LVM version do I need?
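
My guess at what was meant - please correct me if wrong - is the dmeventd
autoextend threshold in lvm.conf, since I believe recent LVM versions refuse
to create new thin volumes/snapshots in a pool whose data usage is already
above that threshold:

  # /etc/lvm/lvm.conf
  activation {
      # let dmeventd grow the pool by 20% once data usage crosses 70%;
      # setting the threshold to 100 disables autoextension
      thin_pool_autoextend_threshold = 70
      thin_pool_autoextend_percent = 20
  }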

Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


