On 21.9.2017 at 16:49, Xen wrote:
However you would need LVM2 to make sure that only origin volumes are marked
as critical.
The binary executed by 'dmeventd' - which can be a simple bash script called at the threshold level - can be tuned to whatever naming logic you need.
So far there is no plan to enforce any 'naming' or 'tagging' scheme, since from user-base feedback we can see numerous ways of dealing with large-volume naming strategies, often driven by external tools/databases. Enforcing e.g. a specific tag would require changes in those larger systems - which compares poorly with the rather simple tuning of a bash script...
I actually think that if I knew how to do multithreading in the kernel, I
could have the solution in place in a day...
If I were in the position to do any such work to begin with... :(.
But you are correct that the error target is almost the same thing.
It's the 'safest' option - it avoids any sort of further damage to the filesystem.
Note - a typical 'fs' may be remounted 'ro' at a reasonable threshold; the precise point depends on the workload. If you have 'PB'-sized arrays, leaving 5% of free space is already huge; if you work with GBs on fast SSDs, taking action at 70% might be better.
If a user hits a 'full pool' at any point 'during' a write, there is currently no other option than to stop using the FS. There are several ways to do that:
You can replace the device with an 'error' target.
You can replace the device with a 'delay' target that splits reads to the thin device and writes to an error device (both sketched below).
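A hedged sketch of both variants with 'dmsetup' (device names and sizes are placeholders):

    # Variant 1: swap the thin LV's table for an 'error' target so all I/O fails.
    SECTORS=$(blockdev --getsz /dev/vg/thinlv)
    dmsetup load vg-thinlv --table "0 $SECTORS error"
    dmsetup resume vg-thinlv

    # Variant 2 (sketch only): the 'delay' target takes an optional separate
    # write device, so a table such as
    #
    #   0 $SECTORS delay <read-device> 0 0 <error-device> 0 0
    #
    # keeps reads served from <read-device> (the original thin mapping) while
    # every write hits <error-device> (a device created with an 'error' table
    # as in variant 1).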
There is just no way back - the FS has to be checked (i.e. a merely full FS can be 'repaired' by deleting some files, but in the full thin-pool case the 'FS' needs to be made consistent first) - so focusing on solving the full pool is like preparing for a battle already lost. The focus should go into ensuring you never hit a full pool, and on the 'sad' occasion of a 100% full pool the worst-case scenario is not all that bad - surely way better than the experience with a 4-year-old kernel and old lvm2....
What you would have to implement is to TAKE the space FROM them to satisfy the write workload on your 'active' volume and respect prioritization...
Not necessary. Reserved space is a metric, not a real thing.
Reserved space by definition is a part of unallocated space.
How is this different from having a 1TB VG where you allocate only e.g. 90% of it to the thin-pool and keep 10% of free space for 'extension' of the thin-pool at your 'critical' moment?
I'm still not seeing any difference - except that you would need to invest a lot of energy into handling this 'reserved' space inside the kernel.
With current versions of lvm2 you can handle these tasks in user-space, and quite early, before you reach a 'real' out-of-space condition.
In other words - tuning 'thresholds' in a userspace 'bash' script gives you the very same effect as the very complex 'kernel' solution you are focusing on here.
It's just not very complex.
You thought I wanted a space consumption metric for all volumes including snapshots, and then individual attribution of all consumed space.
Maybe you could try the existing proposed solutions first and show the 'weak' points that they cannot solve?
We all agree we cannot store a 10G thin volume in a 1G thin-pool - so there will always be the case of hitting a 'full pool'.
Either you handle reserves via 'early' remount-ro, or you keep some 'spare' LV/space in the VG which you attach to the thin-pool 'when' needed...
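For the 'spare' LV variant, a hedged example (names and sizes invented):

    # Park some space in an ordinary LV so nothing else can allocate it...
    lvcreate -L 10G -n pool_reserve vg
    # ...and when the pool becomes critically full, release it and grow the pool.
    lvremove -y vg/pool_reserve
    lvextend -L +10G vg/pool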
Having such a 'great' level of free choice here is IMHO a big advantage, as it's always the 'admin' who decides how to use the available space in the best way - instead of keeping 'reserves' hidden somewhere in the kernel....
Regards
Zdenek
_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/