Re: Snapshot behavior on classic LVM vs ThinLVM

On 22.4.2017 at 09:14, Gionatan Danti wrote:
On 14-04-2017 10:24, Zdenek Kabelac wrote:
However, there are many different solutions for different problems -
and with the current script execution a user may build his own solution -
i.e. call 'dmsetup remove -f' for the running thin volumes when the pool
goes above some threshold setting, so that all open instances get an
'error' device (just like the old 'snapshot' invalidation worked) - this
way the user only kills the tasks using the thin volumes, but still keeps
the thin-pool usable for easy maintenance.
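
For example, a minimal sketch of such a script (the pool and volume names,
the 90% threshold and the <vg>-<lv> dm naming are assumptions for
illustration only, not something prescribed in this thread):

    #!/bin/sh
    # Sketch: error-out open thin volumes once the pool passes a threshold.
    THRESHOLD=90                        # example threshold, adjust as needed

    # data usage of the thin-pool as an integer percentage (pool name assumed)
    pct=$(lvs --noheadings -o data_percent vg/thinpool | awk '{print int($1)}')

    if [ "$pct" -ge "$THRESHOLD" ]; then
        # dm device names are usually <vg>-<lv>; these volume names are made up
        for dev in vg-thinvol1 vg-thinvol2; do
            # -f first loads an 'error' table, so any opener just gets I/O errors
            dmsetup remove -f "$dev"
        done
    fi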


This is a very good idea - I tried it and it indeed works.

However, it is not very clear to me what the best method is to monitor the allocated space and trigger an appropriate user script (I understand that versions > .169 have %checkpoint scripts, but current RHEL 7.3 is on .166).

I had the following ideas:
1) monitor the syslog for the "WARNING pool is dd.dd% full" message;
2) set a higher-than-0 low_water_mark and catch the dmesg/syslog "out-of-data" message;
3) register with device mapper to be notified.

Which do you think is the better approach? And if registering with device mapper, how can I accomplish that?
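
For idea 1, a crude watcher could be as simple as the following sketch (the
log file path, the matched message text and the reaction script are
assumptions, not something confirmed in this thread - check the exact
wording of the warning in your own syslog first):

    #!/bin/sh
    # Sketch: react to the "... pool is NN.NN% full" warning in syslog.
    tail -Fn0 /var/log/messages | while read -r line; do
        case "$line" in
            *"pool is "*"% full"*)
                # hypothetical user script, e.g. one that errors-out thin volumes
                /usr/local/sbin/thin-threshold-action.sh
                ;;
        esac
    done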

One more thing: from the device-mapper docs (and indeed as observed in my tests), the "pool is dd.dd% full" message is raised one single time: once it has been raised, even if the pool is emptied and refilled, no new message is generated. The only method I found to make the system re-generate the message is to deactivate and reactivate the thin pool itself.


ATM there is even a bug in 169 & 170 - dmeventd should generate the message
at 80, 85, 90, 95 and 100% - but it does so only once - it will be fixed soon...

~16G, so you can't even extend it, simply because it's
unsupported to use any bigger size

Just out of curiosity, in such a case, how does one proceed further to regain access to the data?

And now the most burning question ... ;)
Given that the thin-pool is monitored and never allowed to fill its data/metadata space, how do you consider its overall stability vs classical thick LVM?

Not seen a metadata error for quite a long time...
Since all the updates are CRC32-protected, it's quite solid.

Regards

Zdenek

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


