Re: Snapshot behavior on classic LVM vs ThinLVM

On 28-02-2018 22:43, Zdenek Kabelac wrote:
> By default, a full pool starts to 'error' all 'writes' in 60 seconds.

Based on what I remember, and what you wrote below, I think "all writes" in the context above means "writes to unallocated areas", right? Because even a full pool can still write to already-provisioned areas.
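
For anyone following along, the relevant knobs on my systems appear to be the dm-thin module timeout and lvm's per-pool policy (vg/pool below is a placeholder name, so adjust for your setup):

  # how long a full pool queues writes before erroring them
  # (seconds; 0 means queue forever)
  cat /sys/module/dm_thin_pool/parameters/no_space_timeout

  # switch a pool to erroring immediately instead of queueing
  lvchange --errorwhenfull y vg/pool

  # verify the current policy ('error' vs 'queue')
  lvs -o lv_name,lv_when_full vg/pool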

> The main problem is - after reboot - this 'missing/unprovisioned'
> space may provide some old data...

Can you elaborate on this point? Are you referring to current behavior or to a hypothetical "full read-only" mode?

> It still depends - there is always some sort of 'race' - unless you
> are willing to 'give up' too early to always be sure, considering
> there are technologies that may write many GB/s...

Sure - this was the "more-or-less" part in my sentence.
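
On the 'race' point: the usual mitigation I know of is letting dmeventd auto-extend the pool well before it fills. A minimal sketch of the lvm.conf settings (the threshold/percent values are just examples):

  # /etc/lvm/lvm.conf
  activation {
      # start extending once pool data usage crosses 70%
      thin_pool_autoextend_threshold = 70
      # each extension grows the pool by 20% of its current size
      thin_pool_autoextend_percent = 20
  }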

> You can use rootfs with thinp - it's very fast for testing e.g. upgrades
> and quickly reverting back - there just should be enough free space.

For testing, sure. However, for a production machine I would rarely put root on thinp. Maybe my reasoning is skewed by the fact that I mostly work with virtual machines, so tests/heavy upgrades are *not* done on the host itself, but rather on the guest VM.
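
For completeness, the upgrade-then-revert flow I understand you to mean would look roughly like this (vg/root is a made-up name; I believe merging thin snapshots needs a reasonably recent lvm2):

  # snapshot the root thinLV before the upgrade
  lvcreate -s vg/root -n root_preupgrade

  # ... run the upgrade; suppose it goes badly ...

  # revert by merging the snapshot back into the origin;
  # for an in-use root LV the merge is deferred until the
  # next activation, so a reboot completes it
  lvconvert --merge vg/root_preupgrade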


> Depends on the version of kernel and filesystem in use.
>
> Note the RHEL/CentOS kernel has lots of backports even when it looks quite old.

Sure, and this is one of the key reasons why I use RHEL/CentOS rather than Debian/Ubuntu.

>> Backups primarily sit on completely different storage.

> If you keep backups of data in the same pool:
>
> 1.)
> an error in a single chunk shared by all your backups + origin
> means total data loss - especially in the case where the filesystem
> uses 'BTrees' and some 'root node' is lost - it can easily render your
> origin + all backups completely useless.
>
> 2.)
> problems in thin-pool metadata can turn all your origin+backups into
> an unordered mess of chunks.

True, but this does not disprove the main point: snapshots are an invaluable tool for building your backup strategy. Obviously, if the thin-pool metadata volume has a problem, then all volumes (snapshots or not) become invalid. Do you have any recovery strategy in this case? For example, the root ZFS uberblock is written at *both* the device start and end. Does something similar exist for thinp?
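
To partially answer my own question: the repair path I am aware of goes through lvconvert --repair, which uses thin_check/thin_repair from thin-provisioning-tools under the hood. A sketch, with vg/pool again as a placeholder:

  # the pool (and its thin LVs) must be inactive for repair
  lvchange -an vg/pool

  # rebuild the metadata into fresh space; the old (damaged)
  # metadata is kept aside as vg/pool_meta0 for inspection
  lvconvert --repair vg/pool

  # reactivate and check the result
  lvchange -ay vg/pool
  lvs -a vg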


> There are also some ongoing ideas/projects - one of them was to have
> thinLVs with priority to be always fully provisioned - so such a thinLV
> could never be the one to have unprovisioned chunks....
> Another was better integration of filesystems with 'provisioned' volumes.

Interesting. Can you provide some more information on these projects?
Thanks.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti@assyoma.it - info@assyoma.it
GPG public key ID: FF5F32A8

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


