Re: Reserve space for specific thin logical volumes

Hi Zdenek,

> lvm2 is using the upstream community BZ, located here:
>
> https://bugzilla.redhat.com/enter_bug.cgi?product=LVM%20and%20device-mapper
>
> You can easily check RHBZ for all lvm2 BZs
> (it mixes RHEL/Fedora/Upstream).
>
> We usually want the upstream BZ to be linked with the community BZ,
> but sometimes it's driven through other channels - not ideal - but still
> easily searchable.

Yes, it's a place where problems are discussed. Thanks for the reminder :)

[...snip...]
> It should be the opposite case - unless something regressed recently...
> Easiest is to write some test for the lvm2 test suite.
>
> And eventually bisect which commit broke it...

Good to know! I will find time to test different versions on both openSUSE and Fedora.
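
If I can reproduce a regression, a minimal bisect sketch would look
something like this - the known-good tag and the test script are just
placeholders, not anything from this thread:

    git bisect start
    git bisect bad HEAD                # current tree reproduces the problem
    git bisect good v2_02_170          # placeholder: last version known to work
    git bisect run ./reproduce.sh      # hypothetical script: exit 0 = good, 1 = bad
    git bisect reset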


>> - pvmove is slow. I know it's not the fault of LVM. The time is almost all spent in DM (the IO dispatch/copy).

> Yeah - this is more or less a design issue inside the kernel - there are
> some workarounds - but since the primary motivation was not to overload the
> system, it's been left asleep a bit while focus shifted to the 'raid' target.

Aha, that's a good reason. Ideally, pvmove would have some option to control
the IO rate. I know it's not easy...

> And these pvmove fixes work with the old dm mirror target...
> (i.e. try using a bigger region_size for the mirror in lvm.conf (over 512K)
> and evaluate the performance - there is something wrong - but the core mirror
> developer is busy with raid features ATM....)

Thanks for the suggestion - I'll experiment with that.
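
If I'm reading lvm.conf right, the knob would be set like this - the value
4096 is just a guess to experiment with, and on newer lvm2 the option is
spelled raid_region_size (mirror_region_size is the older name):

    # /etc/lvm/lvm.conf - region size is in KiB, default is 512
    activation {
        mirror_region_size = 4096    # experiment: 4MiB regions instead of 512KiB
    }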



>> - snapshot cannot be used in a cluster environment. There is a use case: a user has a central backup system

> Well, snapshot CANNOT work in a cluster.
> What you can do is split the snapshot and attach it to a different volume,
> but exclusive access is simply required - there is no synchronization of
> changes like with cmirrord for the old mirror....
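
So for the backup use case, the flow would roughly be: activate exclusively
on one node, snapshot there, back up, then drop the snapshot. A sketch with
made-up VG/LV names, assuming clvmd-style locking:

    lvchange -aey vg00/data                  # exclusive activation on the backup node
    lvcreate -s -L 10G -n snap0 vg00/data    # snapshot is only valid on this node
    mount -o ro /dev/vg00/snap0 /mnt/backup
    # ... run the backup from /mnt/backup ...
    umount /mnt/backup
    lvremove -f vg00/snap0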

>> Got it! Advanced features like snapshot/thinp/dmcache have their own metadata.
>> The cost of making those metadata changes cluster-aware is painful.

> We do our best....

Like you guys have always been doing - thanks!

Regards,
Eric

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


