Re: Solving the "metadata too large for circular buffer" condition

  Taking another look, my estimate of required metadatasize
might be off by a factor of five or so.  128K might be sufficient
for 100 LVs, depending on fragmentation and such.

  Still, the history of computer science is largely a story
of problems caused by people thinking they'd allocated plenty
of space.  Today you can't fdisk a drive or array larger than
2TB because someone thought 32 bits would be plenty.  Probably
best to allocate 100 times as much as you think you'll ever
need - a few MB of disk space is cheap.
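
  For example, on a fresh PV you can reserve a bigger metadata
area up front.  A sketch, with /dev/sdX standing in for an empty
device (the pvs field names may vary between LVM2 versions):

    # create the PV with a 16MB metadata area instead of the default
    pvcreate --metadatasize 16M /dev/sdX

    # check what was actually allocated (pvcreate rounds the size up)
    pvs -o pv_name,pv_mda_size /dev/sdX
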
--
Ray Morris
support@bettercgi.com

Strongbox - The next generation in site security:
http://www.bettercgi.com/strongbox/

Throttlebox - Intelligent Bandwidth Control
http://www.bettercgi.com/throttlebox/

Strongbox / Throttlebox affiliate program:
http://www.bettercgi.com/affiliates/user/register.php


On 11/24/2010 02:28:11 PM, Andrew Gideon wrote:

We've just hit this error, and it is blocking any expansion of existing
volumes or creation of new ones.

We found:

http://readlist.com/lists/redhat.com/linux-lvm/0/2839.html

which appears to describe a solution.  I'm doing some reading, and I've
set up a test environment to try things out (before doing anything
risky to production).  But I'm hoping a post here can save some time
(and angst).

First: The referenced thread is two years old. I don't suppose there's a
better way to solve this problem today?

Assuming not...

I'm not sure how metadata is stored.  It seems that, by default, it is
duplicated on each PV.  I'm guessing this because one can't simply add
new PVs (with larger metadatasize values); one must also remove the old
metadata.  Is that right?
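
(In my test environment I've been checking where the copies live with
something like the following; pv_mda_count comes from 'pvs -o help',
so the exact fields may vary by version:

    # one row per PV, with the number of metadata areas on each
    pvs -o pv_name,vg_name,pv_mda_count

I'd expect a PV created with --metadatacopies 0 to show 0 there, if
I've understood correctly.)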

Are there any consequences to removing the metadata from most of the
physical volumes?  I've six, so I'd be adding a seventh and eighth (two
for redundancy, though the PVs are all built on RAID sets).

The "pvcreate --restorefile ... --uuid ... --metadatacopies 0" command
would be executed on the existing 6 physical volumes? No data would be lost? I want to be *very* sure of this (so I'm not trashing an existing
PV).
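
(For concreteness, the sequence I'm planning to rehearse in the test
environment looks roughly like this, based on my reading of the cited
thread.  Here myvg, /dev/sdb1, and the UUID are placeholders from my
test setup, and I'd want confirmation before trying this on anything
real:

    # 1. save the current VG metadata to a file
    vgcfgbackup -f /tmp/myvg.backup myvg

    # 2. note the UUID of the PV to be rewritten
    pvs -o pv_name,pv_uuid

    # 3. rewrite just the PV label, keeping the same UUID, with no
    #    metadata copy on this PV; the data area should be untouched
    pvcreate --restorefile /tmp/myvg.backup --uuid <uuid-from-step-2> \
        --metadatacopies 0 /dev/sdb1

    # 4. write the VG metadata back to the remaining metadata areas
    vgcfgrestore -f /tmp/myvg.backup myvg

Is that the right shape?)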

What is the default metadatasize?  Judging from lvm.conf, it may be
255.  255 megabytes?  Is there some way to guesstimate how much space
I should expect to be using?  I thought perhaps pvdata would help, but
this is apparently LVMv1 only.
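
(The one rough gauge I've found: dump the VG metadata as text and look
at the file size, since - as I understand it - the on-disk copy is
essentially the same text plus a small header:

    vgcfgbackup -f /tmp/myvg.txt myvg
    ls -l /tmp/myvg.txt

Is that a reasonable way to estimate usage against the buffer size?)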

[Unfortunately, 'lvm dumpconfig' isn't listing any data in the metadata
block.]

There's also mention in the cited thread of reducing fragmentation
using pvmove.  How would that work?  From what I can see, pvmove moves
segments.  But even if two segments are moved from scattered locations
to immediately adjacent locations, I see nothing which says that these
two segments would be combined into a single segment.  So I'm not clear
how fragmentation can be reduced.
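
(To make the question concrete: 'lvs --segments' shows one row per
segment, and pvmove can apparently be restricted to a single LV, so I'd
imagined something like the following, with mylv and the devices being
placeholders from my test setup:

    # show each LV's segments and where they sit
    lvs --segments

    # move only mylv's extents off one PV onto another
    pvmove -n mylv /dev/sdb1 /dev/sdc1

But I still don't see what would merge the relocated segments.)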

Finally, there was mention of changing lvm.conf - presumably,
metadata.dirs - to help make more space.  Once lvm.conf is changed, how
is that change made live?  Is a complete reboot required, or is there a
quicker way?
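
(My working assumption - which I'd love to have confirmed - is that the
tools re-read lvm.conf on each invocation, so the next command should
pick the change up; something like

    lvm dumpconfig

run after the edit ought to show whether the new setting is seen.)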

Thanks for any and all help...

	Andrew

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

