Re: Please help! Lost my LVM VG...

Mikkel L. Ellertson wrote:
> Strange - /var/lock/lvm is empty, and its date does not correspond

It's always empty unless an LVM tool is running (or you've disabled locking or are using some non-local locking mode for all your VGs).

Try running e.g. vgchange under a debugger. Set a breakpoint on vgchange_single and look in that directory when the process stops at that symbol. E.g.:

(gdb) break vgchange_single
Breakpoint 1 at 0x4228b0: file vgchange.c, line 512.
(gdb) r -ay tvg0
Starting program: /sbin/vgchange -ay tvg0
[Thread debugging using libthread_db enabled]
File descriptor 3 left open
File descriptor 4 left open
File descriptor 5 left open
[New Thread 0x7fc245280780 (LWP 2581)]

Breakpoint 1, vgchange_single (cmd=0x2646500, vg_name=0x265f3e0 "tvg0", vg=0x265fda0, consistent=1, handle=0x0) at vgchange.c:512
512	{
Missing separate debuginfos, use: debuginfo-install glibc.x86_64 libselinux.x86_64 libsepol.x86_64 ncurses.x86_64 readline.x86_64
(gdb)
[1]+  Stopped                 gdb /sbin/vgchange
# ls /var/lock/lvm/
V_tvg0
# ll -i /var/lock/lvm/V_tvg0
88047 -rwx------ 1 root root 0 2009-02-12 20:49 /var/lock/lvm/V_tvg0
# grep 88047 /proc/locks
1: FLOCK  ADVISORY  READ  2581 fd:01:88047 0 EOF

You can also confirm this by inspecting the code in lib/locking/file_locking.c in the LVM2 sources.
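That file-based scheme can be mimicked from the shell with util-linux's flock(1); the kernel reports the lock in /proc/locks just as shown above. (This is a sketch against a scratch temp file, not LVM's own lock path.)

```shell
# Take a shared advisory lock on a scratch file, the same LOCK_SH
# style lock LVM holds on /var/lock/lvm/V_<vgname> while it runs.
lockfile=$(mktemp)
(
    flock -s 9                       # shared lock on fd 9 (LOCK_SH)
    inode=$(stat -c %i "$lockfile")  # inode number, as shown in /proc/locks
    grep ":$inode " /proc/locks      # e.g. "FLOCK  ADVISORY  READ ..."
) 9>"$lockfile"
rm -f "$lockfile"
```

Once the subshell exits, fd 9 is closed, the lock is released, and the entry disappears from /proc/locks, which is why /var/lock/lvm looks empty whenever no tool is running.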

> to the last boot time. The date on the directory, as well as
> /etc/lvm/cache/.cache match up to when LVM was last updated.

The cache file is a list of LVM-capable devices that pass the filters defined in lvm.conf: nothing more. It's simply an optimisation to avoid needless scanning of entries in /dev.

Just take a look at the file:

    /etc/lvm/cache/.cache
    # This file is automatically maintained by lvm.

    persistent_filter_cache {
            valid_devices=[
                    "/dev/dm-6",
                    "/dev/ram11"
                    ...
            ]
    }

Nothing stored in here about activation. See also the comments in lvm.conf:

    # The results of the filtering are cached on disk to avoid
    # rescanning dud devices (which can take a very long time).
    # By default this cache is stored in the /etc/lvm/cache directory
    # in a file called '.cache'.
    # It is safe to delete the contents: the tools regenerate it.
    # (The old setting 'cache' is still respected if neither of
    # these new ones is present.)
    cache_dir = "/etc/lvm/cache"
    cache_file_prefix = ""

It's *always* safe to delete the file since it can always be regenerated by the tools - this would not be true if it stored "activation" flags for VGs (you'd fail to activate them on a reboot).
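To underline that point, here is a throwaway sketch that pulls the device list back out of such a file. (The sample content is written inline for illustration; nothing here is read from a live system.)

```shell
# Write a sample persistent filter cache, then extract its device list.
cache=$(mktemp)
cat > "$cache" <<'EOF'
persistent_filter_cache {
        valid_devices=[
                "/dev/dm-6",
                "/dev/ram11"
        ]
}
EOF
grep -o '"/dev/[^"]*"' "$cache" | tr -d '"'   # prints the device paths, one per line
rm -f "$cache"
```

Everything the file holds is recoverable by rescanning /dev, which is exactly why deleting it is harmless.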

> On the other hand, I may be wrong about a file being updated. It
> looks like there may be a bit set on the LV itself. (I am going to

No. There's nothing in the LVM metadata for controlling this (unless you're thinking of the exported flag, which doesn't come into play here since it must be set/cleared by the administrator) - take a look at the metadata files in /etc/lvm/{archive,backup}.

> have to refresh my memory.) After further reading, the OP could have
> run the command without remounting / rw. He could have run:
>
> vgchange -ay --ignorelockingfailure VolTerabytes00

Why bother when you can remount the fs and have working locking?

The --ignorelockingfailure flag is only intended to allow activation of VGs during boot time, e.g. in a clustered environment when the daemons required to support the cluster infrastructure are not yet running.

See the recent discussion of the proposed new implementation of this option on lvm-devel and the discussion around whether the configuration file equivalent should be renamed as "boottime_locking".

> In any case, it would be interesting to have the OP reboot, and see
> if the VG is active on reboot.

Read the thread :)

The OP has now rebooted several times and the VG has been correctly activated each time. I am guessing there was a timing issue and the underlying PVs were not present when the vgchange commands in rc.sysinit ran, but without logs it's just speculation.

Regards,
Bryn.

--
fedora-list mailing list
fedora-list@xxxxxxxxxx
To unsubscribe: https://www.redhat.com/mailman/listinfo/fedora-list
Guidelines: http://fedoraproject.org/wiki/Communicate/MailingListGuidelines