Every now and then LVM is not recognized - LVM2 on RAID10

Dear LVM-Users,

I ran into a very interesting situation today: our LVM metadata is only recognized some of the time. LVM had been running fine for weeks, but as of today, running pvscan 10 times in a row gives me roughly 4 positive and 6 negative results; whether LVM is recognized appears to be purely random.
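
For reference, here is a minimal loop to count how often the label is seen (it simply greps the pvscan output for the PV, so it does not rely on pvscan's exit code):

ok=0
for i in $(seq 1 10); do
    pvscan 2>/dev/null | grep -q '/dev/md2' && ok=$((ok + 1))
done
echo "label detected in $ok out of 10 runs"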

* WORKING STATE:
server$ pvscan
  PV /dev/md2   VG mainvg   lvm2 [909.59 GB / 299.59 GB free]
  Total: 1 [909.59 GB] / in use: 1 [909.59 GB] / in no VG: 0 [0   ]

* NOT-WORKING STATE:
server$ pvscan
  No matching physical volumes found

Note: the same behaviour occurs with vgscan and lvscan.

Our Setup:
* RAID-10 /dev/md2 on /dev/sd[a-d]5
cat /proc/mdstat
Personalities : [raid1] [raid10]
md2 : active raid10 sdb5[1] sda5[0] sdd5[3] sdc5[2]
      953778688 blocks 64K chunks 2 far-copies [4/4] [UUUU]

* LVM2 on top of /dev/md2 (no separate lvm partition)
The LVM metadata backup is present and is identical to the backups we made months ago.

I am attaching the LVM2 metadata file to this message (VG name: mainvg).
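
If it is useful for comparison, the current metadata can be dumped and diffed against the existing backup like this (assuming the backup sits in the default location /etc/lvm/backup/mainvg; this only works in the moments when the VG is actually detected, and the description/creation_time header lines will of course always differ):

vgcfgbackup -f /tmp/mainvg.now mainvg
diff -u /etc/lvm/backup/mainvg /tmp/mainvg.now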

WORKING STATE:
server$ pvscan -vv
      Setting global/locking_type to 1
      File-based locking selected.
      Setting global/locking_dir to /var/lock/lvm
      Locking /var/lock/lvm/P_global WB
    Wiping cache of LVM-capable devices
      /dev/sndstat: stat failed: No such file or directory
    Wiping internal VG cache
    Walking through all physical volumes
      /dev/md2: size is 1907557376 sectors
      /dev/md2: lvm2 label detected
  PV /dev/md2   VG mainvg   lvm2 [909.59 GB / 299.59 GB free]
  Total: 1 [909.59 GB] / in use: 1 [909.59 GB] / in no VG: 0 [0   ]
      Unlocking /var/lock/lvm/P_global

NOT-WORKING STATE:
server$ pvscan -vv
      Setting global/locking_type to 1
      File-based locking selected.
      Setting global/locking_dir to /var/lock/lvm
      Locking /var/lock/lvm/P_global WB
    Wiping cache of LVM-capable devices
      /dev/sndstat: stat failed: No such file or directory
    Wiping internal VG cache
    Walking through all physical volumes
      /dev/md2: size is 1907557376 sectors
      /dev/md2: No label detected
  No matching physical volumes found
      Unlocking /var/lock/lvm/P_global


Any hints?

Where exactly does LVM look when it reports "/dev/md2: No label detected" vs. "/dev/md2: lvm2 label detected"?
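
As far as I understand, the LVM2 label lives in one of the first four 512-byte sectors of the PV (sector 1 by default) and carries the string "LABELONE", so its presence on disk can be checked independently of pvscan:

dd if=/dev/md2 bs=512 count=4 2>/dev/null | hexdump -C | grep LABELONE

If that string shows up on every run while pvscan still flaps, the data on disk would seem to be fine and the problem is on the reading side.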

Could it be that the RAID-10 is not synced correctly? mdstat reports all disks as up, though.
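
To rule out a sync problem, md can be asked to verify the array (assuming the md sysfs interface of this kernel supports it); mismatch_cnt should stay at 0 on a healthy array:

echo check > /sys/block/md2/md/sync_action
cat /proc/mdstat                          # shows the check progressing
cat /sys/block/md2/md/mismatch_cnt        # read once the check has finished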

Thanks for your help.
Mat
# Generated by LVM2 version 2.02.38 (2008-06-11): Wed Jul 23 13:16:36 2008

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'vgcfgbackup -vvv'"

creation_host = "fileserver.exxs.net"	# Linux fileserver.exxs.net 2.6.18-6-xen-686 #1 SMP Sun Feb 10 22:43:13 UTC 2008 i686
creation_time = 1216811796	# Wed Jul 23 13:16:36 2008

mainvg {
	id = "mNtCkm-qYPB-RzWY-Kfke-asbD-xMDD-7ABoDq"
	seqno = 15
	status = ["RESIZEABLE", "READ", "WRITE"]
	extent_size = 8192		# 4 Megabytes
	max_lv = 0
	max_pv = 0

	physical_volumes {

		pv0 {
			id = "nKql7U-MLbt-5Grp-Hrp7-DOwQ-E2fd-R0n4aD"
			device = "/dev/md2"	# Hint only

			status = ["ALLOCATABLE"]
			dev_size = 1907557376	# 909.594 Gigabytes
			pe_start = 384
			pe_count = 232856	# 909.594 Gigabytes
		}
	}

	logical_volumes {

		yangc-root {
			id = "oSKqz7-2zJK-FXZ3-r79j-mFCS-2oaK-CIIcN7"
			status = ["READ", "WRITE", "VISIBLE"]
			read_ahead = 3072
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 2560	# 10 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 51200
				]
			}
		}

		yangc-imap {
			id = "maitoY-2pNZ-eJt2-YgbA-QXlh-i7WC-xOSwp2"
			status = ["READ", "WRITE", "VISIBLE"]
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 25600	# 100 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 53760
				]
			}
		}

		yangc-home {
			id = "hI0d5Q-GFRV-D3H1-0IiQ-IU8m-bnXw-LE5jdH"
			status = ["READ", "WRITE", "VISIBLE"]
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 20480	# 80 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 79360
				]
			}
		}

		test.exxs.net-disk {
			id = "CjHEEF-viBE-UHvg-qIhW-Muk1-9m53-2tMH00"
			status = ["READ", "WRITE", "VISIBLE"]
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 5120	# 20 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 99840
				]
			}
		}

		yangc-backup {
			id = "WqcTef-vvW1-2tfa-HbUQ-NBjN-zai2-l3dd2F"
			status = ["READ", "WRITE", "VISIBLE"]
			segment_count = 1

			segment1 {
				start_extent = 0
				extent_count = 102400	# 400 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 104960
				]
			}
		}
	}
}
_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
