Re: clvmd problems with centos 6.3 or normal clvmd behaviour?

On Thu, 2 Aug 2012 07:07:25 -0600 Corey Kovacs wrote:
> I might be reading this wrong but just in case, I thought I'd point this out.
>
[snip]
> A single node can maintain quorum since 2+3>(9/2).
> In a split brain condition where a single node cannot talk to the other nodes, this could be disastrous.

Thanks for your input, Corey.
As I said before, at the moment I'll have only one node on a site, so
I'm also tweaking the config to be able to work with one node alone.
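By "tweaking" I mean something along these lines in cluster.conf (hypothetical values for illustration, not my actual config): with qdiskd giving the quorum disk enough votes, a lone node plus the master qdisk stays quorate.

```xml
<!-- Hypothetical sketch: one node (1 vote) plus a quorum disk (2 votes).
     Quorum is floor(expected_votes/2)+1 = 2, so node + qdisk = 3 votes
     keeps a single surviving node quorate. -->
<cman expected_votes="3"/>
<quorumd label="qdisk1" votes="2" interval="1" tko="10"/>
```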

Anyway, I was referring to this passage in the manual, which also applies
to configurations with more than two nodes (the example pertains to a 13-node cluster):

"
A cluster must maintain quorum to prevent split-brain issues. If
quorum was not enforced, a communication error on that same
thirteen-node cluster may cause a situation where six nodes are
operating on the shared storage, while another six nodes are also
operating on it, independently. Because of the communication error,
the two partial-clusters would overwrite areas of the disk and corrupt
the file system. With quorum rules enforced, only one of the partial
clusters can use the shared storage, thus protecting data integrity.
Quorum doesn't prevent split-brain situations, but it does decide who
is dominant and allowed to function in the cluster. Should split-brain
occur, quorum prevents more than one cluster group from doing
anything.
"

That said, in my case the problem is not with quorum, which is gained
when the quorum disk becomes master, but with clvmd freezing without
showing any error.
As suggested, I set up logging for both the cluster and lvm.

I also configured lvmetad

The diff between the previous lvm.conf and the current one used for further tests is this:
# diff -u lvm.conf lvm.conf.pre020812
--- lvm.conf	2012-08-02 14:48:31.172565731 +0200
+++ lvm.conf.pre020812	2012-08-02 01:33:55.878511113 +0200
@@ -232,8 +232,7 @@

     # Controls the messages sent to stdout or stderr.
     # There are three levels of verbosity, 3 being the most verbose.
-    #verbose = 0
-    verbose = 2
+    verbose = 0

     # Should we send log messages through syslog?
     # 1 is yes; 0 is no.
@@ -242,7 +241,6 @@
     # Should we log error and debug messages to a file?
     # By default there is no log file.
     #file = "/var/log/lvm2.log"
-    file = "/var/log/lvm2.log"

     # Should we overwrite the log file each time the program is run?
     # By default we append.
@@ -251,8 +249,7 @@
     # What level of log messages should we send to the log file and/or syslog?
     # There are 6 syslog-like log levels currently in use - 2 to 7 inclusive.
     # 7 is the most verbose (LOG_DEBUG).
-    #level = 0
-    level = 4
+    level = 0

     # Format of output messages
     # Whether or not (1 or 0) to indent messages according to their severity
@@ -422,8 +419,7 @@
     # Check whether CRC is matching when parsed VG is used multiple times.
     # This is useful to catch unexpected internal cached volume group
     # structure modification. Please only enable for debugging.
-    #detect_internal_vg_cache_corruption = 0
-    detect_internal_vg_cache_corruption = 1
+    detect_internal_vg_cache_corruption = 0

     # If set to 1, no operations that change on-disk metadata will be permitted.
     # Additionally, read-only commands that encounter metadata in need of repair
@@ -483,8 +479,7 @@
     # libdevmapper.  Useful for debugging problems with activation.
     # Some of the checks may be expensive, so it's best to use this
     # only when there seems to be a problem.
-    #checks = 0
-    checks = 1
+    checks = 0

     # Set to 0 to disable udev synchronisation (if compiled into the binaries).
     # Processes will not wait for notification from udev.

cluster.conf changes:
# diff cluster.conf cluster.conf.51
2,6c2
< <cluster config_version="52" name="clrhev">
< 	<dlm log_debug="1" plock_debug="1"/>
< 	<logging>
< 		<logging_daemon name="qdiskd" debug="on"/>
< 	</logging>
---
> <cluster config_version="51" name="clrhev">

I'm attaching two files:
lvm2.log, with a marker separating before and after the clvmd start command was issued
clvmd start output.txt, which is the output during the "service clvmd start" command

To be able to capture this, I booted into single user mode and then
started the services one at a time, in the order of

/etc/rc.d/rc3.d/S*

but starting the ssh daemon early, so that I could log in remotely.
In fact, after clvmd freezes I can only run a couple of sync commands and
power off...

Unless I'm missing something stupid, I can also file a bug in the
CentOS bug tracker, and then eventually someone can report it upstream
if it's reproducible.

Gianluca
[root@crhev3 ~]# service clvmd start
Starting clvmd: 
Activating VG(s):     Logging initialised at Thu Aug  2 15:28:53 2012
      Setting global/umask to 63
    Set umask from 0022 to 0077
      Setting devices/dir to /dev
      Setting global/proc to /proc
      Setting global/activation to 1
      global/suffix not found in config: defaulting to 1
      Setting global/units to h
      Setting activation/readahead to auto
      Setting activation/udev_rules to 1
      Setting activation/udev_sync to 1
      Setting activation/retry_deactivation to 1
      Setting activation/checks to 1
      Setting activation/verify_udev_operations to 0
      Getting driver version
      Setting activation/use_linear_target to 1
      Setting activation/missing_stripe_filler to error
      Setting global/si_unit_consistency to 1
      Setting global/metadata_read_only to 0
      Setting devices/pv_min_size to 2048
      global/detect_internal_vg_cache_corruption() not found in config: defaulting to 0
      Setting global/use_lvmetad to 1
      Setting devices/disable_after_error_count to 0
      Setting devices/ignore_suspended_devices to 0
      Setting devices/cache_dir to /etc/lvm/cache
      Setting devices/write_cache_state to 1
      Setting activation/reserved_stack to 64
      Setting activation/reserved_memory to 8192
      Setting activation/process_priority to -18
      Initialised format: lvm1
      Initialised format: pool
      Initialised format: lvm2
      global/format not found in config: defaulting to lvm2
      Setting response to OK
      Setting protocol to lvmetad
      Setting version to 1
      Initialised segtype: striped
      Initialised segtype: zero
      Initialised segtype: error
      Initialised segtype: free
      Setting dmeventd/snapshot_library to libdevmapper-event-lvm2snapshot.so
      Initialised segtype: snapshot
      Setting dmeventd/mirror_library to libdevmapper-event-lvm2mirror.so
      Initialised segtype: mirror
      dmeventd/raid_library not found in config: defaulting to libdevmapper-event-lvm2raid.so
      Initialised segtype: raid1
      dmeventd/raid_library not found in config: defaulting to libdevmapper-event-lvm2raid.so
      Initialised segtype: raid4
      dmeventd/raid_library not found in config: defaulting to libdevmapper-event-lvm2raid.so
      Initialised segtype: raid5
      dmeventd/raid_library not found in config: defaulting to libdevmapper-event-lvm2raid.so
      Initialised segtype: raid5_la
      dmeventd/raid_library not found in config: defaulting to libdevmapper-event-lvm2raid.so
      Initialised segtype: raid5_ra
      dmeventd/raid_library not found in config: defaulting to libdevmapper-event-lvm2raid.so
      Initialised segtype: raid5_ls
      dmeventd/raid_library not found in config: defaulting to libdevmapper-event-lvm2raid.so
      Initialised segtype: raid5_rs
      dmeventd/raid_library not found in config: defaulting to libdevmapper-event-lvm2raid.so
      Initialised segtype: raid6
      dmeventd/raid_library not found in config: defaulting to libdevmapper-event-lvm2raid.so
      Initialised segtype: raid6_zr
      dmeventd/raid_library not found in config: defaulting to libdevmapper-event-lvm2raid.so
      Initialised segtype: raid6_nr
      dmeventd/raid_library not found in config: defaulting to libdevmapper-event-lvm2raid.so
      Initialised segtype: raid6_nc
      Setting dmeventd/thin_library to libdevmapper-event-lvm2thin.so
      Initialised segtype: thin-pool
      Initialised segtype: thin
      Setting backup/retain_days to 30
      Setting backup/retain_min to 10
      Setting backup/archive_dir to /etc/lvm/archive
      Setting backup/backup_dir to /etc/lvm/backup
      global/fallback_to_lvm1 not found in config: defaulting to 1
      Setting global/locking_type to 3
      Setting global/wait_for_locks to 1
      Cluster locking selected.
    Finding all volume groups
      Setting response to OK
      Setting response to OK
      Setting name to VG_VIRT01
      Setting metadata/format to lvm2
      Setting id to wFFcfY-NCSG-yKrx-Hz8v-tyzD-nTWK-5EqkFz
      Setting format to lvm2
      Setting device to 64768
      Setting dev_size to 209715200
      Setting label_sector to 1
      /dev/mapper/mpatha: size is 209715200 sectors
      /dev/mapper/mpatha: size is 209715200 sectors
      Setting size to 192512
      Setting start to 4096
      Setting ignore to 0
      Setting id to rmEMxt-q8ui-pR8a-NuGa-M15v-DKG9-xKDr45
      Setting format to lvm2
      Setting device to 64771
      Setting dev_size to 314572800
      Setting label_sector to 1
      /dev/mapper/mpathf: size is 314572800 sectors
      /dev/mapper/mpathf: size is 314572800 sectors
      Setting size to 1044480
      Setting start to 4096
      Setting ignore to 0
      Setting response to OK
      Setting name to VG_VIRT03
      Setting metadata/format to lvm2
      Setting id to whFL70-Af2W-oN8B-6rCI-w1Hm-XqL0-tuHNR5
      Setting format to lvm2
      Setting device to 64769
      Setting dev_size to 209715200
      Setting label_sector to 1
      /dev/mapper/mpathc: size is 209715200 sectors
      /dev/mapper/mpathc: size is 209715200 sectors
      Setting size to 192512
      Setting start to 4096
      Setting ignore to 0
      Setting id to s9Yi3G-6jls-30ds-BzdK-djN8-0yHT-VuSI8R
      Setting format to lvm2
      Setting device to 64773
      Setting dev_size to 104857600
      Setting label_sector to 1
      /dev/mapper/mpathe: size is 104857600 sectors
      /dev/mapper/mpathe: size is 104857600 sectors
      Setting size to 1044480
      Setting start to 4096
      Setting ignore to 0
      Setting response to OK
      Setting name to VG_VIRT04
      Setting metadata/format to lvm2
      Setting id to 8crlfj-nuzY-5vaY-QtAb-3SkA-yr9T-EM5Hi2
      Setting format to lvm2
      Setting device to 64774
      Setting dev_size to 209715200
      Setting label_sector to 1
      /dev/mapper/mpathd: size is 209715200 sectors
      /dev/mapper/mpathd: size is 209715200 sectors
      Setting size to 1044480
      Setting start to 4096
      Setting ignore to 0
      Setting response to OK
      Setting name to VolGroup00
      Setting metadata/format to lvm2
      Setting id to dStWwU-rc1a-6pmb-3WJG-2qYr-hgDC-SHm0Ve
      Setting format to lvm2
      Setting device to 26628
      Setting dev_size to 100335616
      Setting label_sector to 1
      /dev/cciss/c0d0p4: size is 100335616 sectors
      /dev/cciss/c0d0p4: size is 100335616 sectors
      Setting size to 1044480
      Setting start to 4096
      Setting ignore to 0
      Setting response to OK
      Setting name to VG_VIRT02
      Setting metadata/format to lvm2
      Setting id to 9375Nc-Lq2y-qkD1-5PLF-cuIO-Z3uN-YW9asm
      Setting format to lvm2
      Setting device to 64770
      Setting dev_size to 209715200
      Setting label_sector to 1
      /dev/mapper/mpathb: size is 209715200 sectors
      /dev/mapper/mpathb: size is 209715200 sectors
      Setting size to 192512
      Setting start to 4096
      Setting ignore to 0
    Finding volume group "VG_VIRT02"
      Locking VG V_VG_VIRT02 CR (VG) (0x1)
      Setting response to OK
      Setting name to VG_VIRT02
      Setting metadata/format to lvm2
      Setting id to 9375Nc-Lq2y-qkD1-5PLF-cuIO-Z3uN-YW9asm
      Setting format to lvm2
      Setting device to 64770
      Setting dev_size to 209715200
      Setting label_sector to 1
      /dev/mapper/mpathb: size is 209715200 sectors
      /dev/mapper/mpathb: size is 209715200 sectors
      Setting size to 192512
      Setting start to 4096
      Setting ignore to 0
      VG_VIRT02/vorastud1 is not active
      Locking LV UYtfUWONARtntMxo16uu464Vy7i6eZo7RPMB1JMB1F20VTBEiKlKtP3ZfbRn9xhn CR (LV|NONBLOCK|CLUSTER|LOCAL) (0xd9)
      VG_VIRT02/ltsp is not active
      Locking LV UYtfUWONARtntMxo16uu464Vy7i6eZo7sW1dCeAZwUfHbpQ55ZZ82Dr2vl7tA009 CR (LV|NONBLOCK|CLUSTER|LOCAL) (0xd9)
      VG_VIRT02/droratest_aaadata is not active
      Locking LV UYtfUWONARtntMxo16uu464Vy7i6eZo72WNqFXJ2i5UKsiRLqdvdQz2TiR0gG24E CR (LV|NONBLOCK|CLUSTER|LOCAL) (0xd9)
    Activated 3 logical volumes in volume group VG_VIRT02
  3 logical volume(s) in volume group "VG_VIRT02" now active
      Requesting sync names.
      Locking VG V_VG_VIRT02 UN (VG) (0x6)

Attachment: lvm2.log
Description: Binary data

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
