Re: cLVM: LVM commands take several minutes to complete

Vladislav Bogdanov <bubble@xxxxxxxxxxxxx> writes:

> expected_votes is by default inherited from nodelist, so you don't need
> it. last_man_standing is better to remove; it's not needed either.
>
> You can try to run clvmd off-cluster with debug to console and run the
> LVM tools also with debug to get a picture. Please ping me after the
> holidays if you need help on how to do that.

Thanks.

The cluster is in production, so I can't do many things.
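
For the archives, I understand the off-cluster debug run you describe to
be roughly the following; the flag meanings are taken from the clvmd and
LVM man pages, and the exact invocation is my guess:

    # stop the cluster-managed clvmd first, then run it by hand with
    # debug output to stderr (-d1 implies staying in the foreground)
    clvmd -d1

    # in another shell, run an LVM command with full verbosity
    vgdisplay -vvvv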

I ran “clvmd -S” (which tells the running clvmd to exit and restart) and it made the LVM commands work again.

Then I could extend my VG and LV and grow my GFS2 filesystem.
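
For reference, the grow sequence was essentially the standard one; the
device, VG/LV names and size below are placeholders, not my actual ones:

    vgextend vg0 /dev/mapper/mpathb       # add the new PV to the VG
    lvextend -L +500G /dev/vg0/gfs2-lv    # extend the logical volume
    gfs2_grow /srv/gfs2                   # grow GFS2 on the mounted filesystem
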
But several minutes later, I had a kernel panic:

Sep 16 15:46:28 nebula3 kernel: [442791.286867] BUG: unable to handle kernel NULL pointer dereference at 0000000000000028
Sep 16 15:46:28 nebula3 kernel: [442791.293096] IP: [<ffffffffa05d613c>] gfs2_rbm_find+0xac/0x530 [gfs2]
Sep 16 15:46:28 nebula3 kernel: [442791.296507] PGD 0
Sep 16 15:46:28 nebula3 kernel: [442791.299815] Oops: 0000 [#1] SMP
Sep 16 15:46:28 nebula3 kernel: [442791.303098] Modules linked in: vhost_net vhost macvtap macvlan gfs2 dlm sctp configfs ip6table_filter ip6_tables iptable_filter ip_tables x_tables openvswitch gre vxlan ip_tunnel nfsd auth_rpcgss nfs_acl nfs lockd sunrpc fscache bonding scsi_dh_emc dm_round_robin ipmi_devintf gpio_ich dcdbas x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel aes_x86_64 lrw gf128mul glue_helper joydev dm_multipath ablk_helper scsi_dh cryptd sb_edac edac_core shpchp mei_me mei lpc_ich mac_hid ipmi_si acpi_power_meter wmi iTCO_wdt iTCO_vendor_support hid_generic usbhid hid ses enclosure qla2xxx ahci libahci scsi_transport_fc bnx2x tg3 scsi_tgt ptp pps_core megaraid_sas mdio libcrc32c
Sep 16 15:46:28 nebula3 kernel: [442791.343434] CPU: 14 PID: 27504 Comm: qemu-system-x86 Tainted: G        W     3.13.0-63-generic #103-Ubuntu
Sep 16 15:46:28 nebula3 kernel: [442791.352841] Hardware name: Dell Inc. PowerEdge M620/0T36VK, BIOS 2.2.7 01/21/2014
Sep 16 15:46:28 nebula3 kernel: [442791.362631] task: ffff880035ddc800 ti: ffff8801303ce000 task.ti: ffff8801303ce000
Sep 16 15:46:28 nebula3 kernel: [442791.372920] RIP: 0010:[<ffffffffa05d613c>]  [<ffffffffa05d613c>] gfs2_rbm_find+0xac/0x530 [gfs2]
Sep 16 15:46:28 nebula3 kernel: [442791.383625] RSP: 0018:ffff8801303cfae8  EFLAGS: 00010246
Sep 16 15:46:28 nebula3 kernel: [442791.389101] RAX: 0000000000000080 RBX: ffff8801303cfbd0 RCX: ffff880bf4def6e8
Sep 16 15:46:28 nebula3 kernel: [442791.400133] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000005
Sep 16 15:46:28 nebula3 kernel: [442791.411734] RBP: ffff8801303cfb60 R08: 0000000000000001 R09: ffff8800b8d5b850
Sep 16 15:46:28 nebula3 kernel: [442791.423628] R10: 0000000000020328 R11: 000000000005e0c7 R12: 0000000000000000
Sep 16 15:46:28 nebula3 kernel: [442791.435582] R13: ffffffffffffffff R14: ffff8800b8d5b850 R15: ffff880bdf594000
Sep 16 15:46:28 nebula3 kernel: [442791.447748] FS:  00007f22157fa700(0000) GS:ffff880c0fae0000(0000) knlGS:0000000000000000
Sep 16 15:46:28 nebula3 kernel: [442791.460205] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Sep 16 15:46:28 nebula3 kernel: [442791.466482] CR2: 0000000000000028 CR3: 00000001698f4000 CR4: 00000000001427e0
Sep 16 15:46:28 nebula3 kernel: [442791.478799] Stack:
Sep 16 15:46:28 nebula3 kernel: [442791.484741]  ffff88017966dbf8 ffff8801303cfb10 ffffffffa05c098e 000000002934a8b0
Sep 16 15:46:28 nebula3 kernel: [442791.496710]  ffff88112934a860 0000000000000005 0000000000012190 ffff880bed95b000
Sep 16 15:46:28 nebula3 kernel: [442791.508660]  0000000000000000 ffff88112934a890 0000000023dd91c3 0000000000000000
Sep 16 15:46:28 nebula3 kernel: [442791.520688] Call Trace:
Sep 16 15:46:28 nebula3 kernel: [442791.526551]  [<ffffffffa05c098e>] ? gfs2_glock_wait+0x3e/0x80 [gfs2]
Sep 16 15:46:28 nebula3 kernel: [442791.532464]  [<ffffffffa05d8089>] gfs2_inplace_reserve+0x459/0x9e0 [gfs2]
Sep 16 15:46:28 nebula3 kernel: [442791.538327]  [<ffffffffa05c8a2c>] gfs2_write_begin+0x20c/0x470 [gfs2]
Sep 16 15:46:28 nebula3 kernel: [442791.544070]  [<ffffffff8114f888>] generic_file_buffered_write+0xf8/0x250
Sep 16 15:46:28 nebula3 kernel: [442791.549757]  [<ffffffff81150f51>] __generic_file_aio_write+0x1c1/0x3d0
Sep 16 15:46:28 nebula3 kernel: [442791.555366]  [<ffffffff811511b8>] generic_file_aio_write+0x58/0xa0
Sep 16 15:46:28 nebula3 kernel: [442791.560909]  [<ffffffffa05ca3d9>] gfs2_file_aio_write+0xb9/0x150 [gfs2]
Sep 16 15:46:28 nebula3 kernel: [442791.566473]  [<ffffffff8108e720>] ? hrtimer_get_res+0x50/0x50
Sep 16 15:46:28 nebula3 kernel: [442791.571918]  [<ffffffff811bdc9a>] do_sync_write+0x5a/0x90
Sep 16 15:46:28 nebula3 kernel: [442791.577235]  [<ffffffff811be424>] vfs_write+0xb4/0x1f0
Sep 16 15:46:28 nebula3 kernel: [442791.582459]  [<ffffffff811befd2>] SyS_pwrite64+0x72/0xb0
Sep 16 15:46:28 nebula3 kernel: [442791.587547]  [<ffffffff8173489d>] system_call_fastpath+0x1a/0x1f
Sep 16 15:46:28 nebula3 kernel: [442791.592576] Code: 34 0f 8d 1d 03 00 00 48 8b 0b 8b 53 0c 48 63 c2 48 8d 34 80 48 8b 41 58 4c 8d 3c f0 49 8b 47 10 a8 02 75 ab 49 8b 17 41 8b 47 18 <48> 03 42 28 48 8b 12 83 e2 01 0f 84 50 04 00 00 80 7c 24 33 02
Sep 16 15:46:28 nebula3 kernel: [442791.607762] RIP  [<ffffffffa05d613c>] gfs2_rbm_find+0xac/0x530 [gfs2]
Sep 16 15:46:28 nebula3 kernel: [442791.612680]  RSP <ffff8801303cfae8>
Sep 16 15:46:28 nebula3 kernel: [442791.617436] CR2: 0000000000000028
Sep 16 15:46:28 nebula3 kernel: [442791.628730] ---[ end trace 0f6f4a48b58f5fb0 ]---


After rebooting the hardware and starting the Pacemaker stack, everything is running again.

I only lost some VMs that were in transient states.

Regards.

-- 
Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6  2AAD CC1E 9E5B 7A6F E2DF
