Re: When corosync-1.4.1-3.el6 will be released for rhel6.x?

On 09/26/2011 01:34 PM, Jan Friesse wrote:
> Please take your time to read how the RHEL release process works, but
> briefly: yes, it's called EUS (Z-stream), and its primary purpose is for
> really hard/security bugs. To be honest, 709758 may be an annoying bug,
> but it doesn't fit the Z-stream very well, especially because it can be
> seen only in very special conditions/broken environments.

But the problem described in 709758 appears in my environment: one RHEL6.1 kvm host with two (only two, each with a single CPU) rhel6.1 guests running RHCS ...

See this:

a) running top on a rhel6.1 guest:

top - 13:50:02 up  4:25,  4 users,  load average: 5.91, 5.99, 6.71
Tasks: 132 total,   5 running, 127 sleeping,   0 stopped,   0 zombie
Cpu(s): 96.7%us, 3.3%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem:   1289092k total,   259524k used,  1029568k free,    24692k buffers
Swap:  1309688k total,        0k used,  1309688k free,   110376k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1260 root      RT   0 88572  84m  57m R 94.3  6.7 132:46.40 corosync
10475 root      19  -1 18704 1468  732 R  2.3  0.1   2:01.54 clulog
10454 root      19  -1 18704 1512  764 R  2.0  0.1   2:01.93 clulog
10654 root      20   0  5352 1688 1244 S  0.3  0.1   0:06.76 rgmanager
11681 root      20   0  2672 1132  864 S  0.3  0.1   0:03.43 top
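As a rough check (assuming the corosync-objctl tool shipped with corosync 1.x exposes the runtime statistics on this build), one can verify on the guest that corosync really is running under a realtime scheduler and whether its totem counters still advance:

[root@rhelclunode01 tmp]# chrt -p 1260                          # scheduling policy/priority of the corosync PID seen in top
[root@rhelclunode01 tmp]# corosync-objctl | grep runtime.totem  # totem runtime counters; run twice to see if they still move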

b) trying to stop rgmanager on the rhel6.1 kvm guest; it never finishes:

[root@rhelclunode01 tmp]# time service rgmanager stop
Stopping Cluster Service Manager:
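A rough way to see where the stop hangs (just a generic sketch using the PIDs from the top output above; without debuginfo the backtrace will be approximate) is to attach strace/gdb on the guest:

[root@rhelclunode01 tmp]# strace -f -p 10654    # rgmanager PID from top; shows which syscall it is blocked in
[root@rhelclunode01 tmp]# gdb -batch -ex 'thread apply all bt' -p 1260 > /tmp/corosync-bt.txt    # corosync backtrace, useful to attach to 709758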

c) running top on the rhel6.1 kvm host:

top - 13:52:00 up  4:32,  1 user,  load average: 1.00, 1.00, 0.93
Tasks: 143 total,   1 running, 142 sleeping,   0 stopped,   0 zombie
Cpu(s): 26.4%us, 1.5%sy, 0.0%ni, 72.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem:   5088504k total,  3656212k used,  1432292k free,    57832k buffers
Swap:  5242872k total,        0k used,  5242872k free,  1240980k cached

  PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM     TIME+  COMMAND
 2659 qemu      20   0 1526m 1.2g 3880 S 100.1 25.3 182:17.81  qemu-kvm
 2445 qemu      20   0 1350m 592m 3960 S   6.0 11.9  13:55.74  qemu-kvm
 2203 root      20   0  683m  15m 4904 S   3.0  0.3   7:56.55  libvirtd
 2524 root      20   0     0    0    0 S   1.0  0.0   1:01.55  kvm-pit-wq
 2279 qemu      20   0  852m 534m 3900 S   0.7 10.8   1:31.42  qemu-kvm
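To see whether it is the guest's single vCPU thread itself that is pegged on the host side, per-thread CPU usage and the vCPU state can be inspected with standard tools (the "host" prompt below is just a placeholder):

[root@host ~]# top -H -p 2659                  # per-thread view of the busy qemu-kvm process
[root@host ~]# virsh vcpuinfo rhelclunode01    # vCPU state and accumulated CPU time for the guest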

d) ps ax | grep qemu-kvm on the rhel6.1 kvm host:

2659 ? Sl 183:01 /usr/libexec/qemu-kvm -S -M rhel6.1.0 -cpu qemu32 -enable-kvm -m 1280 -smp 1,sockets=1,cores=1,threads=1 -name rhelclunode01 -uuid 5f0c1503-34a0-771b-1cde-bbe257447590 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/rhelclunode01.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -netdev tap,fd=21,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:50:56:17:ad:8f,bus=pci.0,addr=0x3,bootindex=1 -netdev tap,fd=26,id=hostnet1,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:50:56:36:59:a7,bus=pci.0,addr=0x4 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 127.0.0.1:2 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
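Since both guests are started with -smp 1 and the guests here are single-CPU, one possible interim mitigation (my assumption, not a confirmed workaround) would be to give each guest a second vCPU via libvirt on the host, e.g.:

[root@host ~]# virsh shutdown rhelclunode01
[root@host ~]# virsh edit rhelclunode01        # change <vcpu>1</vcpu> to <vcpu>2</vcpu>
[root@host ~]# virsh start rhelclunode01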

Then, what could be the solution if no fix will be released until rhel6.2? Disable all RHCS services and not install RHCS on either virtual or physical environments?
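For reference, once an updated corosync package does appear, whether it claims to address 709758 can be checked from the RPM changelog on the guest (a generic check, nothing specific to this erratum):

[root@rhelclunode01 tmp]# rpm -q corosync
[root@rhelclunode01 tmp]# rpm -q --changelog corosync | grep -i 709758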

 Thanks.

--
CL Martinez
carlopmart {at} gmail {d0t} com

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


