Hi, and pardon me for the late reply.

On 09/16/2011 02:54 PM, BONNETOT Jean-Daniel (EXT THALES) wrote:
Hello,

Usually I use manual installation, but this time I need to provision through Luci. The problem is present with RHEL 5.7 and RHEL 6.0 (luci and ricci); with RHEL 5.6 it works correctly.

I used "Create" new cluster, added my nodes (the options are not important, the problem is always there) and submitted…

"Please wait..."
Creating node "node1" for cluster "clutest": installing packages
Creating node "node2" for cluster "clutest": installing packages

I waited ;) but nothing happened. The process list on the nodes shows:

 4166 ?  Ss   0:00 /usr/sbin/oddjobd -p /var/run/oddjobd.pid -t 300
22343 ?  S    0:00  \_ ricci-modrpm
22355 ?  S    0:01  \_ /usr/bin/python /usr/bin/yum -y list all
 4221 ?  S<s  0:09 ricci -u 236
22342 ?  S<s  0:00 /usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1952735127

Nothing happens; "yum -y list all" stays blocked, yet the same command works fine when run manually. I found some people with the same problem on the CentOS lists, but no answers :(

Do you know what could be troubling ricci? Has anyone already hit the same problem?
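For anyone hitting a node stuck in this state, a few standard commands can help pinpoint where the spawned yum is blocked (a diagnostic sketch; the PID in the strace example is just the one from the listing above, substitute your own):

```shell
# Show the full process tree with states; the STAT column distinguishes
# S (interruptible sleep) from D (uninterruptible I/O wait):
ps axf -o pid,stat,cmd

# List the queued ricci job file(s), if the queue directory exists
# (the file name, e.g. 1952735127, varies per job):
if [ -d /var/lib/ricci/queue ]; then ls -l /var/lib/ricci/queue/; fi

# Attach to the hung yum to see which syscall it is blocked in
# (needs root; take the PID from the ps output above):
# strace -p 22355
```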
This is a known issue (I contributed to the fix) that should be fixed in the upcoming ricci/conga RHEL 5.7 (and also 5.6, see below) package update, and the fix should be present in RHEL 5.8 as well.

I haven't tried the RHEL 5.6 version with respect to this issue, but with 5.7 I was able to reproduce it reliably. The same part of ricci identified as problematic is present in the RHEL 5.6 package(s) as well, so it is interesting that you haven't run into this with 5.6 in the same scenario as with 5.7. Can you please tell us which package version you have been using there, and on which platform?

Regarding RHEL 6.x, I could reproduce the problem only extremely rarely (i.e., once in tens of tries of two-node cluster creation, and even then only a single node exposed the buggy behavior; the other was fine). Therefore, I would be glad if you could provide details of your reproducer: platform, ricci package version, the rate of successfully reproducing the issue, whether it occurs in the "create cluster" scenario only, and perhaps other circumstances such as whether SELinux is used in enforcing mode.

Thanks for your feedback (either on the list or via PM); it will be appreciated,

Jan

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
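The reproducer details asked for above can be collected with standard commands on each node (a sketch; each step is guarded with "|| true" only so the snippet degrades gracefully on boxes where the file or tool is absent):

```shell
# Platform / release string:
cat /etc/redhat-release 2>/dev/null || true

# Installed ricci and oddjob package versions:
rpm -q ricci oddjob 2>/dev/null || true

# SELinux mode (Enforcing / Permissive / Disabled):
getenforce 2>/dev/null || true
```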