The problem here is that the RIBCL interface insists on a new connection
each time you contact it, so there is a lot of build-up/tear-down time.

We tried to improve this agent's speed by adding a force="1" attribute to
its fencedevice section in cluster.conf. That attribute just kills the node
as fast as possible and skips the initial status check. With it, fencing
completed in under 7 seconds. I'm pretty sure this is documented on the
schema page, and it should be in the man page for fence_ilo as well.

Hope this helps...

-J

On Thu, 2008-08-07 at 09:57 +0200, Marc Grimme wrote:
> You might want to take a look at this iLO fence agent:
> http://download.atix.de/yum/comoonics/redhat-el5/productive/noarch/RPMS/comoonics-bootimage-fenceclient-ilo-0.1-18.noarch.rpm
> I think when I wrote it I detected the same problem and fixed it there.
>
> Marc.
>
> On Thursday 07 August 2008 09:33:46 Jakub Suchy wrote:
> > > Does not sound like you are having a fencing issue, but I can share
> > > our configuration / implementation and experiences with it.
> > >
> > > We have been using fencing configured for HP iLO and iLO2 for the
> > > better part of 2 years, with almost a full year in production now.
> > > It is slow (42+ seconds per fencing attempt) and always problematic.
> > > We are piloting
> >
> > Hi,
> > I am currently implementing a cluster using HP iLO and I am
> > experiencing this slowness too. As far as I have dug into the
> > fence_ilo Perl script, the longest part seems to be opening an SSL
> > socket to the card. Also, the script's reboot path works like this
> > (pseudocode):
> >
> > if ($action == reboot) {
> >     check_status();
> >     if (status == on) {
> >         power_off();
> >         check_status();  // if error...
> >     }
> > }
> >
> > This means 3 operations = 3 sockets = a lot of time. If the script
> > could be rewritten to reuse the existing socket, it would be a lot
> > faster. I just don't know how to determine whether the socket is
> > still alive (so we know when we need to reconnect). Anyone?
> >
> > (Also, fence_ilo depends on perl-Crypt-SSLeay, but this is not marked
> > as a dependency in the relevant channel, so you have to install it
> > manually; I should post a bug report to Bugzilla.)
> >
> > Jakub

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
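
For reference, a minimal cluster.conf fencedevice sketch with the force
attribute described above (the name, hostname, login, and passwd values
here are placeholders; check the schema page for the authoritative
attribute list):

    <fencedevices>
        <fencedevice agent="fence_ilo" name="ilo-node1"
                     hostname="node1-ilo.example.com"
                     login="Administrator" passwd="secret"
                     force="1"/>
    </fencedevices>

With force="1" the agent skips the initial status query, which is what cuts
the fencing time from 42+ seconds down to under 7.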
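On Jakub's socket-reuse question: below is a rough Perl sketch of one way
to cache the SSL connection and reconnect when a write fails. This is
illustrative only, not fence_ilo's actual code; the function names and the
RIBCL handling are invented here, and connected() only reflects local
socket state, so a failed write/read still has to be treated as "dead,
reconnect". Note too that if the iLO firmware really closes the connection
after every RIBCL exchange, as described at the top of this thread, reuse
won't help and skipping the status check via force="1" remains the
practical fix.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use IO::Socket::SSL;

    $SIG{PIPE} = 'IGNORE';   # a write to a dead peer should fail, not kill us

    my $sock;                # cached connection to the iLO

    # Return a usable SSL socket, reconnecting if the cached one is gone.
    sub ilo_sock {
        my ($host) = @_;
        return $sock if $sock && $sock->connected;
        $sock = IO::Socket::SSL->new(
            PeerAddr        => $host,
            PeerPort        => 443,
            SSL_verify_mode => SSL_VERIFY_NONE,  # iLO certs are self-signed
            Timeout         => 10,
        ) or die "connect to $host failed: $SSL_ERROR";
        return $sock;
    }

    # Send one RIBCL request; on a failed write, reconnect once and retry.
    sub send_ribcl {
        my ($host, $xml) = @_;
        my $s = ilo_sock($host);
        unless (print $s $xml) {
            undef $sock;                 # peer went away: rebuild the socket
            $s = ilo_sock($host);
            print $s $xml or die "write to $host failed: $!";
        }
        my $reply = '';
        while (my $line = <$s>) {        # read up to the closing RIBCL tag
            $reply .= $line;
            last if $line =~ m{</RIBCL>};
        }
        return $reply;
    }

With this shape, a reboot (status/off/status) issues three send_ribcl()
calls over one cached connection instead of three full SSL handshakes,
which is where the pseudocode above spends most of its time.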