Hi,

(Sorry for the top posting, I blame my email client!)

The service is started, yes. I was going for the cluster.conf way, i.e. setting the status check to 1 minute. I can see it should not be a problem with a script resource, but since I use the Oracle resource agent I'm not sure how it applies there. Here is what I tried:

<service autostart="1" exclusive="0" name="oracle1" recovery="relocate">
    <ip __independent_subtree="1" ref="10.x.x.x">
        <fs ref="ora1-data"/>
        <fs ref="ora1-archlogs"/>
        <oracledb home="/u01/app/oracle" name="oracle1" type="10g" user="oracle"/>
        <action name="status" depth="*" interval="1m"/>
    </ip>
</service>

However, this does not seem to work, but I am pretty sure that is just because the oracle agent is not configured as a "resource"... Am I right?

Thanks,
Finnur

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Lon Hohberger
Sent: 24 June 2008 17:04
To: linux clustering
Subject: Re: Oracle 10G resource agent - status polling

On Tue, 2008-06-24 at 15:40 +0000, Finnur Örn Guðmundsson - TM Software wrote:
> Hi,
> I'm in the middle of configuring an HA cluster for Oracle, and
> everything is working as planned... failover etc. However, there is
> one thing that does bug me a bit, and that is:
>
> If I start the database with the cluster software (clusvcadm -e
> oracle10), let it start, log into the database and run shutdown
> abort (or just kill it... whatever), the cluster software does not
> seem to notice this until after around 5 minutes.

(1) Is this with it fully started? The 'status' check will wait until
the 'start' is complete - this can take several minutes.

(2) [likely the problem] The default check interval in the oracledb.sh
resource agent is 5 minutes. That's probably a bit long, even for a
heavily loaded Oracle instance.

You have two options:

- edit /usr/share/cluster/oracledb.sh and change the 'status' and
  'monitor' action intervals (well, status only; we don't use monitor)

- add a special tag below the resource agent in cluster.conf:

      <action name="status" depth="*" interval="1m"/>

  (This overrides the policies in /usr/share/cluster/* on a
  per-instance basis.)

Note that if you set it too fast such that the previous status check
hasn't completed by the time the new status check is supposed to occur,
the new status check will get thrown away.

-- Lon

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
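
A follow-up sketch of the cluster.conf approach: Lon's "add a special tag below the resource agent" reads as making the <action> tag a child of the <oracledb> element, rather than a sibling of it as in the attempt at the top of this message. A minimal sketch, reusing the names and nesting from that attempt (whether the sibling placement is also accepted by rgmanager is not confirmed in the thread):

<service autostart="1" exclusive="0" name="oracle1" recovery="relocate">
    <ip __independent_subtree="1" ref="10.x.x.x">
        <fs ref="ora1-data"/>
        <fs ref="ora1-archlogs"/>
        <oracledb home="/u01/app/oracle" name="oracle1" type="10g" user="oracle">
            <!-- per-instance override of the agent's default 5-minute status check -->
            <action name="status" depth="*" interval="1m"/>
        </oracledb>
    </ip>
</service>

Whether the same override also works when the oracledb agent is defined in a <resources> block and pulled in with ref= (the "configured as a resource" question above) is not covered in the thread.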
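
For the other option (editing /usr/share/cluster/oracledb.sh directly), the change Lon describes amounts to lowering the interval on the agent's 'status' (and, optionally, 'monitor') action declarations in its metadata. A rough sketch, assuming the 5-minute default mentioned in the thread; any other attributes such as timeouts are left out because they are not given here and may differ in the actual file:

<!-- in the agent's <actions> metadata: before (5-minute default) -->
<action name="status" interval="5m"/>
<action name="monitor" interval="5m"/>

<!-- after (1-minute checks; per Lon, only 'status' is actually used) -->
<action name="status" interval="1m"/>
<action name="monitor" interval="1m"/>

Either way, keep Lon's caveat in mind when picking the interval: it should stay comfortably longer than a worst-case status check, since a new check that fires while the previous one is still running is simply thrown away.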