fs.sh status weird timings

Hello,

I'm doing some tests with SAN/LVM/FS and noticed that the status checks on
the FS resources aren't very accurate; the interval varies a lot.
I placed a logger command in fs.sh just before the status logic, and this
is what I get in /var/log/messages. (In the meantime I noticed there's a
new rgmanager release in RHEL 5.5 and will try it out.)

Apr 16 12:23:17 dc2-x6250-a logger: FS Check neto
Apr 16 12:27:47 dc2-x6250-a logger: FS Check neto
Apr 16 12:34:07 dc2-x6250-a logger: FS Check neto
Apr 16 12:44:07 dc2-x6250-a logger: FS Check neto
Apr 16 13:03:07 dc2-x6250-a logger: FS Check neto
Apr 16 13:06:17 dc2-x6250-a logger: FS Check neto
Apr 16 13:20:17 dc2-x6250-a logger: FS Check neto
Apr 16 13:20:47 dc2-x6250-a logger: FS Check neto
Apr 16 13:21:07 dc2-x6250-a logger: FS Check neto
Apr 16 13:21:17 dc2-x6250-a logger: FS Check neto
Apr 16 13:21:37 dc2-x6250-a logger: FS Check neto
Apr 16 13:22:27 dc2-x6250-a logger: FS Check neto
Apr 16 13:22:37 dc2-x6250-a logger: FS Check neto
Apr 16 13:23:07 dc2-x6250-a logger: FS Check neto
Apr 16 13:26:46 dc2-x6250-a logger: FS Check neto
Apr 16 13:28:27 dc2-x6250-a logger: FS Check neto
Apr 16 13:29:00 dc2-x6250-a logger: FS Check neto
Apr 16 13:29:17 dc2-x6250-a logger: FS Check neto
Apr 16 13:29:39 dc2-x6250-a logger: FS Check neto
Apr 16 13:30:16 dc2-x6250-a logger: FS Check neto
Apr 16 13:30:27 dc2-x6250-a logger: FS Check neto
Apr 16 13:30:36 dc2-x6250-a logger: FS Check neto
Apr 16 13:30:37 dc2-x6250-a logger: FS Check neto
Apr 16 13:30:57 dc2-x6250-a logger: FS Check neto
Apr 16 13:31:59 dc2-x6250-a logger: FS Check neto
Apr 16 13:33:00 dc2-x6250-a logger: FS Check neto
Apr 16 13:33:06 dc2-x6250-a logger: FS Check neto
Apr 16 13:33:16 dc2-x6250-a logger: FS Check neto
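The gaps between successive checks can be quantified straight from the log.
A quick sketch (assumes all timestamps fall within the same day, as in the
excerpt above):

```shell
# Print the interval, in seconds, between successive "FS Check" lines.
# Usage: grep "FS Check" /var/log/messages | awk -f this-snippet
awk '/FS Check/ {
    split($3, t, ":")                      # field 3 is the HH:MM:SS timestamp
    now = t[1] * 3600 + t[2] * 60 + t[3]
    if (n++) printf "%ds\n", now - prev    # gap since the previous check
    prev = now
}'
```

Run over the excerpt above, the gaps range from a few seconds up to roughly
19 minutes (12:44:07 to 13:03:07), where something on the order of 30 or 60
seconds would be expected.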



#################################################
fs.sh (excerpt, with the added logger line):

status|monitor)
        logger "FS Check neto"

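Note that all four fs resources in the service below log the identical
message, so the intervals above mix four separate check schedules. A hedged
variant of the instrumentation that tags each line with the resource name
and check depth (assuming rgmanager populates OCF_RESKEY_name and
OCF_CHECK_LEVEL when it invokes the agent, as it does for its other
OCF-style variables):

```shell
# Tag the check log line with the resource name and check depth so the
# per-resource interval can be read directly from /var/log/messages.
# Assumption: rgmanager exports OCF_RESKEY_name and OCF_CHECK_LEVEL here;
# both fall back to a placeholder if unset.
msg="FS Check ${OCF_RESKEY_name:-unknown} depth=${OCF_CHECK_LEVEL:-0}"
logger -t fs.sh "$msg"
```

With distinct tags, `grep "FS Check ora_jura_arch" /var/log/messages` would
show whether a single resource keeps to its configured interval.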

#################################################
These are the resource/service definitions being used:

<lvm lv_name="" name="jura_resource" vg_name="vg_ora_jura" self_fence="1"/>
<fs device="/dev/vg_ora_jura/lv_ora_jura_arch" force_fsck="0"
force_unmount="1" fsid="2073" fstype="ext3"
mountpoint="/app/oracle/jura/archive" name="ora_jura_arch"
self_fence="1"/>
<fs device="/dev/vg_ora_jura/lv_ora_jura_redo" force_fsck="0"
force_unmount="1" fsid="2074" fstype="ext3"
mountpoint="/app/oracle/jura/redo" name="ora_jura_redo" self_fence="1"/>
<fs device="/dev/vg_ora_jura/lv_ora_jura_data"
force_fsck="0" force_unmount="1" fsid="2075" fstype="ext3"
mountpoint="/app/oracle/jura/data" name="ora_jura_data" self_fence="1"/>
<fs device="/dev/vg_ora_jura/lv_ora_jura_export" force_fsck="0"
force_unmount="1" fsid="2076" fstype="ext3"
mountpoint="/app/oracle/jura/export" name="ora_jura_export"
self_fence="1"/>

<service autostart="1" domain="jura_failover" exclusive="0" name="jura"
recovery="relocate">
    <ip ref="172.26.18.37">
        <lvm ref="jura_resource">
            <fs ref="ora_jura_arch"/>
            <fs ref="ora_jura_redo"/>
            <fs ref="ora_jura_data"/>
            <fs ref="ora_jura_export"/>
            <!-- <oracledb-eib ref="JURA1"/> -->
        </lvm>
    </ip>
</service>

###########################################################################
Timings from the fs.sh script:

Agent: fs.sh
Flags: init_on_add destroy_on_delete
Attributes:
  name [ primary ]
  mountpoint [ unique required ]
  device [ unique required ]
  fstype
  force_unmount
  quick_status
  self_fence
  nfslock [ inherit ] default="nfslock"
  fsid
  force_fsck
  options
Actions:
  start
    Timeout (hint): 900 seconds
  stop
    Timeout (hint): 30 seconds
  status
    Timeout (hint): 10 seconds
    Check Interval: 60 seconds
  monitor
    Timeout (hint): 10 seconds
    Check Interval: 60 seconds
  status
    Timeout (hint): 30 seconds
    OCF Check Depth (status/monitor): 10 seconds
    Check Interval: 30 seconds
  monitor
    Timeout (hint): 30 seconds
    OCF Check Depth (status/monitor): 10 seconds
    Check Interval: 30 seconds
  status
    Timeout (hint): 30 seconds
    OCF Check Depth (status/monitor): 20 seconds
    Check Interval: 60 seconds
  monitor
    Timeout (hint): 30 seconds
    OCF Check Depth (status/monitor): 20 seconds
    Check Interval: 60 seconds
  meta-data
    Timeout (hint): 5 seconds
  verify-all
    Timeout (hint): 5 seconds
Explicitly defined child resource types:
  fs [ startlevel = 1 stoplevel = 3 ]
  clusterfs [ startlevel = 1 stoplevel = 3 ]
  nfsexport [ startlevel = 3 stoplevel = 1 ]

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
