On Tue, Apr 14, 2009 at 2:39 PM, vu pham <vu@xxxxxxxxxx> wrote:
Glad you found the problem. I'm just curious that your log didn't show any clear clue. Below is my log from a similar situation; it says pretty clearly that something is wrong with my mount parameters.
Spencer Parker wrote:
I found my problem. It was the trailing slash on /mnt/mysql.
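That would explain the generic error on status checks: as far as I can tell, the netfs agent decides whether the filesystem is mounted by matching the configured mountpoint string against /proc/mounts, and the kernel records the path without a trailing slash, so "/mnt/mysql/" never matches "/mnt/mysql". A quick way to see the mismatch (the output line here is illustrative, not captured from a real box):

    # /proc/mounts stores the canonical mountpoint, no trailing slash
    grep /mnt/mysql /proc/mounts
    # netapp:/vol/test_mysql/mysql /mnt/mysql nfs rw,... 0 0
    #                              ^ a literal match against "/mnt/mysql/" fails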
Jan 16 15:33:22 xen2vm1 clurgmgrd[1790]: <notice> Service service:nfs started
Jan 16 15:33:26 xen2vm1 clurgmgrd[1790]: <notice> status on netfs "nfsdata" returned 1 (generic error)
Jan 16 15:33:26 xen2vm1 clurgmgrd[1790]: <notice> Stopping service service:nfs
Jan 16 15:33:26 xen2vm1 clurgmgrd[1790]: <notice> Service service:nfs is recovering
Jan 16 15:33:27 xen2vm1 clurgmgrd[1790]: <notice> Service service:nfs is now running on member 1
Jan 16 15:33:36 xen2vm1 clurgmgrd[1790]: <notice> Recovering failed service service:nfs
Jan 16 15:33:37 xen2vm1 clurgmgrd: [1790]: <err> 'mount -o sync,soft,noac 172.16.254.14:/data /mnt/nfsdata/' failed, error=32
Jan 16 15:33:37 xen2vm1 clurgmgrd[1790]: <notice> start on netfs "nfsdata" returned 2 (invalid argument(s))
Jan 16 15:33:37 xen2vm1 clurgmgrd[1790]: <warning> #68: Failed to start service:nfs; return value: 1
Jan 16 15:33:37 xen2vm1 clurgmgrd[1790]: <notice> Stopping service service:nfs
Jan 16 15:33:37 xen2vm1 clurgmgrd[1790]: <notice> Service service:nfs is recovering
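If you hit something like this, re-running the logged mount command by hand usually surfaces the real error on stderr; error=32 is just mount's generic "mount failure" exit status per mount(8):

    # reproduce exactly what the agent ran, straight from the log above
    mount -o sync,soft,noac 172.16.254.14:/data /mnt/nfsdata
    echo $?   # 32 means the mount failed; the message above it says why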
Btw, I use RHEL 5.2, just plain 5.2 from the DVD without any RHN updates.
Vu
On Tue, Apr 14, 2009 at 12:41 PM, Spencer Parker <sjpark@xxxxxxxxxxxxxxxxxxxx> wrote:
<?xml version="1.0"?>
<cluster alias="cluster" config_version="43" name="cluster">
    <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
    <clusternodes>
        <clusternode name="shadowhawk" nodeid="1" votes="1">
            <fence>
                <method name="1">
                    <device name="shadowhawk-ilo"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="darkhawk" nodeid="2" votes="1">
            <fence>
                <method name="1">
                    <device name="darkhawk-ilo"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_ilo" hostname="darkhawk-ilo"
                     login="cluster" name="darkhawk-ilo" passwd="*******"/>
        <fencedevice agent="fence_ilo" hostname="shadowhawk-ilo"
                     login="cluster" name="shadowhawk-ilo" passwd="*******"/>
    </fencedevices>
    <rm log_level="7">
        <failoverdomains>
            <failoverdomain name="failover" nofailback="0" ordered="1" restricted="0">
                <failoverdomainnode name="shadowhawk" priority="1"/>
                <failoverdomainnode name="darkhawk" priority="2"/>
            </failoverdomain>
        </failoverdomains>
        <resources>
            <ip address="10.10.200.25" monitor_link="1"/>
            <script file="/etc/init.d/mysqld" name="mysqld"/>
            <script file="/etc/init.d/httpd" name="httpd"/>
            <netfs export="/vol/test_mysql/mysql" exportpath="/vol/test_mysql/mysql"
                   force_unmount="1" fstype="nfs" host="netapp"
                   mountpoint="/mnt/mysql/" name="mysql_data" nfstype="nfs"
                   options="defaults,rw,async,nfsvers=3,mountvers=3,proto=tcp"/>
        </resources>
        <service autostart="1" domain="failover" exclusive="0"
                 name="cluster" recovery="relocate">
            <ip ref="10.10.200.25">
                <script ref="mysqld"/>
                <script ref="httpd"/>
            </ip>
        </service>
        <service autostart="1" domain="failover" exclusive="0"
                 name="nfs" recovery="relocate">
            <netfs ref="mysql_data"/>
        </service>
    </rm>
</cluster>
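For reference, the fix Spencer reported at the top of the thread comes down to one character in this file: dropping the trailing slash from the netfs mountpoint. The resource would then read (all other attributes kept verbatim from the config above):

    <netfs export="/vol/test_mysql/mysql" exportpath="/vol/test_mysql/mysql"
           force_unmount="1" fstype="nfs" host="netapp"
           mountpoint="/mnt/mysql" name="mysql_data" nfstype="nfs"
           options="defaults,rw,async,nfsvers=3,mountvers=3,proto=tcp"/>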
On Tue, Apr 14, 2009 at 12:30 PM, vu pham <vu@xxxxxxxxxx> wrote:
Spencer Parker wrote:
The NFS share is located on a NetApp box not running GFS. The NFS share is only there to share the database information for the MySQL resource. The failure comes when it goes to check the status of the NFS mount:

Jan 9 13:50:40 shadowhawk clurgmgrd[4212]: <notice> status on netfs "mysql_data" returned 1 (generic error)

That is the error coming out of my log file. It mounts the NFS share just fine...and leaves it mounted as well. When it checks the status it then errors out.
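One way to chase a status failure like that is to run the resource agent by hand, outside rgmanager. On a stock RHCS install the agents live in /usr/share/cluster and take their parameters as OCF_RESKEY_* environment variables; the sketch below uses values from the cluster.conf quoted above, and the exact variable names are from memory, so treat it as a starting point rather than gospel:

    # invoke the netfs agent's status action directly
    OCF_RESKEY_host=netapp \
    OCF_RESKEY_export=/vol/test_mysql/mysql \
    OCF_RESKEY_mountpoint=/mnt/mysql/ \
    OCF_RESKEY_fstype=nfs \
    /usr/share/cluster/netfs.sh status
    echo $?   # non-zero reproduces the "returned 1 (generic error)" above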
What is your cluster.conf?
On Tue, Apr 14, 2009 at 12:23 PM, vu pham <vu@xxxxxxxxxx> wrote:
Spencer Parker wrote:
I am running a MySQL cluster using cluster services, and I have one issue when it comes to NFS. The MySQL services run fine until I add in an NFS mount. The NFS mount is where all of the MySQL databases live. I can get the NFS share to mount properly on the cluster machines, but the log files keep telling me it errors out. Once it errors out, the service stops. I have tried restarting the service, but that has it remounting the share over the top of the old one. It never unmounts the NFS share upon failure. I can mount it manually and it works fine...I can read and write to it just fine...I have added options and taken them away. All of this ends the same way. Any ideas?
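The "remounting over the top of the old one" symptom is easy to confirm: each failed restart stacks another mount on the same path, so /proc/mounts grows an extra line per attempt. Using the mountpoint from the cluster.conf shown earlier in the thread:

    # more than one matching line means mounts are stacked on that path
    grep -c ' /mnt/mysql ' /proc/mounts
    # unwind the stack one layer at a time
    umount /mnt/mysql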
Spencer,
How do you share the NFS? Do you use GFS? What are the error messages? Is the storage device iSCSI?
Vu
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster