Re: can't start GFS on Fedora

Yes, I'm trying to get a test HPC cluster going, with GFS on SAN storage shared among several nodes.

I'm currently using Fedora Core 4, which was the first version to ship GFS.  Since you say there is now
a "new" infrastructure, would you recommend that I simply upgrade to Fedora Core 6?

As for ccsd, 'service ccsd start' simply returns [Failed].  The logs show:

Mar  2 11:28:09 IQCD1 ccsd[8651]: Starting ccsd 1.0.0:
Mar  2 11:28:09 IQCD1 ccsd[8651]:  Built: Jun 16 2005 10:45:39
Mar  2 11:28:09 IQCD1 ccsd[8651]:  Copyright (C) Red Hat, Inc.  2004  All rights reserved.

That's it.  I've now discovered that cluster.conf is nowhere to be found on my system, which probably
explains ccsd failing.  ccs-1.0 is installed.  What package installs a default cluster.conf file?
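
For reference, a quick way to check whether the ccs package (or anything else installed) actually ships that file; this assumes the package is simply named "ccs":

# rpm -ql ccs | grep -i cluster.conf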

Thanks.



On 3/2/07, Robert Peterson <rpeterso@xxxxxxxxxx> wrote:
Jose Guevarra wrote:
> I have a volume group that I want to mount w/ GFS
> /dev/mapper/VolGroup00-LogVol02
>
> I was able to create a GFS file system w/ this command...
>
> # gfs_mkfs -p lock_dlm -t CLUST:gfs1 -j 6 /dev/mapper/VolGroup00-LogVol02
>
> Now, when I try to start ccsd, it fails, so none of the other daemons
> start either.  /var/log/messages doesn't say anything about the start
> failure.
>
> How can I troubleshoot this further?  What are the required daemons
> that need to start?
Hi Jose,

I have a couple of suggestions.  First of all, you need to determine
whether you're planning to use GFS in a cluster (i.e. on shared storage
like a SAN) or stand-alone (and let us know which, if you want help).

Your use of "lock_dlm" and a cluster name suggests you want it in a
cluster, but VolGroup00-LogVol02 sounds like a local hard disk rather
than any kind of shared storage.

If you're using it stand-alone, you don't need ccsd, since ccsd is part
of the cluster infrastructure.  If stand-alone, you would also want to
use lock_nolock rather than lock_dlm.
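
For example, a stand-alone file system on that volume could be created
with something like this (one journal, no cluster locking):

# gfs_mkfs -p lock_nolock -j 1 /dev/VolGroup00/LogVol02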

Now about ccsd: If you're using the cluster code shipped with FC6,
that's the "new" infrastructure code.  With the "new" stuff, you don't
need to start ccsd with a separate script like in RHEL4.  Everything should
be handled by doing: "service cman start."  The ccsd daemon is started
by the init script.  I apologize if you already knew this.  It's just that
I couldn't tell how you were starting ccsd.
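
On the new infrastructure, bringing up clustered GFS would look roughly
like this (the mount point here is just an example):

# service cman start
# service clvmd start
# mount -t gfs /dev/VolGroup00/LogVol02 /mnt/gfs1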

You say that ccsd fails, but you didn't say much about how it fails or
what error message it gives you.

I guess the bottom line is that you didn't give us enough information to
help you.

Also, if this storage is shared in the cluster, you need to do
"service clvmd start" as well, and you may want to set locking_type = 3
in your /etc/lvm/lvm.conf before starting clvmd.
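
That setting lives in the global section of lvm.conf, roughly:

global {
    # 3 = built-in clustered locking via clvmd
    locking_type = 3
}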

If you're using it on shared storage in a cluster, you should probably
post your cluster.conf file, which might tell us why ccsd is having issues.
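
For reference, a bare-bones /etc/cluster/cluster.conf looks roughly like
the sketch below.  The node name and manual fencing are placeholders for
your real hardware, but note that the cluster name must match the "CLUST"
you gave to gfs_mkfs -t:

<?xml version="1.0"?>
<cluster name="CLUST" config_version="1">
  <clusternodes>
    <clusternode name="node1" votes="1">
      <fence>
        <method name="single">
          <device name="human" nodename="node1"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="human" agent="fence_manual"/>
  </fencedevices>
</cluster>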

Also, the gfs_mkfs command is typically used on the logical volume, not on
the /dev/mapper device.  So something like:

# gfs_mkfs -p lock_dlm -t CLUST:gfs1 -j 6 /dev/VolGroup00/LogVol02

I hope this helps.

Regards,

Bob Peterson
Red Hat Cluster Suite

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
