Re: rhcs + gfs performance issues

Let me say first, I appreciate your help tremendously.  Let me answer
some questions, and then I need to go do some homework you have
suggested.


> In your cluster.conf, make sure that the
> 
> <clusternode name="node1c"....
> 
> section is pointing at a private crossover IP of the node. Say you have 
> a 2nd dedicated Gb interface for the clustering; assign it an address, say 
> 10.0.0.1, and in the hosts file, have something like
> 
> 10.0.0.1 node1c
> 10.0.0.2 node2c
> 
> That way each node in the cluster is referred to by its cluster 
> interface name, and thus the cluster communication will go over that 
> dedicated interface.
> 
I'm not sure I understand this correctly, so please bear with me: are you
saying the cluster communication runs over the fenced interface?  Or that
the node name should reference a separate, private NIC, while the virtual
IP exported to the clients stays on the public interface?  I'm confused; I
thought that definition had to match the hostname of the box.  Here is
what is in my conf file for reference:

        <clusternode name="engrfs1.seas.smu.edu" nodeid="1" votes="1">
                <fence>
                        <method name="1">
                                <device modulename="" name="engrfs1drac"/>
                        </method>
                </fence>
        </clusternode>
        <clusternode name="engrfs2.seas.smu.edu" nodeid="2" votes="1">
                <fence>
                        <method name="1">
                                <device modulename="" name="engrfs2drac"/>
                        </method>
                </fence>
        </clusternode>

Whereas engrfs1 and engrfs2 are the actual hostnames of the boxes.
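
Just so I'm sure I follow, do you mean something like the below, where the
clusternode names resolve to a dedicated private/crossover link?  (The
node1c/node2c names and the 10.0.0.x addresses are just placeholders
carried over from your example, with our DRAC fence devices left as-is.)

/etc/hosts on both nodes:

        10.0.0.1   node1c
        10.0.0.2   node2c

cluster.conf:

        <clusternode name="node1c" nodeid="1" votes="1">
                <fence>
                        <method name="1">
                                <device modulename="" name="engrfs1drac"/>
                        </method>
                </fence>
        </clusternode>
        <clusternode name="node2c" nodeid="2" votes="1">
                <fence>
                        <method name="1">
                                <device modulename="" name="engrfs2drac"/>
                        </method>
                </fence>
        </clusternode>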
 



> The fail-over resources (typically client-side IPs) remain as they are 
> on the client-side subnet.

> It sounds like you are seeing write contention. Make sure you mount 
> everything with noatime,nodiratime,noquota, both from the GFS and from 
> the NFS clients' side. Otherwise every read will also require a write, 
> and that'll kill any hope of getting decent performance out of the system.
Already mounted noatime; will add nodiratime.  Can't do noquota, though: we
enforce quotas for every user here (around 5,000), and did so on the old
file server as well.
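
For reference, here is roughly what I expect the mounts to look like once
nodiratime is added (the device, export path and mount point below are
illustrative placeholders for ours, and quota stays enabled on the GFS side):

GFS nodes, /etc/fstab:

        /dev/vg_gfs/lv_home   /export/home   gfs   noatime,nodiratime   0 0

NFS clients, /etc/fstab (mounting via the virtual IP):

        <vip>:/export/home    /home          nfs   noatime,nodiratime,hard,intr   0 0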
> 

> 
> I'm guessing the old server was standalone, rather than clustered?
No, it was clustered, as I assume you realized below; just making sure
that's clear.
> 

> 
> I see, so you had two servers in a load-sharing write-write 
> configuration before, too?
They certainly were capable of that.  However, both there and here we set
things up in more of a failover mode.  We export a virtual IP attached to
the NFS export, and all clients mount the VIP, so whichever machine holds
the VIP at a given time is the "master" and gets all the traffic.  The only
exception is the nightly backups, which we run against the "secondary"
machine directly rather than through the VIP.  The secondary is only there
in the event of a failure of node1; when node1 comes back online, the
service is set up to fail back to it.
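
In cluster.conf terms, the failover and fail-back behaviour comes from an
ordered failover domain plus a service carrying the floating IP, along
these lines (the domain/service names, priorities and the address below
are illustrative placeholders rather than our literal config):

        <rm>
                <failoverdomains>
                        <failoverdomain name="prefer_node1" ordered="1" restricted="1">
                                <failoverdomainnode name="engrfs1.seas.smu.edu" priority="1"/>
                                <failoverdomainnode name="engrfs2.seas.smu.edu" priority="2"/>
                        </failoverdomain>
                </failoverdomains>
                <service autostart="1" domain="prefer_node1" name="nfs_svc">
                        <ip address="x.x.x.x" monitor_link="1"/>
                </service>
        </rm>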
> 


> If you set the nodes up in a fail-over configuration, and serve all the 
> traffic from the primary node, you may see the performance improve due 
> to locks not being bounced around all the time; they'll get set on the 
> master node and stay there until the master node fails and its floating 
> IP gets migrated to the other node.
As explained above, that is exactly how it is set up, and the old file
server was set up the same way.  We're basically scratching our heads in
disbelief here.  No ifs, ands or buts about it: hardware-wise, we have
500% more box than we used to have, and the configuration architecture is
virtually identical.  That leaves the software, which leaves us with only
two conclusions we can come up with:

1)  Tru64 and TruCluster with AdvFS from 7 years ago is simply that much
more robust and mature than RHEL4 and CS/GFS, and therefore tremendously
outperforms it... or
2)  We have this badly configured.
> 
> Gordan
> 

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
