RE: Re: RedHat SSI cluster


Aneesh,

The latest GFS infrastructure requires total ordering of messages to
work properly and is highly integrated into openais at the moment.

It is possible to modify the totem protocol in openais to use TIPC or
some other transport besides UDP but multicast (or broadcast) is a
requirement of the underlying protocol.

I do not know of other protocols that are available with a suitable
license that provide total ordering of messages (often called agreed
ordering).

An example of what agreed ordering requires from the transport:

ex: three nodes N1, N2, N3, with messages A, B, C in flight
N1: C A B delivered
N2: C A B delivered
N3: C A B delivered

whereas without total order something like this could happen:
N1: C A B delivered
N2: A B C delivered
N3: C B A delivered

This second scenario is disallowed by agreed ordering and won't work
with the GFS infrastructure.  The protocol in openais (Totem Single Ring
Protocol) provides agreed and virtual synchrony ordering.
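For intuition, the sketch below shows one simple (and deliberately non-fault-tolerant) way to obtain agreed ordering: a fixed sequencer that stamps every message with a global sequence number, so every node delivers the same messages in the same order even if the network reorders them. This is illustrative only; it is not how the Totem Single Ring Protocol works (Totem circulates a token around a logical ring rather than relying on a central sequencer), and all names in it are made up for the example.

```python
# Illustrative sketch of agreed (total) ordering via a fixed sequencer.
# NOT the Totem protocol -- Totem uses a rotating token, not a central
# sequencer -- but the delivery guarantee is the same: every node
# delivers the same messages in the same order.

import itertools

class Sequencer:
    """Assigns a single global sequence number to every message."""
    def __init__(self):
        self._counter = itertools.count()

    def order(self, msg):
        return (next(self._counter), msg)

class Node:
    """Delivers messages strictly in global sequence order."""
    def __init__(self, name):
        self.name = name
        self.delivered = []
        self._pending = {}
        self._next_seq = 0

    def receive(self, seq, msg):
        # Messages may arrive out of order; hold them until the gap fills.
        self._pending[seq] = msg
        while self._next_seq in self._pending:
            self.delivered.append(self._pending.pop(self._next_seq))
            self._next_seq += 1

seq = Sequencer()
nodes = [Node("N1"), Node("N2"), Node("N3")]

# Three messages C, A, B are transmitted; the sequencer fixes one
# global order for all of them.
ordered = [seq.order(m) for m in ("C", "A", "B")]

# Each node sees a different arrival order, yet all three end up
# delivering C, A, B -- the agreed order.
arrivals = [ordered, ordered[::-1], [ordered[1], ordered[0], ordered[2]]]
for node, batch in zip(nodes, arrivals):
    for s, m in batch:
        node.receive(s, m)

assert all(n.delivered == ["C", "A", "B"] for n in nodes)
```

The second scenario from the example above (nodes delivering in different orders) is exactly what the sequence numbers rule out.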

Regards
-steve


On Fri, 2007-02-16 at 11:25 -0800, Lin Shen (lshen) wrote:
> Hi Aneesh,
> 
> We're planning to make GFS/GNBD work on top of TIPC in the hope that it
> will give better performance than TCP.  Since TIPC provides socket-like
> APIs, our initial thinking was to just convert the socket APIs in the
> GFS/GNBD code to TIPC socket-like APIs. Based on what you described,
> making GFS/GNBD work on top of ICS may be a better alternative. 
> 
> Could you give me some pointers on how to make GFS/GNBD work on ICS? 
> 
> Lin   
> 
> > -----Original Message-----
> > From: linux-cluster-bounces@xxxxxxxxxx 
> > [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Aneesh 
> > Kumar K.V
> > Sent: Monday, January 01, 2007 7:57 AM
> > To: linux-cluster@xxxxxxxxxx
> > Subject:  Re: RedHat SSI cluster
> > 
> > Bob Marcan wrote:
> > > Hi.
> > > Are there any plans to enhance RHCS to become a full SSI 
> > (Single System 
> > > Image) cluster?
> > > Will http://www.open-sharedroot.org/ become officially included and 
> > > supported?
> > > Isn't it time to unite forces with http://www.openssi.org ?
> > > 
> > 
> > 
> > If you look at the openssi.org code, you can consider it to contain 
> > multiple components:
> > 
> > a) ICS
> > b) VPROC
> > c) CFS
> > d) Clusterwide SYSVIPC
> > e) Clusterwide PID
> > f) Clusterwide remote file operations
> > 
> > 
> > I have just finished cleaning up ICS for the 
> > 2.6.20-rc1 kernel. It provides a transport-independent 
> > cluster framework for writing kernel cluster services. 
> > You can find the code at 
> > http://git.openssi.org/~kvaneesh/gitweb.cgi?p=ci-to-linus.git;
> > a=summary
> > 
> > 
> > So what could be done to help GFS and OCFS2 is to 
> > make sure they can work on top of ICS. That also brings the 
> > advantage that GFS and OCFS2 can work over 
> > TCP/Infiniband/SCTP/TIPC, whatever the transport layer 
> > protocol is. Once that is done, the next step would be to 
> > port Clusterwide SYSVIPC from OpenSSI and merge it with the 
> > latest kernel. Clusterwide PID and clusterwide remote file 
> > operations are easy to get working. The most difficult part 
> > is VPROC, which brings in the clusterwide proc model. Bruce 
> > Walker has written a paper on a generic framework at 
> > http://www.openssi.org/cgi-bin/view?page=proc-hooks.html
> > 
> > 
> > -aneesh
> > 
> > 
> > 
> > --
> > Linux-cluster mailing list
> > Linux-cluster@xxxxxxxxxx
> > https://www.redhat.com/mailman/listinfo/linux-cluster
> > 
> 

