RE: newbie: gfs merge

wolfgang pauli wrote:
> > > I installed GFS and all the cluster stuff on our systems, and I
> > > didn't have the impression that I missed any of the steps in the
> > > manual. So I have two nodes which both have a GFS partition
> > > mounted. I can also mount these if I export them with gnbd.
> > > But I don't see the big difference from NFS yet (apart from maybe
> > > performance). I thought that if I named the GFS partitions the
> > > same (clustername:gfs1) they would be magically merged or
> > > something like that. I thought this was what the docs meant by
> > > the notion that GFS does not have a single point of failure, or
> > > that we could have redundant file servers. What did I get wrong
> > > about all that?
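
A side note on the naming: "clustername:gfs1" is just the lock table string you hand to gfs_mkfs. It tells the lock manager which cluster and lock space the filesystem belongs to; nothing gets merged. A rough sketch (hypothetical device and names; the cluster services must already be running):

    # one-time, on a node that can see the storage:
    #   -p picks the lock protocol (lock_dlm here; lock_gulm in some setups)
    #   -t is cluster_name:fs_name, -j is one journal per mounting node
    gfs_mkfs -p lock_dlm -t mycluster:gfs1 -j 2 /dev/sdb1
    mount -t gfs /dev/sdb1 /mnt/gfs1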
> > 
> > It sounds like you are a bit confused about what GFS does.  I
> > replied to someone within the last week or so on almost the same
> > issue.  Check the archives. 
> > 
> > GFS is a filesystem that allows multiple nodes to access and
> > update it at the same time.  The cluster services manage the nodes
> > and try to prevent a misbehaving node from corrupting the
> > filesystem.
> > 
> > If you have hard drives in all of your nodes, GFS and the cluster
> > will not help you make them into one big shared storage area -- at
> > least not yet; I believe there is a beta (alpha?) project out
> > there somewhere. If you have a big storage area, GFS and the
> > cluster _will_ allow you to connect all of your nodes to it.
> > 
> > The redundancy comes from the fact that you have multiple machines
> > running from the same storage area.  If one of the machines goes
> > down, the others can continue working.  In a load-balanced
> > configuration, the loss of one of the nodes will be transparent to
> > the users.  In theory, of course...  If the storage dies, that's
> > another issue; hopefully your storage is RAID and can handle a
> > disk failure.
> 
> Hm... Thanks for your answer! I am definitely a bit confused, even
> after reading your post from last week. I understand that I cannot
> merge the filesystems. Our setup is very basic. We have two Linux
> machines that could act as file servers, and we thought that we
> could have one (A) working as an active backup of the other (B).
> Is that what the documentation calls a failover domain, with (B)
> being the failover "domain" for (A)? Until now, we were running
> rsync at night, so that if the first of the two servers failed,
> clients could mount the NFS share from the other server. There is
> nothing fancy here, like a SAN, I guess; just machines connected
> via Ethernet switches. So basically the question is whether it is
> possible to keep the filesystems on the two servers in total sync,
> so that it would not matter whether clients mount the remote share
> from (A) or (B), and whether the clients would automatically be
> able to mount the GFS from (B) if (A) fails.

No, GFS doesn't work quite like that.  What you have is something more
like this:  Two machines, (A) and (B), are file servers.  A third
machine, (C), is either a Linux box exporting its storage via
GNBD, or a dedicated storage box running iSCSI, AoE, or something
similar that will allow multiple connections.  (A) and (B) are both
connected to the GFS filesystem exported by (C).  If either (A) or (B)
goes down, the other one can continue serving the data from (C).  They
don't need to be synchronized because they are using the same physical
storage.  And, if the application permits, you can even run them both
simultaneously.
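
To make that concrete, a minimal sketch of the GNBD variant (hypothetical host and device names; assumes the cluster infrastructure, fencing included, is already up on all three boxes, and that the GNBD server daemon is running on (C)):

    # on (C): export the raw block device over the network
    gnbd_export -v -e shared1 -d /dev/sdb1

    # on (A) and (B): import it, then mount the same GFS
    gnbd_import -v -i nodeC
    mount -t gfs /dev/gnbd/shared1 /mnt/shared

Both (A) and (B) can then mount the filesystem read-write at the same
time; that is the part plain NFS doesn't give you.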

You are looking for something different.  There is a project out there
for that, but it is not production-ready at this point.  Maybe someone
else remembers the name.
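
And on your failover-domain question: in cluster.conf, a failover
domain only controls which nodes a cluster service (an NFS export,
say) is allowed to run on, and in what order -- it doesn't synchronize
any data by itself.  Roughly like this (hypothetical node and domain
names):

    <rm>
      <failoverdomains>
        <failoverdomain name="fsdomain" ordered="1" restricted="1">
          <failoverdomainnode name="nodeA" priority="1"/>
          <failoverdomainnode name="nodeB" priority="2"/>
        </failoverdomain>
      </failoverdomains>
      <!-- the NFS service would then reference domain="fsdomain" -->
    </rm>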

-- 
Bowie

