Re: Advice on Storage Hardware

I don't see why that should be a problem; that's a common solution we recommend to customers even without GFS, doing just ordinary NFS with heartbeat. An active/passive setup is uncritical. You can even have two NFS mounts of separate partitions in an active/active configuration that fails over the missing one if one of the two machines goes down: you mount one partition from one IP address and the other from the second, and on failure the IP address migrates to the surviving node.
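As a rough sketch, the active/active pair could be expressed in a Heartbeat v1-style haresources file (the hostnames, addresses, and devices below are placeholders, not a tested configuration):

```
# /etc/ha.d/haresources -- each line names the node that owns the
# resource group by default; on failure the surviving node takes
# over the peer's IP address, filesystem, and NFS service.
# nodeA normally serves /export1 on 192.168.1.10,
# nodeB normally serves /export2 on 192.168.1.11.
nodeA IPaddr::192.168.1.10/24 Filesystem::/dev/sdb1::/export1::ext3 nfs
nodeB IPaddr::192.168.1.11/24 Filesystem::/dev/sdc1::/export2::ext3 nfs
```

Clients mount each export via its service IP rather than a node hostname, so a failover is transparent apart from the usual NFS retry delay.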

This can even be done with SCSI-attached storage (4 TB per enclosure, up to two enclosures connected to a 1U server), though fibre-attached storage (direct-attached as well as a complete SAN) is considered more reliable and performant.

Michael Will
Sr. Sales Engineer
www.penguincomputing.com

David Brieck Jr. wrote:
We're planning on building a cluster to handle web operations but
we've run into a dilemma on storage. From all the reading I've done it
appears that the best case would be a fibre channel SAN with GFS;
however, we'd like to keep the cost down but still retain very
reliable storage.

In our cost-cutting efforts I've come up with an idea that I think
would float, but that our hardware vendor (who's trying to sell us the
SAN) doesn't think is possible.

I'd like to replace a SAN device with two servers clustered together
and connected to an external SCSI array. Only one of the servers would
be accessing the array; the other would just be a standby in case of a
failover. The accessing server would use GNBD and GFS to export the
block device to all clients, which of course would run the GNBD and
GFS client side.
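A sketch of what that export might look like with the GNBD tools from Red Hat's cluster suite (the server name, device, and export name here are made up for illustration):

```shell
# On the active server attached to the SCSI array:
gnbd_serv                               # start the GNBD server daemon
gnbd_export -e webdata -d /dev/sdb1     # export the array's block device
                                        # under the name "webdata"

# On each client node (storage1 = the exporting server):
gnbd_import -i storage1                 # import all exports from storage1
mount -t gfs /dev/gnbd/webdata /mnt/webdata
```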

The documentation seems to imply this is a viable alternative to a SAN,
but I'd like to know if anyone is using this type of setup, or if there
is a reason why it wouldn't work properly. I would appreciate any help,
since I'm not an expert on this yet (but should be soon, thanks to
Red Hat's courses).

All servers in the cluster will have dual Ethernet, etc., so hopefully
bandwidth shouldn't be a problem.

Thanks in advance.
David Brieck

--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster


--
Michael Will
Penguin Computing Corp.
Sales Engineer
415-954-2822
415-954-2899 fx
mwill@xxxxxxxxxxxxxxxxxxxx
