GlusterFS on a two-node setup

Dear All,

I have some general questions about whether GlusterFS makes sense for my
setup and, if so, what the recommended usage patterns are.


Context:
========

We have a two-node mini cluster (a Dell PowerEdge C6145). Each node has
four SAS HDDs (600 GB each), which I'll most likely present as a single
virtual disk using the RAID card, and the two nodes are connected by
Infiniband (they could also be connected by 1 Gb Ethernet as a
fallback).

We will be using the cluster for bioinformatics/statistics computing,
including programs that use MPI and OpenMP, as well as providing
web-based access to those same bioinformatics/statistics programs.


The main reason for considering GlusterFS is to provide a single shared
file system for homes, application code, result storage, scratch space,
tmp files, etc.
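
For concreteness, what I am picturing is both nodes (as clients)
mounting the same Gluster volume natively; the volume name gv0 and the
mount point below are just placeholders:

    # on each node
    mount -t glusterfs node1:/gv0 /shared

    # or, equivalently, in /etc/fstab
    node1:/gv0  /shared  glusterfs  defaults,_netdev  0 0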


It would also be nice if the cluster could continue working when one of
the nodes (or one of the disks) is down. (GlusterFS need not provide
this itself, as I can "implement" it by restoring a backup of the shared
disk and mounting it as a regular, local file system.)



Questions:
==========

1. Is using GlusterFS overkill? (I guess the alternative would be to
  serve NFS from one of the nodes to the other; see the sketch after
  this list.)

2. I plan on using a dedicated partition from each node as a brick. Should
  I use replicated or distributed volumes? (A sketch of the commands I
  have in mind follows this list.)

  a) When using replicated volumes, in case one of the nodes fails (so the
  brick becomes unavailable), can the other node continue operating? What
  is the recommended recovery procedure?

  b) Which will give me better read performance?

  c) Which will give me better write performance?


3. Am I right that things will be much cleaner if I use a dedicated
  partition (on each machine) as the brick, instead of a directory on an
  existing file system? (See the brick-setup sketch after this list.)

4. What other alternatives might I consider? I am thinking about
  PVFS/OrangeFS and FhGFS (and am asking similar questions on their
  lists). (Lustre definitely seems to discourage running the client and
  the OSS on the same node.)
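
Regarding question 1, the NFS alternative I have in mind is just the
standard kernel NFS server on one node; all hostnames and paths below
are placeholders:

    # /etc/exports on node1
    /shared  node2(rw,sync,no_subtree_check)

    # on node2
    mount -t nfs node1:/shared /shared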
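
Regarding question 2, by "replicated" I mean something along these lines
(volume name and brick paths are placeholders; I list both transports
because of the Infiniband link, with TCP as the fallback):

    # run once, from node1
    gluster peer probe node2

    # one brick per node; every file is stored on both nodes
    gluster volume create gv0 replica 2 transport tcp,rdma \
        node1:/export/brick1 node2:/export/brick1
    gluster volume start gv0

By "distributed" I mean the same command without "replica 2", so that
each file lives on only one of the two bricks.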
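
Regarding question 3, by "dedicated partition" I mean formatting a
partition just for the brick (XFS seems to be the usual recommendation;
the device name is a placeholder):

    mkfs.xfs -i size=512 /dev/sdb1
    mkdir -p /export/brick1
    mount /dev/sdb1 /export/brick1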



Any other comments or suggestions for this setup are welcome.


Best,


R.

-- 
Ramon Diaz-Uriarte
Department of Biochemistry, Lab B-25
Facultad de Medicina 
Universidad Autónoma de Madrid
Arzobispo Morcillo, 4
28029 Madrid
Spain

Phone: +34-91-497-2412

Email: rdiaz02 at gmail.com
       ramon.diaz at iib.uam.es

http://ligarto.org/rdiaz


