On Thu, 9 Jun 2005, Lon Hohberger wrote:
> Though, I kind of wonder why having one node online (+ the three lock
> servers) and able to access the file system is such a strict
> requirement. In a small cluster of 5-20 nodes with 3 lock servers, the
> overhead from the addition of lock servers is quite high:
> - 5 nodes: 60%(!) more nodes (totaling 8 machines) once lock servers
>   are added
> - 15 nodes: 20% more nodes (totaling 18 machines) once lock servers
>   are added
> - 20 nodes: 15% more nodes (totaling 23 machines) once lock servers
>   are added
> It's a lot of overhead from a hardware perspective, especially once
> you factor in that both lock servers and clients need fencing. Gulm is
> typically used on much larger clusters. If you mount the GFS volumes
> on the lock servers, your "1 node online" requirement will be broken,
> so the overhead can't be avoided. Furthermore, if you do intend to
> mount GFS on the lock servers, you'll find that your availability is
> more predictable with DLM.
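
(Aside, for anyone finding this thread in the archives: on the RHEL4-era
cluster suite, those three dedicated lock servers would be declared in
/etc/cluster/cluster.conf with something roughly like the sketch below.
The cluster name and hostnames are invented, and the per-node fencing
stanzas are omitted:)

  <cluster name="mycluster" config_version="1">
    <gulm>
      <!-- use an odd number of lock servers; a majority must stay up -->
      <lockserver name="lock1"/>
      <lockserver name="lock2"/>
      <lockserver name="lock3"/>
    </gulm>
    <clusternodes>
      <!-- every member, lock servers included, gets a clusternode entry
           with a fence method; omitted here for brevity -->
    </clusternodes>
  </cluster>
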
My case is a little different - we're not talking about physical hardware
here; it's virtual machines.

I've got 4 physical boxes, and will be running ~15-20 virtual machines on
them. I'd like to be able to tolerate any of the virtual machines going
down and still have the remaining VMs able to get to GFS. (Many of the
VMs are just development toys, so they'll be rebooted / halted / started
on a regular basis.)
From what you've said, it sounds like for this scenario lock_gulm makes
more sense -- am I missing something?
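
My back-of-the-envelope math, if it helps: with cman/DLM and (say) 19
one-vote members, quorum is 19/2 + 1 = 10 votes (integer division), so
if 10 of the throwaway dev VMs happen to be down at once, the survivors
go inquorate and lose GFS access with them. With gulm, the client nodes
don't vote at all: as long as 2 of the 3 lock server VMs stay up, even a
single surviving client keeps its mount.
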
> In case there was any confusion, "mounting the file system" is not the
> same thing as "joining the cluster". Nodes can join the cluster and
> _not_ mount any file systems - or do anything else cluster-related, for
> that matter. Any quorate member of the cluster may mount one of the
> file systems on the cluster.
Yeah, got'cha.
Thanks much for the detailed reply!
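
For posterity, the join-vs-mount distinction in commands on a gulm setup
would look roughly like this (the device and mount point are invented):

  # become a cluster member -- no file system involved yet
  service ccsd start
  service lock_gulmd start

  # mounting is a separate, optional step for any quorate member
  mount -t gfs /dev/vg0/dev_gfs /mnt/dev_gfs
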
------------------------------------------------------------------------
| nate carlson | natecars@xxxxxxxxxxxxxxx | http://www.natecarlson.com |
| depriving some poor village of its idiot since 1981 |
------------------------------------------------------------------------