Re: GFS2/OCFS2 scalability

Kirill Kuvaldin wrote:
> What are the practical/theoretical limits for number of nodes for
> shared disk file systems like ocfs2/gfs2?

The theoretical limit is around 254 nodes. The practical limit depends on
the hardware. Meaning, you cannot just keep adding nodes. You have to
ensure the interconnect and the storage can handle them. Also,
the more nodes you have, the more cpu/ram each node has to dedicate
to the clustering overhead. Meaning, at some cluster size, dual-core
nodes may not give you the best bang for the buck.
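To put a rough number on that overhead argument: in a symmetric cluster every node talks to every other node (heartbeat, DLM), so the total link count grows quadratically while each node's peer bookkeeping grows linearly. The full-mesh model below is my own back-of-the-envelope sketch, not something either filesystem documents this way:

```shell
# Full-mesh link count for an n-node cluster: n * (n - 1) / 2.
# Each node also tracks n - 1 peers, so per-node state grows
# linearly while cluster-wide traffic grows quadratically.
for n in 8 16 32 64; do
    links=$(( n * (n - 1) / 2 ))
    echo "$n nodes -> $links node-to-node links"
done
```

Going from 8 to 64 nodes multiplies the mesh by roughly 70x, which is why the interconnect and per-node overhead, not the filesystem code, tend to set the practical ceiling.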

So with a GigE interconnect, 2/4Gb Fibre Channel storage, and dual-core nodes,
32 to 64 nodes _may_ be the upper limit for the cluster size. A lot
depends on the workload... meaning hard numbers are not possible.
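One way to see why the hardware caps the node count: the shared storage link divides among the nodes. A crude worst-case sketch, assuming all nodes do sustained I/O at once and that a 4Gb FC link delivers roughly 400 MB/s of payload after encoding overhead (both assumptions mine; real workloads are far burstier):

```shell
# Per-node share of a shared 4Gb FC link (~400 MB/s usable),
# assuming every node streams at the same time -- worst case.
LINK_MB_S=400
for n in 8 16 32 64; do
    share=$(( LINK_MB_S / n ))
    echo "$n nodes -> ~$share MB/s each"
done
```

At 64 nodes each one gets only ~6 MB/s in this worst case, which is why "just add nodes" stops paying off well before the theoretical limit.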

As far as clusters in use go, I have heard of 32-node ocfs2 clusters, but
they are few. More common is <= 20 nodes; 16 is quite common. mkfs.ocfs2
now defaults to 8 node slots... meaning 8 is very common.

> I did testing in a limited setup - a cluster of 4 xen domU of 128MB
> RAM with a shared block device mapped to a local LVM volume. My
> results could be wrong anyway ;)

You cannot really prototype a 100-1000 node cluster with 4 128MB
xen domUs.

--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
