Hi Folks,
I'm wondering how Ceph would work in a small cluster that supports a mix
of engineering work and modest production services (email, mailing
lists, and web serving for several small communities).
Specifically, we have a rack with 4 medium-horsepower servers, each with
4 disk drives, running Xen (Debian dom0 and domUs), all linked together
with 4 gigabit Ethernet links.
Currently, 2 of the servers are running a high-availability
configuration, using DRBD to mirror specific volumes and Pacemaker for
failover.
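For context, the current two-node DRBD setup looks roughly like the
resource below (node names, volume names, and addresses are
illustrative, not our actual config):

  resource vm_disk {
    protocol C;                      # synchronous replication
    on node1 {
      device    /dev/drbd0;
      disk      /dev/vg0/vm_disk;    # LVM-backed volume
      address   10.0.0.1:7789;
      meta-disk internal;
    }
    on node2 {
      device    /dev/drbd0;
      disk      /dev/vg0/vm_disk;
      address   10.0.0.2:7789;
      meta-disk internal;
    }
  }

Pacemaker then promotes one side to primary and fails over the VM that
sits on top.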
For a while, I've been looking for a way to replace DRBD with something
that mirrors across more than 2 servers - so that we could migrate VMs
arbitrarily - and that works without splitting compute and storage into
separate nodes (for the short term, at least, we're stuck with our rack
space and server count).
The thing that comes closest to fitting the bill is Sheepdog (at least
architecturally), but it only provides a KVM interface. GlusterFS,
XtreemFS, and Ceph keep coming up as candidates, with Ceph's RBD
interface looking like the easiest to integrate.
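If I understand the RBD path correctly, Xen integration would go through
the kernel rbd driver: map an image to a block device in the dom0, then
hand that device to the domU. A rough sketch (pool and image names are
hypothetical):

  # load the kernel rbd module and map an image to a block device
  modprobe rbd
  rbd create web01-disk --size 20480 --pool vms   # 20 GB image in pool "vms"
  rbd map vms/web01-disk                          # shows up as e.g. /dev/rbd0

  # then reference the device in the domU config:
  #   disk = [ 'phy:/dev/rbd0,xvda,w' ]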
Which leads me to two questions:
- On a theoretical level, does using Ceph as a storage pool for this
kind of small cluster make any sense? Notably, I'd see running an OSD,
an MDS, a MON, and client domUs on each of the 4 nodes, using LVM to
pool each node's storage, with XFS on top (which seems to be the
recommended production filesystem). There's a rough configuration
sketch after these questions.
- On a practical level, has anybody tried building this kind of small
cluster, and if so, what kind of results have you had?
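To make the first question concrete, here's a minimal sketch of the
ceph.conf I have in mind - hostnames, addresses, and paths are made up,
and I may well be missing required options:

  [global]
          auth supported = cephx

  [mon]
          mon data = /var/lib/ceph/mon.$id
  [mon.0]
          host = node0
          mon addr = 10.0.0.10:6789
  ; ...one [mon.N] section per node, though I gather an odd number
  ; of monitors (e.g. 3 of the 4) is preferred for quorum

  [mds.0]
          host = node0

  [osd]
          osd data = /var/lib/ceph/osd.$id
          osd journal = /var/lib/ceph/osd.$id/journal
          ; each OSD backed by an XFS filesystem on an LVM volume
  [osd.0]
          host = node0
  ; ...and likewise [osd.N] and [mds.N] for the other nodes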
Comments and suggestions please!
Thank you very much,
Miles Fidelman
--
In theory, there is no difference between theory and practice.
In practice, there is. .... Yogi Berra