Hi,

I have a small cluster that I use to host a collection of Xen virtual machines. I just expanded from 2 nodes to 4 nodes and am looking for some advice re. configuring a storage subsystem.

The current (2-node) configuration is simple:
- 4 disks per node
- md-based RAID
- LVM
- DRBD to replicate (some) volumes between the two nodes
- (some) VMs set up for auto-failover on node failure (Pacemaker, etc.)

In moving to 4 nodes, I'd like the flexibility to move VMs across all 4 nodes, but... that requires using something other than DRBD to replicate volumes. I'm thinking of something with the following characteristics:
- 4-node storage cluster (4 drives per node, 16 drives total in the storage pool)
- 4-node VM cluster
- using the SAME 4 nodes for both
- note: I've got 4 gigE ports to play with on each box (I plan on using 2 for outside access, 2 for storage/heartbeat networking)

GlusterFS stands out as the package that seems most capable of supporting this (if we were using KVM, I'd probably look at Sheepdog as well).

So... a few questions:
- It looks like running replicated volumes across 4 nodes will provide redundancy and support migration/failover. Am I right about this, or should I be looking at running RAID on the individual nodes as well?
- What kind of performance hit is involved in replicated volumes?
- Is there anything more efficient in disk use? Mirroring 4 copies eats up a lot of disk; is there anything equivalent to RAID 5/6 that is a little more efficient while maintaining redundancy?
- Am I missing anything (either re. GlusterFS or other alternatives)?

Thanks very much for any suggestions and advice.

Miles Fidelman

--
In theory, there is no difference between theory and practice.
In practice, there is. .... Yogi Berra
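
P.S. To make the GlusterFS idea concrete, here's roughly the sort of thing I imagine running, based on the GlusterFS docs. This is just a sketch, untested, and the hostnames (node1..node4) and brick paths are made up:

    # form the trusted pool, run once from any one node
    # (node2..node4 are hypothetical hostnames)
    gluster peer probe node2
    gluster peer probe node3
    gluster peer probe node4

    # distributed-replicated volume: with "replica 2", bricks are paired
    # in the order listed (node1/node2 mirror each other, node3/node4
    # mirror each other) and files are distributed across the two pairs;
    # "replica 4" would instead keep a full copy on every node
    gluster volume create vmstore replica 2 transport tcp \
        node1:/export/brick1 node2:/export/brick1 \
        node3:/export/brick1 node4:/export/brick1
    gluster volume start vmstore

    # each node mounts the volume and Xen disk images live on it
    mount -t glusterfs localhost:/vmstore /srv/vmstore

That's the picture behind my questions above about redundancy, performance, and disk efficiency.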