On point 2, I've been running an older version of Gluster for some months
on several pairs of systems which also serve other functions. In one case
they're running multiple VMs (each replicated by its own DRBD mirror) while
also providing replicated Gluster storage used (via NFS) by other systems.
I'm sure this would be insane if it all added up to a heavy load in any
respect, but these are all lightly-used services, and there have been no
complaints. DRBD and Gluster replication share a dedicated cable between
the systems, so they compete with each other, but at least not with the
rest of the LAN. Experimenting with VMs stored on older Gluster versions
wasn't encouraging, hence the DRBD. I look forward to 3.3's release. (A
rough sketch of this layout appears below the quoted message.)

As for running VMs on the same servers hosting Gluster (and other)
storage - it works fine for me. YMMV. It's going to depend on how hard
the VMs are working, and how heavily they hit the storage.

Whit

On Tue, Nov 01, 2011 at 08:22:23PM -0500, Gerald Brandt wrote:
> Hi,
>
> I can answer point 1. GlusterFS 3.3 (still in beta) does finer-grained
> locking during self-heal, which is what VM images like.
>
> Gerald
>
> ----- Original Message -----
> > From: "Miles Fidelman" <mfidelman at meetinghouse.net>
> >
> > 2. It looks like the standard Gluster configuration separates storage
> > bricks from client (compute) nodes. Is it feasible to run virtual
> > machines on the same servers that are hosting storage? (I'm working
> > with 4 multi-core servers, each with 4 large drives attached - I'm
> > not really in a position to split things up.)
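
In case it helps anyone picture the setup, here's a rough sketch of the
Gluster half. Hostnames, brick paths, and the volume name are made up
(node1/node2, /export/brick1, vmstore); substitute your own. If those
hostnames resolve to the dedicated link's addresses, the inter-node
replication traffic stays off the LAN:

    # on either node, after peering the two boxes
    gluster volume create vmstore replica 2 \
        node1:/export/brick1 node2:/export/brick1
    gluster volume start vmstore

    # other systems mount it via Gluster's built-in NFS server
    # (which speaks NFSv3 over TCP only)
    mount -t nfs -o vers=3,proto=tcp node1:/vmstore /mnt/vmstore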
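
The DRBD half is one resource per VM disk, pinned to the same dedicated
link. Roughly, from memory of the 8.3-era syntax, again with invented
names, devices, and addresses (10.0.0.x standing in for the crossover
cable):

    resource vm1 {
        protocol C;                 # synchronous replication
        device    /dev/drbd1;
        disk      /dev/vg0/vm1;     # backing LV for this VM's image
        meta-disk internal;
        on node1 { address 10.0.0.1:7789; }
        on node2 { address 10.0.0.2:7789; }
    }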