On Wed, 27 Jun 2012, Brian Candler wrote:

> I've made a test setup like this, but unfortunately I haven't yet been able
> to get half-decent performance out of glusterfs 3.3 as a KVM backend. It
> may work better if you use local disk for the VM images, and within the VM
> mount the glusterfs volume for application data.

What is considered half-decent? I have an 8-node distribute+replicate setup
and I am getting about 65 MB/s and about 1.5K IOPS. Considering that I am
only using a single two-disk SAS stripe in each host, I think that is not
bad.

> Alternatively, look at something like ganeti (which by default runs on top
> of drbd+LVM, although you can also use it to manage a cluster which uses a
> shared file store backend like gluster)
>
> Maybe 3.3.1 will be better. But today, your investment in SSDs is quite
> likely to be wasted :-(
>
>> The idea is to have HA if either one KVM hypervisor or one Glusterfs
>> server stops working (failure, maintenance, etc).
>
> You'd also need some mechanism for starting each VM on node B if node A
> fails. You can probably script that, although there are lots of hazards for
> the unwary. Maybe better to have the failover done manually.

Also check out oVirt; it integrates with Gluster and provides HA. (If you do
go the scripted route, a rough sketch is below, after my sig.)

>> 2. We still didn't decide what physical network to choose between FC, FCoE
>> and Infiniband.
>
> Have you ruled out 10G Ethernet? If so, why?

I agree; we went all 10GBase-T.

> (note: using SFP+ ports, either with fibre SFP+ modules or SFP+ coax
> cables, gives much better latency than 10G over RJ45/CAT6)

Actually, with newer switches like Arista's this is less of an issue.

>> 3. Would it be better to split the Glusterfs namespace into two gluster
>> volumes (one for each hypervisor), each running on a Glusterfs server
>> (for the normal case where all servers are running)?
>
> I don't see how that would help - I expect you would mount both volumes on
> both KVM nodes anyway, to allow you to do live migration.

Yep.

><>
Nathan Stratton
nathan at robotics.net
http://www.robotics.net
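
P.S. Since Brian mentioned scripting the VM failover, here is a minimal
sketch of what that could look like. Everything in it (hostnames, the
libvirt URI, guest names, thresholds) is made up for illustration, and it
deliberately punts on fencing, which is exactly where the "hazards for the
unwary" live. Do not run anything like this for real without STONITH or
equivalent.

    #!/usr/bin/env python
    # Illustrative failover sketch, NOT production-ready. Assumes both
    # nodes carry the same guest definitions and the disk images live on
    # the shared Gluster volume, so either node can run the guests.
    import subprocess
    import time

    PRIMARY = "kvm-a.example.com"                       # example node A
    BACKUP_URI = "qemu+ssh://kvm-b.example.com/system"  # example node B URI
    VMS = ["vm1", "vm2"]                                # example guest names
    CHECK_INTERVAL = 10             # seconds between health checks
    FAILURES_BEFORE_FAILOVER = 3    # require consecutive failures

    def node_alive(host):
        """Crude liveness check: a single ICMP ping. A real setup should
        combine several signals (ping, libvirt connection, storage
        heartbeat) before declaring a node dead."""
        return subprocess.call(["ping", "-c", "1", "-W", "2", host],
                               stdout=subprocess.DEVNULL,
                               stderr=subprocess.DEVNULL) == 0

    def start_on_backup(vm):
        """Start a guest on node B via virsh against its libvirt URI."""
        subprocess.call(["virsh", "-c", BACKUP_URI, "start", vm])

    def main():
        failures = 0
        while True:
            if node_alive(PRIMARY):
                failures = 0
            else:
                failures += 1
                if failures >= FAILURES_BEFORE_FAILOVER:
                    # WARNING: without fencing, node A may merely be
                    # unreachable while still running the guests; starting
                    # them a second time will corrupt the shared images.
                    for vm in VMS:
                        start_on_backup(vm)
                    break
            time.sleep(CHECK_INTERVAL)

    if __name__ == "__main__":
        main()

This is also roughly the part oVirt (or ganeti, or Pacemaker) does for you,
with proper fencing, which is why the manual or integrated options Brian
mentioned are usually the safer choice.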