Gluster v3.3 with KVM and High Availability


 



Andrew:

I'm using the FUSE client and running qemu-KVM on the bricks. A replicated Gluster volume gives me "live" migration. I tick the "pause the VM" box; the migration actually converges faster that way. To Jeff's comment, I'm using 3.3 and find the performance good, though I have done no serious performance testing and my VMs are non-demanding. I didn't try VMs on 3.2.6. Use virtio storage and writeback cache settings. I know this isn't the defined sweet spot for Gluster, but it is a really nice scalable setup for a lab.
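For anyone wanting to reproduce the virtio-plus-writeback part, something along these lines should do it with plain qemu-kvm (the mount point, image path, and sizing here are illustrative, not from Jim's actual setup):

```shell
# Boot a guest from an image stored on the FUSE-mounted Gluster volume.
# if=virtio selects the virtio block driver; cache=writeback enables
# host-side writeback caching, as suggested above.
# /mnt/gluster/vms/test-vm.qcow2 is a hypothetical path.
qemu-kvm \
  -m 2048 -smp 2 \
  -drive file=/mnt/gluster/vms/test-vm.qcow2,if=virtio,cache=writeback \
  -net nic,model=virtio -net user
```

With libvirt/virt-manager the same thing is a matter of picking the virtio disk bus and setting the cache mode to writeback in the disk's advanced options.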

As far as fencing goes, I have done nothing. I'm managing things manually, as carefully as I can, using virt-manager. I've already accidentally started the same VM on two bricks. Watch your autostart settings on the VMs :-)
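One small safeguard against the double-start problem is making sure libvirt's autostart is off for the guests; a sketch with virsh (the domain name is hypothetical):

```shell
# Show which defined domains are set to autostart
virsh list --all --autostart

# Disable autostart for a domain so a rebooted brick
# doesn't silently launch a second copy of the guest
virsh autostart --disable test-vm
```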

I'm waiting for oVirt 3.1 later this month to manage the cluster so I don't do this again :-)

Jim

> Jeff,

> Thanks for the response. I did see a couple of threads in the archives from people trying to do what I am proposing, but I am looking for more details on how they glued everything together to make it work. For example: did they use NFS or the native FUSE client? If using NFS, how did they make that highly available? What about clustering tools like corosync: does using it with Gluster have special considerations?
> Those sorts of questions.
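On the NFS-vs-FUSE question, one point worth noting: the native FUSE client only contacts the named server to fetch the volume file, then talks to every brick directly, so it tolerates a brick failure on its own. A hedged sketch of such a mount (server and volume names are illustrative):

```shell
# Mount a replicated volume with the native GlusterFS FUSE client.
# server1 is only used to retrieve the volfile at mount time;
# backupvolfile-server covers the case where server1 is down then.
# After mounting, the client connects to all bricks itself, so no
# external failover (VIP, corosync) is needed for the mount to survive
# a single brick going away. NFS, by contrast, would need a floating IP.
mount -t glusterfs -o backupvolfile-server=server2 \
  server1:/vmstore /mnt/vmstore
```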


> /-\ ndrew

> On Wed, Jul 11, 2012 at 7:35 AM, Jeff White <jaw171 at pitt.edu> wrote:
> > 3.3 brought granular locking, which is very useful with VMs.  There's 
> > been talk on the list about running VMs on Gluster that you can search for.
> >
>>  I tried it in 3.2.6 and gave up, I haven't tried it on 3.3 yet.
>
>> Jeff White - GNU+Linux Systems Engineer
>> University of Pittsburgh - CSSD
>>
>>
>>
>> On 07/10/2012 05:32 PM, Andrew Niemantsverdriet wrote:
>>>
>>> I am looking to build a proof of concept cluster using Gluster as the 
>>>  storage back-end.
>>
>>> I have looked through the mailing list archives and have seen that
>>> many others have done this before, but what I can't find is which
>>> technologies were used to complete the task. Also, there have been
>>> many reports of poor performance when running KVM images on Gluster;
>>> has version 3.3 fixed many of these "problems"?
>>
>>>  Would anyone care to share what they are using for their technology 
>>>  stack and any comments on how it works?
>>> 
>>>  Thanks, 

>  _
> /-\ ndrew Niemantsverdriet
> Linux System Administrator
> Academic Computing
> (406) 238-7360
> Rocky Mountain College
> 1511 Poly Dr.
> Billings MT, 59102


