about HA infrastructure for hypervisors

On Wed, Jun 27, 2012 at 10:06:30AM +0200, Nicolas Sebrecht wrote:
> We are going to try glusterfs for our new HA servers.
> 
> To get full HA, I'm thinking of building it this way:
> 
>   +----------------+                  +----------------+
>   |                |                  |                |
>   | KVM hypervisor |-----+    +-------| KVM hypervisor |
>   |                |     |    |       |                |
>   +----------------+     |    |       +----------------+
>                          |    |
>                         +------+
>                         |switch|
>                         +------+
>                          |    |
>   +---------------+      |    |        +---------------+
>   |               |      |    |        |               |
>   | Glusterfs 3.3 |------+    +--------| Glusterfs 3.3 |
>   |   server A    |                    |   server B    |
>   |               |                    |               |
>   +---------------+                    +---------------+

I've made a test setup like this, but unfortunately I haven't yet been able
to get half-decent performance out of glusterfs 3.3 as a KVM backend.  It
may work better if you use local disk for the VM images, and within the VM
mount the glusterfs volume for application data.
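
For what it's worth, that split looks something like this inside the guest (a
minimal sketch; the volume name "appdata" and the server names are invented,
and the backupvolfile-server option is from memory, so check that your
glusterfs version supports it):

  # inside the VM: local disk holds the image, gluster holds app data
  mount -t glusterfs serverA:/appdata /mnt/appdata

  # or via /etc/fstab, naming the second server so the mount still
  # works when serverA is down:
  serverA:/appdata  /mnt/appdata  glusterfs  defaults,backupvolfile-server=serverB  0 0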

Alternatively, look at something like ganeti, which by default runs on top of
drbd+LVM, although you can also use it to manage a cluster that uses a shared
file store backend like gluster.
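
For example, a DRBD-mirrored instance spanning two nodes is roughly this (a
sketch from memory with invented node/instance names; check gnt-instance(8)
for the exact flags in your version):

  # node1 primary, node2 secondary; DRBD mirrors the disk between them
  gnt-instance add -t drbd -n node1:node2 \
      -o debootstrap+default --disk 0:size=20G vm1.example.com

  # planned failover/maintenance: move it to the secondary
  gnt-instance migrate vm1.example.com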

Maybe 3.3.1 will be better. But today, your investment in SSDs is quite
likely to be wasted :-(

> The idea is to have HA if either one KVM hypervisor or one Glusterfs
> server stop working (failure, maintenance, etc).

You'd also need some mechanism for starting each VM on node B if node A
fails.  You can probably script that, although there are lots of hazards for
the unwary; it may be safer to do the failover manually.
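
The naive version is only a few lines, which is exactly the trap (a sketch
only, with invented host/VM names, and note it does no fencing at all):

  #!/bin/sh
  # run from cron on node B: if node A stops answering pings,
  # start its guests here.
  #
  # Hazard: if node A is merely unreachable rather than dead, both
  # copies of each VM end up running against the same image and
  # corrupt it -- you want STONITH/fencing before trusting this.
  if ! ping -c 3 -W 2 nodeA >/dev/null 2>&1; then
      for vm in vm1 vm2; do
          virsh start "$vm"
      done
  fi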

> 2. We still didn't decide what physical network to choose between FC, FCoE
> and Infiniband.

Have you ruled out 10G ethernet? If so, why?

(Note: using SFP+ ports, either with fibre SFP+ modules or SFP+ direct-attach
coax cables, gives much better latency than 10G over RJ45/CAT6.)

> 3. Would it be better to split the Glusterfs namespace into two gluster
> volumes (one for each hypervisor), each running on a Glusterfs server
> (for the normal case where all servers are running)?

I don't see how that would help - I expect you would mount both volumes on
both KVM nodes anyway, to allow you to do live migration.
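
i.e. something like this on both hypervisors (illustrative volume names and
paths only):

  # both nodes mount both volumes, so either node can host either
  # set of guests and live migration has the images in place
  mount -t glusterfs serverA:/vol-hv1 /var/lib/libvirt/images/hv1
  mount -t glusterfs serverA:/vol-hv2 /var/lib/libvirt/images/hv2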

