Hi,

On 16 Dec 2014, at 05:00, Christian Balzer <chibi@xxxxxxx> wrote:
Hello,

On Mon, 15 Dec 2014 09:23:23 +0100 Josef Johansson wrote:

Hi Christian,
We're using Proxmox, which has support for HA; they do it per VM. We're doing failover manually right now though, because we like it :).
When I looked at it I couldn't see a way of restricting HA to just a set of hosts (i.e. leaving out the storage nodes), but that's probably easy to solve. Something like the sketch below might do it.
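An untested sketch of a restricted failover domain in /etc/pve/cluster.conf, against the rgmanager-based HA in Proxmox 3.x (node names are made up, and I'm not sure the pvevm element accepts a domain attribute the way regular rgmanager services do):

    <rm>
      <failoverdomains>
        <!-- restricted="1": member services may only run on the listed nodes -->
        <failoverdomain name="kvmonly" restricted="1" ordered="0">
          <failoverdomainnode name="kvm1"/>
          <failoverdomainnode name="kvm2"/>
        </failoverdomain>
      </failoverdomains>
      <!-- HA-managed VM 100, pinned to the kvmonly domain -->
      <pvevm autostart="1" vmid="100" domain="kvmonly"/>
    </rm>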
Ah, Proxmox. I test drove this about a year ago and while it has some nice features the "black box" approach of taking over bare metal hardware and the ancient kernel doesn't mesh with other needs I have here.
The ancient kernel is not needed if you're running just KVM. They are working on a 3.10 kernel, if I'm correct. As it's Debian 7 underneath, just put in a backported kernel and you're good to go. 3.14 was bad, but 3.15 should be OK. And it has Ceph support nowadays :)
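For reference, getting a backported kernel onto stock Debian 7 is just (the exact kernel version carried in wheezy-backports changes over time):

    # enable wheezy-backports and pull in the newer kernel
    echo "deb http://http.debian.net/debian wheezy-backports main" \
        > /etc/apt/sources.list.d/backports.list
    apt-get update
    apt-get -t wheezy-backports install linux-image-amd64
    reboot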
Cheers,
Josef

Thanks for reminding me, though.

No problemo :)

Christian

Cheers,
Josef
On 15 Dec 2014, at 04:10, Christian Balzer <chibi@xxxxxxx> wrote:
Hello,
What are people here using to provide HA KVMs (and with that I mean automatic, fast VM failover in case of host node failure) with RBD images?
OpenStack and Ganeti have decent Ceph/RBD support, but no HA (plans aplenty, though).
I have plenty of experience with Pacemaker (DRBD backed), but there is only an unofficial RBD resource agent for it, which also only supports kernel-based RBD. And while Pacemaker works great, it scales like leaden porcupines; things degrade rapidly after 20 or so instances.
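To illustrate the scale problem: every VM ends up as a pair of resources in the CIB, roughly like this in crm shell (assuming the agent installs as ocf:ceph:rbd; resource names are hypothetical and the agent's parameter names are from memory, so treat this as a sketch):

    # map the kernel RBD device, then run the VM on top of it
    primitive p_rbd_vm1 ocf:ceph:rbd \
            params pool="rbd" name="vm1-disk" \
            op monitor interval="20s" timeout="30s"
    primitive p_vm1 ocf:heartbeat:VirtualDomain \
            params config="/etc/libvirt/qemu/vm1.xml" \
            op monitor interval="30s" timeout="60s"
    # group keeps them colocated and ordered: map the image before starting the VM
    group g_vm1 p_rbd_vm1 p_vm1

Multiply that by dozens of VMs and cluster transitions get slow.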
So what are other people here using to keep their KVM-based VMs up and running all the time?
Regards,
Christian -- Christian Balzer Network/Systems Engineer chibi@xxxxxxx Global OnLine Japan/Fusion Communications http://www.gol.com/ _______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Christian Balzer        Network/Systems Engineer
chibi@xxxxxxx           Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com