I spoke with the CloudStack guys on IRC yesterday, and the only risk is when libvirtd starts. Ceph is supported only via libvirt, and CloudStack can only pass one monitor to libvirt, even though libvirt can accept more. libvirt uses that one monitor when it starts up, but after that it learns all the monitors from that initial one, just as you say. If you have to restart libvirtd while that one monitor is down, that's a problem. But with RR DNS, simply restarting libvirtd again would probably fix it, since the next lookup can land on a different monitor.
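For anyone following along, the disk definition CloudStack hands to libvirt looks roughly like this (a sketch; the pool/image name, hostname, and secret UUID are made up). Note the single <host> element:

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <auth username='cloudstack'>
      <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
    </auth>
    <source protocol='rbd' name='cloudstack/vm-disk-1'>
      <!-- only one monitor gets passed in; librados learns the rest
           from the monmap after the first successful connection -->
      <host name='mon.ceph.example.com' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>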
On Feb 25, 2017 6:56 AM, "Wido den Hollander" <wido@xxxxxxxx> wrote:
> On 24 February 2017 at 19:48, Adam Carheden <adam.carheden@xxxxxxxxx> wrote:
>
> From the docs for each project:
>
> "When a primary storage outage occurs the hypervisor immediately stops
> all VMs stored on that storage
> device"http://docs.cloudstack.apache.org/projects/ cloudstack-administration/en/ 4.8/reliability.html
>
> "CloudStack will only bind to one monitor (You can however create a
> Round Robin DNS record over multiple
> monitors)"http://docs.ceph.com/docs/master/rbd/rbd- cloudstack/
>
> Doesn't this mean that if the Ceph monitor CloudStack chooses to bind to
> goes down, all your VMs stop? If so, that seems pretty risky.
>
No, it doesn't. librados will fail over to another Monitor. CloudStack doesn't perform the DNS lookup; this is done by librados on the hypervisor.
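As a sketch (hostnames and addresses made up): the round-robin record is just one name with an A record per monitor, and the hypervisor's ceph.conf points at that single name:

  ; DNS zone sketch: one name, one A record per monitor
  mon.ceph.example.com.  300  IN  A  10.0.0.11
  mon.ceph.example.com.  300  IN  A  10.0.0.12
  mon.ceph.example.com.  300  IN  A  10.0.0.13

  # /etc/ceph/ceph.conf on the hypervisor
  [global]
  mon_host = mon.ceph.example.com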
> RRDNS is for poor man's load balancing, not HA. I guess it depends on when
> CloudStack does the DNS lookup and whether there's some minimum unavailable
> delay before it flags primary storage as offline, but it seems like
> substituting RRDNS for whatever Ceph's internal "find an available monitor"
> algorithm is would be a bad idea.
It will receive all the Monitor addresses from that DNS lookup and connect to one of them. As soon as it does, it will obtain the monmap and know the full topology.
Fully redundant and failover-proof.
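You can verify that from the hypervisor with a few lines of python-rados (a sketch; it assumes a reachable cluster and a valid admin keyring, and the names are made up):

  import json
  import rados

  # Connect using only the round-robin DNS name; librados resolves it and
  # tries the monitors behind it until one answers.
  cluster = rados.Rados(rados_id='admin',
                        conf={'mon_host': 'mon.ceph.example.com',
                              'keyring': '/etc/ceph/ceph.client.admin.keyring'})
  cluster.connect()

  # After the first successful connection the client holds the full monmap:
  # every monitor in the cluster, not just the one it reached first.
  ret, outbuf, outs = cluster.mon_command(
      json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
  print([m['name'] for m in json.loads(outbuf)['mons']])
  cluster.shutdown()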
Wido
>
> --
> Adam Carheden
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com