Re: Client Location

Hi there,

The basic setup I'm trying to get is a backend for a hypervisor cluster, so that auto-failover and live migration work. The main thing is that we have a number of datacenters with a gigabit interconnect that is not always 100% reliable. In the event of a failure we want all the virtual machines to fail over to the remaining datacenters, so we need all the data in each location.
The other issue is that within each datacenter we can use link aggregation to increase the bandwidth between the hypervisors and the Ceph cluster, but between the datacenters we only have the gigabit link, so it becomes essential to have the hypervisors looking at the storage in their own datacenter.
Another consideration is that the virtual machines might get migrated between datacenters without any failure, and the main problem I see with what Mark suggests is that in that case the migrated VM would still be connecting to the OSDs in the remote datacenter.
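
For what it's worth, here is a rough sketch of the kind of per-datacenter rule Mark describes, in CRUSH map syntax. The bucket names (dc1, dc2), the ruleset number, and the size limits are placeholder assumptions for illustration and would need to match your actual CRUSH map:

    # Rule for the pool used by the hypervisors in dc1:
    # the first replica (the primary) lands on a host in dc1,
    # the remaining replicas land on hosts in dc2.
    rule dc1_local_primary {
            ruleset 3
            type replicated
            min_size 2
            max_size 3
            step take dc1
            step chooseleaf firstn 1 type host
            step emit
            step take dc2
            step chooseleaf firstn -1 type host
            step emit
    }

A mirror-image rule would handle dc2, and each datacenter's pool would then be pointed at its rule, if I have the command right, with something like "ceph osd pool set dc1-pool crush_ruleset 3". Hypervisors in dc1 would then always talk to a local primary for that pool, which is the locality preference Mark describes.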

To be honest, I'm fairly new to Ceph and I know I'm asking for everything and the kitchen sink! Any thoughts would be very helpful though.

Thanks
James

----- Original Message -----
From: "Gregory Farnum" <greg@xxxxxxxxxxx>
To: "Mark Kampe" <mark.kampe@xxxxxxxxxxx>
Cc: "James Horner" <james.horner@xxxxxxxxxxxxxxx>, ceph-devel@xxxxxxxxxxxxxxx
Sent: Tuesday, October 9, 2012 5:48:37 PM
Subject: Re: Client Location

On Tue, Oct 9, 2012 at 9:43 AM, Mark Kampe <mark.kampe@xxxxxxxxxxx> wrote:
> I'm not a real engineer, so please forgive me if I misunderstand,
> but can't you create a separate rule for each data center (choosing
> first a local copy, and then remote copies), which should ensure
> that the primary is always local.  Each data center would then
> use a different pool, associated with the appropriate location-
> sensitive rule.
>
> Does this approach get you the desired locality preference?

This sounds right to me — I think maybe there's a misunderstanding
about how CRUSH works. What precisely are you after, James?
-Greg