Re: Stretch Cluster with rgw and cephfs?

On Thu, Aug 19, 2021 at 3:50 PM Sean Matheny <sean.matheny@xxxxxxxxxxx> wrote:
>
> Hi all,
>
> The docs for setting up a stretch cluster look fairly straightforward (https://docs.ceph.com/en/latest/rados/operations/stretch-mode/), and I can see how this works with RBD pools—i.e. the client only specifies the local monitors when mounting, and the local monitors will only talk to local OSDs (except for replication) as defined in that CRUSH datacenter. So far, so good.
>
> But is it possible / supported / advisable to run RGW or CephFS in stretch mode? I’m not sure what the deployment should look like — e.g. separate RGW and MDS daemons for each datacenter? Are the daemons smart enough (or is configuration required) to ensure they’re only communicating with local monitors (and thus local OSDs)? Or will there inherently be cross-DC traffic (other than replication)?

Nothing forces the MDS or RGW daemons to talk to local monitors, but
that shouldn't in itself be too much of an issue -- their
communication with the monitor cluster is very low-traffic and not
latency-sensitive.

Your bigger concern will be data mapping and latencies. We haven't
constructed anything yet which will, for instance, place CephFS data
in a local DC and have MDSes take local responsibility for it. You can
do this using file layouts (and making a new FS for each datacenter,
if you want them both to be active — so that MDS metadata storage is
local as well) but you need to construct all the pools and set the
layouts yourself, and think through the implications of higher
latency.
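
To make that manual setup a bit more concrete, here's a rough sketch of
the kind of commands involved. The datacenter bucket names (dc1/dc2),
pool/FS names, and the mount path are placeholders, and you'd want to
check how DC-local CRUSH rules interact with the rule that stretch mode
itself applies to pools -- this is an outline, not a tested recipe:

    # CRUSH rules that keep all replicas inside one datacenter bucket
    # (assumes datacenter buckets named dc1 and dc2 exist in the CRUSH map)
    ceph osd crush rule create-replicated dc1-only dc1 host
    ceph osd crush rule create-replicated dc2-only dc2 host

    # Option A: a second, DC-local filesystem so MDS metadata stays local too
    # (some releases need the multiple-FS flag, possibly with --yes-i-really-mean-it)
    ceph fs flag set enable_multiple true
    ceph osd pool create cephfs2-meta 32
    ceph osd pool create cephfs2-data 128
    ceph osd pool set cephfs2-meta crush_rule dc2-only
    ceph osd pool set cephfs2-data crush_rule dc2-only
    ceph fs new cephfs2 cephfs2-meta cephfs2-data

    # Option B: keep one FS, add a DC-local data pool, and pin a directory
    # to it with a file layout
    ceph osd pool create cephfs-dc1-data 128
    ceph osd pool set cephfs-dc1-data crush_rule dc1-only
    ceph fs add_data_pool cephfs cephfs-dc1-data
    setfattr -n ceph.dir.layout.pool -v cephfs-dc1-data /mnt/cephfs/dc1-local

New files created under /mnt/cephfs/dc1-local would then land in the
dc1-local pool, but the metadata (and the active MDS serving it) is still
wherever the metadata pool lives, which is what the latency caveat above
is about.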
-Greg

>
> Hope that made sense. Thanks for any words of wisdom. :)
>
> Sean Matheny
> New Zealand eScience Infrastructure (NeSI)
>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx