Re: Stretch cluster experiences in production?

On Fri, Oct 15, 2021 at 8:22 AM Matthew Vernon <mvernon@xxxxxxxxxxxxx> wrote:
>
> Hi,
>
> Stretch clusters[0] are new in Pacific; does anyone have experience of
> using one in production?
>
> I ask because I'm thinking about a new RGW cluster (split across two
> main DCs), which I would naturally be doing using RGW multi-site
> between two clusters.
>
> But it strikes me that a stretch cluster might be simpler (multi-site
> RGW isn't entirely straightforward, e.g. around resharding), and 2
> copies per site is quite a bit less storage than 3 per site. But I'm
> not sure if this new feature is considered production-deployment-ready.
>
> Also, if I'm using RGWs, will they do the right thing location-wise?
> i.e. DC A RGWs will talk to DC A OSDs wherever possible?

Stretch clusters are entirely a feature of the RADOS layer at this
point; setting up RGW/RBD/CephFS to use them efficiently is left as an
exercise to the user. Sorry. :/

That said, I don't think it's too complicated — you want your CRUSH
rule to specify a single site as the primary and to run your active
RGWs on that side, or else to configure read-from-replica and local
reads if your workloads support them. But so far the expectation is
definitely that anybody deploying this will have their own
orchestration systems around it (you can't really do HA from just the
storage layer), whether it's home-brewed or Rook in Kubernetes, so we
haven't discussed pushing it out more within Ceph itself.
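For illustration, the stretch mode docs [0] show a CRUSH rule along
these lines (a sketch only; `site1`/`site2` are placeholder datacenter
bucket names, and the rule assumes 4 replicas, 2 per site):

```
rule stretch_rule {
        id 1
        type replicated
        # Pick 2 OSDs on distinct hosts in the first site...
        step take site1
        step chooseleaf firstn 2 type host
        step emit
        # ...then 2 OSDs on distinct hosts in the second site.
        step take site2
        step chooseleaf firstn 2 type host
        step emit
}
```

Because the rule takes `site1` first, the first OSD in each PG's acting
set (and hence, absent primary-affinity overrides, the primary) lands
in `site1` -- so if `site1` is DC A and the active RGWs run in DC A,
reads and the client-facing side of writes stay local to that site.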
-Greg

>
> Thanks,
>
> Matthew
>
> [0] https://docs.ceph.com/en/latest/rados/operations/stretch-mode/
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>

