Re: Clarification of documentation

Greg,

My name's Zac and I'm the docs guy for the Ceph Foundation. I have a
long-term plan to create a document that collects error codes and failure
cases, but I am only one man, and it will be a few months before I can begin
work on it.

Zac Dover
Ceph Docs Guy

On Wed, May 20, 2020 at 4:32 AM Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:

> On Tue, May 19, 2020 at 10:34 AM Benjeman Meekhof <bmeekhof@xxxxxxxxx>
> wrote:
> >
> > It is possible to run a Ceph cluster over a WAN if you have a reliable
> > enough WAN with sites close enough for low-ish latency.  The OSiRIS
> > project is architected that way with Ceph services spread evenly
> > across three university sites in Michigan.  There's more information
> > and contact on their website: http://www.osris.org
> >
> > We had a variety of interesting WAN outages which Ceph has always
> > handled well in terms of not losing data or our cluster definitions.
> > Outages were at times further complicated by inconsistent pathing for
> > cluster and backend networks such that only one or the other might be
> > up to some sites.  In all that, with 3 mons situated 1 per site, we
> > never encountered any kind of split-brain situation.
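
A rough sketch of what that one-copy-per-site layout looks like in CRUSH
terms, assuming buckets of type "datacenter" for each site; the rule and
pool names below are placeholders, not the actual OSiRIS configuration:

    # replicated rule that places each replica under a different datacenter bucket
    ceph osd crush rule create-replicated one-per-site default datacenter
    # point a pool at the rule and keep one copy at each of the three sites
    ceph osd pool set mypool crush_rule one-per-site
    ceph osd pool set mypool size 3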
>
> Oooh, are those stories written down anywhere? I'm working on
> explicitly supporting 2-site stretch clusters right now (with a
> "tiebreaker monitor" in a third site, and the main thing is handling
> those networking issues), but I imagine we'll extend it to do 3 sites
> in the future. If I have some real-world failure experiences to
> validate against, that'd be good.
> -Greg
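
As a rough sketch, the two-site-plus-tiebreaker layout above ends up being
configured along these lines with the stretch mode that later shipped; the
monitor names and site labels are placeholders, and the CRUSH rule
"stretch_rule" is assumed to have been created beforehand:

    # tag each monitor with its site (mon IDs and site labels are placeholders)
    ceph mon set_location a datacenter=site1
    ceph mon set_location b datacenter=site1
    ceph mon set_location c datacenter=site2
    ceph mon set_location d datacenter=site2
    ceph mon set_location e datacenter=site3   # the tiebreaker monitor

    # switch to the connectivity-based monitor election strategy
    ceph mon set election_strategy connectivity

    # enter stretch mode, naming the tiebreaker monitor, an existing CRUSH
    # rule that keeps replicas in both main sites, and the dividing bucket type
    ceph mon enable_stretch_mode e stretch_rule datacenter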
>
> >
> > Though I'm no longer involved, the project is still ongoing and I'm
> > sure if you want to reach out they (or I personally) would be happy to
> > answer any questions.
> >
> > thanks,
> > Ben
> >
> > On Tue, May 19, 2020 at 1:03 PM Nathan Fish <lordcirth@xxxxxxxxx> wrote:
> > >
> > > It is my understanding that it refers to running a single, normal Ceph
> > > cluster with its component hosts connected over WAN. This would
> > > require OSDs to connect to other OSDs and mons over WAN for nearly
> > > every operation, and is not likely to perform acceptably.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


