Re: Failover with 2 nodes

This also sounds like a possible GlusterFS use case.

Regards,
-Jamie

On Tue, Jun 15, 2021 at 12:30 PM Burkhard Linke <
Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:

> Hi,
>
> On 15.06.21 16:15, Christoph Brüning wrote:
> > Hi,
> >
> > That's right!
> >
> > We're currently evaluating a similar setup with two identical HW nodes
> > (on two different sites), with OSD, MON and MDS each, and both nodes
> > have CephFS mounted.
> >
> > The goal is to build a minimal self-contained shared filesystem that
> > remains online during planned updates and can somehow survive should
> > disaster strike at one of the two sites.
>
>
> This sounds like a use case for DRBD, perhaps with OCFS2 on top as a
> clustered filesystem. Ceph is overkill here, and not really suited to
> two-host setups.
>
>
> Regards,
>
> Burkhard
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
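For anyone following along: a minimal dual-primary DRBD resource for an
OCFS2 setup like Burkhard describes might be sketched roughly as below.
This is an untested sketch; the resource name, hostnames, backing disks,
and addresses are all placeholders you would replace with your own.

```
resource r0 {
    device    /dev/drbd0;
    disk      /dev/sdb1;          # backing block device on each node
    meta-disk internal;
    net {
        protocol C;               # synchronous replication across sites
        allow-two-primaries yes;  # both nodes primary, required for OCFS2
    }
    on nodeA {
        address 10.0.0.1:7789;
    }
    on nodeB {
        address 10.0.0.2:7789;
    }
}
```

With both nodes promoted (`drbdadm primary r0` on each), OCFS2 can be
formatted on /dev/drbd0 and mounted on both hosts. In practice you would
also want a cluster manager such as Pacemaker for fencing, since
dual-primary DRBD without fencing risks split-brain.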


-- 
Jamie Fargen
Senior Consultant
jfargen@xxxxxxxxxx
813-817-4430



