Re: Cluster crashing when stopping some host

I'm using Host as Failure Domain.
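
In case it is useful to anyone following the thread, this is roughly how the
failure domain can be verified (the rule and pool names below are just
placeholders):

  # dump the CRUSH rule and check the "type" in its chooseleaf step;
  # "host" means replicas are placed on different hosts
  ceph osd crush rule dump replicated_rule

  # confirm which rule a given pool actually uses
  ceph osd pool get mypool crush_rule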

On Thu, Oct 13, 2022 at 11:41, Eugen Block <eblock@xxxxxx> wrote:

> What is your failure domain? If it's osd, you could end up with both
> copies of a PG on the same host, and then no replica is available.
>
> Quoting Murilo Morais <murilo@xxxxxxxxxxxxxx>:
>
> > Eugen, thanks for responding.
> >
> > In the current scenario there is no way to insert disks into dcs3.
> >
> > My pools are size 2; at the moment we can't add more machines with disks,
> > so the cluster was sized accordingly.
> >
> > Even with min_size=1, if dcs2 stops, IO also stops.
> >
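(Side note: with dcs2 down, something like the commands below should show
whether PGs go inactive or only undersized; "mypool" is a placeholder.)

  ceph health detail
  ceph pg dump pgs_brief | grep -v 'active+clean'
  ceph osd pool get mypool min_size
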
> > On Thu, Oct 13, 2022 at 11:19, Eugen Block <eblock@xxxxxx> wrote:
> >
> >> Hi,
> >>
> >> if your pools have size 2 (don't do that except in test
> >> environments) and host is your failure domain, then all IO is paused if
> >> one OSD host goes down, depending on your min_size. Can you move some
> >> disks to dcs3 so you can have size 3 pools with min_size 2?
> >>
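For reference, once a third host with disks is available, the change itself
would be something along these lines ("mypool" is a placeholder):

  # only makes sense after dcs3 has OSDs, otherwise PGs stay undersized
  ceph osd pool set mypool size 3
  ceph osd pool set mypool min_size 2
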
> >> Quoting Murilo Morais <murilo@xxxxxxxxxxxxxx>:
> >>
> >> > Good morning everyone.
> >> >
> >> > I'm having strange behavior on a new cluster.
> >> >
> >> > I have 3 machines, two of which have the disks. We can name them like
> >> > this: dcs1 to dcs3. The dcs1 and dcs2 machines contain the disks.
> >> >
> >> > I started bootstrapping through dcs1, added the other hosts, and left
> >> > mgr on dcs3 only.
> >> >
> >> > What is happening is that if I take down dcs2, everything hangs and
> >> > becomes unresponsive, including the mount points that were pointed at dcs1.
>
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



