Re: Planning cluster

Never, ever use osd pool default min size = 1

This will come back to bite you, and it really does not make sense.

:-)
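
For reference, the safe defaults (these only apply to newly created pools)
would look like this in ceph.conf:

    [global]
    osd pool default size = 3
    osd pool default min size = 2

or via the config database on a running cluster:

    ceph config set global osd_pool_default_min_size 2

Existing pools keep the min_size they were created with, so those have to be
changed per pool (see Dan's reply below).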

On Mon, Jul 10, 2023 at 7:33 PM Dan van der Ster
<dan.vanderster@xxxxxxxxx> wrote:
>
> Hi Jan,
>
> On Sun, Jul 9, 2023 at 11:17 PM Jan Marek <jmarek@xxxxxx> wrote:
>
> > Hello,
> >
> > I have a cluster with this configuration:
> >
> > osd pool default size = 3
> > osd pool default min size = 1
> >
>
> Don't use min_size = 1 during regular, stable operations. Instead, use
> min_size = 2 to ensure data safety, and set the pool to min_size = 1
> manually only in an emergency (e.g. if the 2 copies fail and cannot be
> recovered).
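>
> For example (the pool name here is just a placeholder):
>
>     ceph osd pool set cephfs_data min_size 2
>
> and only temporarily, in an emergency:
>
>     ceph osd pool set cephfs_data min_size 1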
>
>
> > I have 5 monitor nodes and 7 OSD nodes.
> >
>
> 3 monitors are probably enough. Put 2 in the DC that holds 2 replicas, and
> the third in the DC with 1 replica.
>
>
> > I have changed the crush map to divide the ceph cluster into two
> > datacenters - the first one will hold the part of the cluster with 2
> > copies of the data, and the second one will hold the part with one
> > copy - for emergencies only.
> >
> > For now, I still have this cluster in one location.
> >
> > This cluster has 1 PiB of raw capacity, so it would be very expensive
> > to add a further 300 TB of capacity to get 2+2 data redundancy.
> >
> > Will it work?
> >
> > If I turn off the location that holds 1 of the 3 copies, will the cluster still be operational?
>
>
> Yes, the PGs should stay active and accept IO. But the cluster will be
> degraded, and it cannot stay in this state permanently. (You will need to
> recover the 3rd replica or change the crush map.)
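>
> If you end up editing the crush map by hand, the standard round trip is
> (file names are arbitrary):
>
>     ceph osd getcrushmap -o crush.bin
>     crushtool -d crush.bin -o crush.txt
>     # edit crush.txt, then recompile and inject it
>     crushtool -c crush.txt -o crush.new
>     ceph osd setcrushmap -i crush.new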
>
>
>
> > I believe it will, and that is the better option. And what if the
> > location with 2 of the 3 copies dies?
>
>
> With min_size = 2, the PGs will be inactive, but the data will be safe. If
> this happens, set min_size = 1 to activate the PGs.
> The mons will not have quorum though -- you need a plan for that. And also
> plan where you put your MDSs.
>
> -- dan
>
> ______________________________________________________
> Clyso GmbH | Ceph Support and Consulting | https://www.clyso.com
>
>
>
>
> > On this cluster there is a pool with CephFS - this is the main
> > part of the cluster.
> >
> > Many thanks for your advice.
> >
> > Sincerely
> > Jan Marek
> > --
> > Ing. Jan Marek
> > University of South Bohemia
> > Academic Computer Centre
> > Phone: +420389032080
> > http://www.gnu.org/philosophy/no-word-attachments.cs.html
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



