Re: objects misplaced jumps up at 5%

Does changing `target_max_misplaced_ratio` result in pgp_num being
increased by a larger step in each cycle of the remapping? Would this
result in fewer copies of data being made, or just more PGs being
processed in each batch during a change of PG numbers?

What is a safe value to raise `target_max_misplaced_ratio` to, given
that the default is 0.05 or 5% misplaced objects?
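
(For reference, if I'm reading the docs correctly, checking and raising
it would look something like this, where 0.07 is only an example value,
not a recommendation:

**********************************
ceph config get mgr target_max_misplaced_ratio
ceph config set mgr target_max_misplaced_ratio 0.07
**********************************

Happy to be corrected if that's not the right knob to turn.)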

-Matt

On Wed, Sep 30, 2020 at 3:20 AM Jake Grimmett <jog@xxxxxxxxxxxxxxxxx> wrote:
>
> Dear All,
>
> great advice - thank you all so much.
>
> I've changed pgp_num to 8192 (it had already risen to 11857) and will
> see how this works. The target_max_misplaced_ratio looks like a useful
> control.
>
> It's a shame the ceph pg calc page <https://ceph.io/pgcalc/> doesn't
> have more advice for people using erasure-coded pools...
>
> We are planning to go from 550 OSDs to 700 OSDs soon, and eventually to
> 900 OSDs.
>
> What is the ideal PG count for 900 OSDs on an EC 8+2 pool that will
> probably reach 80% full?
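>
> (My own back-of-the-envelope figure, using the common rule of thumb of
> roughly 100 PGs per OSD divided by the pool size (k+m = 10 here), would
> be 900 * 100 / 10 = 9000, rounded to the nearest power of two, i.e.
> 8192, but I'd welcome corrections to that arithmetic.)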
>
> best regards,
>
> Jake
>
> On 30/09/2020 04:50, 胡 玮文 wrote:
> > Hi,
> >
> > I’ve just read a post that describe the exact behavior you
> > describe. https://ceph.io/rados/new-in-nautilus-pg-merging-and-autotuning/
> >
> > There is a config option named /target_max_misplaced_ratio/, which
> > defaults to 5%. You can increase it to speed up the remapping process.
> >
> > Hope that's helpful.
> >
> > Sent from my iPad
> >
> >> On Sep 29, 2020, at 18:34, Jake Grimmett <jog@xxxxxxxxxxxxxxxxx> wrote:
> >>
> >> Hi Paul,
> >>
> >> I think you found the answer!
> >>
> >> When adding 100 new OSDs to the cluster, I increased both pg_num and
> >> pgp_num from 4096 to 16384:
> >>
> >> **********************************
> >> [root@ceph1 ~]# ceph osd pool set ec82pool pg_num 16384
> >> set pool 5 pg_num to 16384
> >>
> >> [root@ceph1 ~]# ceph osd pool set ec82pool pgp_num 16384
> >> set pool 5 pgp_num to 16384
> >>
> >> **********************************
> >>
> >> The pg_num increased immediately, as seen with "ceph -s".
> >>
> >> But, unknown to me, the pgp_num did not increase immediately.
> >>
> >> "ceph osd pool ls detail" shows that pgp_num is currently 11412.
> >>
> >> Each time we hit 5.000% misplaced, the pgp_num increases by 1 or 2,
> >> which causes the % misplaced to rise again to ~5.1%.
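> >>
> >> (If I understand correctly, the per-pool value can also be checked
> >> directly with "ceph osd pool get ec82pool pgp_num", rather than
> >> scanning the full "ceph osd pool ls detail" output.)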
> --
> Dr Jake Grimmett
> Head Of Scientific Computing
> MRC Laboratory of Molecular Biology
> Francis Crick Avenue,
> Cambridge CB2 0QH, UK.
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx



-- 
Matt Larson, PhD
Madison, WI  53705 U.S.A.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



