Re: 6 pgs not deep-scrubbed in time

Setting osd_max_scrubs = 2 for HDD OSDs was a mistake I made. The result was that each PG needed a bit more than twice as long to deep-scrub. Net effect: high scrub load, much less user IO and, last but not least, the "not deep-scrubbed in time" problem got worse, because two concurrent scrubs that each take (2+eps) times as long give an effective time per PG of (2+eps)/2 > 1 times the original.
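
To put purely hypothetical numbers on that: if one deep scrub takes 3 hours with osd_max_scrubs = 1, and with osd_max_scrubs = 2 each of the two concurrent scrubs takes about 6.5 hours, then the OSD finishes 2 PGs per 6.5 hours, i.e. roughly 3.25 hours per PG instead of 3, so the backlog grows rather than shrinks.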

For spinners you have to look at the actually available drive performance, plus a few more things like PG count, data distribution etc.
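
A rough back-of-the-envelope sketch with made-up numbers: an HDD OSD holding 8 TiB of data that has to deep-scrub everything within the default osd_deep_scrub_interval of 7 days needs to read about 8 TiB / 604800 s, roughly 14 MiB/s on average, for scrubbing alone, on top of client IO and recovery. That leaves little headroom for a spinner that also has to seek for user requests.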

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Wesley Dillingham <wes@xxxxxxxxxxxxxxxxx>
Sent: Monday, January 29, 2024 7:14 PM
To: Michel Niyoyita
Cc: Josh Baergen; E Taka; ceph-users
Subject:  Re: 6 pgs not deep-scrubbed in time

Please respond with the output of "ceph versions".

If your sole goal is to eliminate the "not deep-scrubbed in time" errors, you can
increase the aggressiveness of scrubbing by setting:
osd_max_scrubs = 2

The default in Pacific is 1.
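
One way to apply this at runtime (a sketch, assuming the centralized config
database is in use rather than ceph.conf):

  ceph config set osd osd_max_scrubs 2
  ceph config dump | grep osd_max_scrubs    # confirm the override is present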

If you are going to start tinkering manually with the pg_num, you will want
to turn off the PG autoscaler on the pools you are touching (see the sketch
below). Reducing the size of your PGs may make sense and help with scrubbing,
but if the pool has a lot of data it will take a long, long time to finish.
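
A minimal sketch of those two steps, with <poolname> and the new pg_num as
placeholders you substitute for your own pools:

  ceph osd pool set <poolname> pg_autoscale_mode off
  ceph osd pool set <poolname> pg_num <power of two>

Since Nautilus, pgp_num follows pg_num automatically and the splits happen
gradually in the background.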





Respectfully,

*Wes Dillingham*
wes@xxxxxxxxxxxxxxxxx
LinkedIn <http://www.linkedin.com/in/wesleydillingham>


On Mon, Jan 29, 2024 at 10:08 AM Michel Niyoyita <micou12@xxxxxxxxx> wrote:

> I am running Ceph Pacific (version 16) on Ubuntu 20, deployed using
> ceph-ansible.
>
> Michel
>
> On Mon, Jan 29, 2024 at 4:47 PM Josh Baergen <jbaergen@xxxxxxxxxxxxxxxx>
> wrote:
>
> > Make sure you're on a fairly recent version of Ceph before doing this,
> > though.
> >
> > Josh
> >
> > On Mon, Jan 29, 2024 at 5:05 AM Janne Johansson <icepic.dz@xxxxxxxxx>
> > wrote:
> > >
> > > On Mon, 29 Jan 2024 at 12:58, Michel Niyoyita <micou12@xxxxxxxxx> wrote:
> > > >
> > > > Thank you Frank,
> > > >
> > > > All disks are HDDs. Would like to know if I can increase the number
> > > > of PGs live in production without a negative impact on the cluster.
> > > > If yes, which commands to use.
> > >
> > > Yes. "ceph osd pool set <poolname> pg_num <number larger than before>"
> > > where the number usually should be a power of two that leads to a
> > > number of PGs per OSD between 100-200.
> > >
> > > --
> > > May the most significant bit of your life be positive.
> > > _______________________________________________
> > > ceph-users mailing list -- ceph-users@xxxxxxx
> > > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



