Re: Increase the recovery throughput

Hi Monish,

You are probably on the mclock scheduler, which ignores these settings. You might want to set them back to their defaults, change the scheduler to wpq, and then try again if further adjustment is needed. There have been several threads about "broken" recovery op scheduling with mclock in the latest versions.
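
For reference, a rough sketch of what that could look like with the ceph config CLI (this assumes the settings were applied cluster-wide via ceph config; adjust accordingly if you set them in ceph.conf or per daemon). Note that osd_op_queue only takes effect after the OSDs have been restarted:

  # check which scheduler the OSDs are using (mclock_scheduler is the default in Quincy)
  ceph config get osd osd_op_queue
  ceph config show osd.0 osd_op_queue

  # remove the overrides so the defaults apply again
  ceph config rm osd osd_max_backfills
  ceph config rm osd osd_recovery_max_active

  # switch to the wpq scheduler, then restart the OSDs
  ceph config set osd osd_op_queue wpq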

So, back to Eugen's answer: search this list's archive and try the solutions from earlier cases.

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Monish Selvaraj <monish@xxxxxxxxxxxxxxx>
Sent: 12 December 2022 11:32:26
To: Eugen Block
Cc: ceph-users@xxxxxxx
Subject:  Re: Increase the recovery throughput

Hi Eugen,

We tried that already. osd_max_backfills is set to 24 and
osd_recovery_max_active is set to 20.
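
If it helps, this is roughly how the values can be cross-checked against what
a running OSD actually uses (osd.0 below is just a stand-in for any OSD id):

  # value stored in the config database vs. value in effect on a running OSD
  ceph config get osd osd_max_backfills
  ceph config show osd.0 osd_max_backfills
  ceph tell osd.0 config get osd_recovery_max_active

  # which op queue scheduler the OSD is actually running with
  ceph config show osd.0 osd_op_queue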

On Mon, Dec 12, 2022 at 3:47 PM Eugen Block <eblock@xxxxxx> wrote:

> Hi,
>
> there are many threads discussing recovery throughput, have you tried
> any of the solutions? First thing to try is to increase
> osd_recovery_max_active and osd_max_backfills. What are the current
> values in your cluster?
>
>
> Quoting Monish Selvaraj <monish@xxxxxxxxxxxxxxx>:
>
> > Hi,
> >
> > Our ceph cluster consists of 20 hosts and 240 osds.
> >
> > We used the erasure-coded pool with cache-pool concept.
> >
> > Some time back, 2 hosts went down and some PGs went into a degraded
> > state. We got the 2 hosts back up after some time, and the PGs then
> > started recovering, but it is taking a long time (months). While this
> > was happening the cluster held 664.4 M objects and 987 TB of data. The
> > recovery status has not changed; it remains at 88 pgs degraded.
> >
> > During this period, we increased the pg count from 256 to 512 for the
> > data pool (the erasure-coded pool).
> >
> > We have also observed (over one week) that recovery is very slow; the
> > current recovery rate is around 750 MiB/s.
> >
> > Is there any way to increase this recovery throughput?
> >
> > Ceph version: Quincy
> >
> > [image: image.png]
>
>
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



