Re: Slow recovery on Quincy

Hello,

This is a known issue in Quincy, which uses the mClock scheduler by default.
A fix for this should be available in 17.2.6 and later releases.

You can confirm the active scheduler type on any OSD using:

ceph config show osd.0 osd_op_queue
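
For example, on a Quincy cluster running with the defaults this should print
(osd.0 here is just an example; any OSD id works):

mclock_scheduler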

If the active scheduler is 'mclock_scheduler', you can try switching the
mClock profile to
'high_recovery_ops' on all OSDs to speed up the backfilling using:

ceph config set osd osd_mclock_profile high_recovery_ops
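
To verify the change took effect, you can query any OSD, for example:

ceph config show osd.0 osd_mclock_profile

This should now report 'high_recovery_ops'.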

After the backfilling is complete, you can switch the mClock profile back
to the default value using:

ceph config rm osd osd_mclock_profile
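
You can then confirm that the profile is back to the built-in default
(which should be 'high_client_ops' on Quincy) with:

ceph config get osd osd_mclock_profile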


On Tue, May 16, 2023 at 4:46 PM Sake Paulusma <sake1989@xxxxxxxxxxx> wrote:

>
> We noticed extremely slow performance when remapping is necessary. We
> didn't do anything special other than assigning the correct device_class
> (to ssd). When checking ceph status, we noticed the number of objects
> recovering is around 17-25 (with watch -n 1 -c ceph status).
>
> How can we speed up the recovery process?
>
> There isn't any client load, because we're going to migrate to this
> cluster in the future, so only an rsync is executed once in a while.
>
> [ceph: root@pwsoel12998 /]# ceph status
>   cluster:
>     id:     da3ca2e4-ee5b-11ed-8096-0050569e8c3b
>     health: HEALTH_WARN
>             noscrub,nodeep-scrub flag(s) set
>
>   services:
>     mon: 5 daemons, quorum
> pqsoel12997,pqsoel12996,pwsoel12994,pwsoel12998,prghygpl03 (age 3h)
>     mgr: pwsoel12998.ylvjcb(active, since 3h), standbys: pqsoel12997.gagpbt
>     mds: 4/4 daemons up, 2 standby
>     osd: 32 osds: 32 up (since 73m), 32 in (since 6d); 10 remapped pgs
>          flags noscrub,nodeep-scrub
>
>   data:
>     volumes: 2/2 healthy
>     pools:   5 pools, 193 pgs
>     objects: 13.97M objects, 853 GiB
>     usage:   3.5 TiB used, 12 TiB / 16 TiB avail
>     pgs:     755092/55882956 objects misplaced (1.351%)
>              183 active+clean
>              10  active+remapped+backfilling
>
>   io:
>     recovery: 2.3 MiB/s, 20 objects/s
>

-- 

Sridhar Seshasayee

Partner Engineer

Red Hat <https://www.redhat.com>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



