Re: Lousy recovery for mclock and reef

I suspect my initial spike in performance was PGs rebalancing between the
three OSDs on the one host.

Host load is very low, under 1.  HDD IOPS on the three disks hover
around 80 +/- 5, and atop shows them about 20% busy.  Gigabit Ethernet is
about 20% utilized, also according to atop.  I find it extremely hard to
believe 0.2 of a gig-e link can swamp three HDDs.
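
Rough arithmetic behind that last point (assuming roughly 125 MB/s for a
line-rate gigabit link; these are ballpark figures, not measurements):

    # 20% of a 1 Gb/s link, split across the three OSDs on the host
    echo "scale=1; 0.20 * 125"     | bc   # ~25 MB/s total recovery traffic
    echo "scale=1; 0.20 * 125 / 3" | bc   # ~8.3 MB/s per disk, far below HDD throughput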

I give up.  Recovery performance is what it is.


Next week, when this operation completes, I have two more OSD recovery
operations where the OSDs are located on two different hosts.  It'll be
interesting to see how those compare.

/C

On Mon, May 27, 2024 at 1:17 AM Sridhar Seshasayee <sseshasa@xxxxxxxxxx>
wrote:

> With mClock, osd_max_backfills and osd_recovery_max_active can be modified
> at runtime after setting osd_mclock_override_recovery_settings to true. See
> the docs
> <https://docs.ceph.com/en/latest/rados/configuration/mclock-config-ref/#steps-to-modify-mclock-max-backfills-recovery-limits>
> for more information.
>
> There's no change in the behavior when recovery/backfill limits are modified
> with mClock enabled.
>
> I suspect when you added a new OSD, the recovery traffic you observed could
> just be related to the backfill operation trying to move data due to PGs
> mapped to the new OSD.
>
> -Sridhar
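
For reference, the runtime override described above boils down to roughly the
following (a sketch based on the linked docs; osd.0 and the limit values are
just examples, not recommendations):

    # allow the recovery/backfill limits to be changed while mClock is active
    ceph config set osd osd_mclock_override_recovery_settings true

    # raise the limits (example values only)
    ceph config set osd osd_max_backfills 3
    ceph config set osd osd_recovery_max_active 10

    # check what a given OSD actually picked up
    ceph config show osd.0 osd_max_backfills

    # hand control back to mClock once recovery settles down
    ceph config set osd osd_mclock_override_recovery_settings false
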
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



