Re: Lousy recovery for mclock and reef

Is that a setting that can be applied at runtime, or does it require an OSD restart?
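
For what it's worth, a rough sketch of the relevant commands, assuming the
standard "ceph config" CLI (please verify against your own cluster):

    # reports, among other things, whether the option can be changed at runtime
    ceph config help osd_op_queue

    # revert the scheduler to wpq (generally documented as requiring an OSD restart)
    ceph config set osd osd_op_queue wpq

    # or keep mClock but use the recovery-oriented profile (should apply at runtime)
    ceph config set osd osd_mclock_profile high_recovery_ops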

On Fri, May 24, 2024 at 9:59 AM Joshua Baergen <jbaergen@xxxxxxxxxxxxxxxx>
wrote:

> Hey Chris,
>
> A number of users have been reporting issues with recovery on Reef
> with mClock. Most folks have had success reverting to
> osd_op_queue=wpq. AIUI 18.2.3 should have some mClock improvements but
> I haven't looked at the list myself yet.
>
> Josh
>
> On Fri, May 24, 2024 at 10:55 AM Mazzystr <mazzystr@xxxxxxxxx> wrote:
> >
> > Hi all,
> > Goodness, I'd say it's been at least 3 major releases since I had to do a
> > recovery.  I have disks with 60-75,000 power_on_hours.  I just updated
> > from Octopus to Reef last month and I've been hit with 3 disk failures
> > and the mclock ugliness.  My recovery is moving at a wondrous 21 MB/sec
> > after some serious hacking.  It started out at 9 MB/sec.
> >
> > My hosts are showing minimal CPU use, normal memory use, and 0-6% disk
> > busyness.  Load is minimal, so processes aren't blocked by disk I/O.
> >
> > I tried changing all the sleeps and recovery_max settings, and setting
> > osd_mclock_profile to high_recovery_ops, with no change in performance.
> >
> > Does anyone have any suggestions to improve performance?
> >
> > Thanks,
> > /Chris C
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



