Re: Quincy recovery load

> Do you mean load average as reported by `top` or `uptime`?
Yes, the load average as reported by `top`/`uptime`.

> That figure can be misleading on multi-core systems. What CPU are you
> using?
It's a low-end 4C/4T CPU.
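
For the record, a minimal way to capture the user/system/iowait percentages
asked about below, assuming the sysstat package is installed:

    nproc               # logical CPU count, for putting the load average in context
    uptime              # 1/5/15-minute load averages
    mpstat -P ALL 5 1   # per-CPU %usr/%sys/%iowait/%idle averaged over 5 seconds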

/Jimmy

On Wed, Jul 6, 2022 at 4:52 PM Anthony D'Atri <anthony.datri@xxxxxxxxx>
wrote:

> Do you mean load average as reported by `top` or `uptime`?
>
> That figure can be misleading on multi-core systems. What CPU are you
> using?
>
> For context, when I ran systems with 32C/64T and 24x SATA SSD, the load
> average could easily hit 40-60 without anything being wrong.
>
> What CPU percentages do you see in user, system, idle, and iowait?
>
>
> > On Jul 6, 2022, at 5:32 AM, Jimmy Spets <jimmy@xxxxxxxxx> wrote:
> >
> > Hi all
> >
> >
> > I have a 10-node cluster that I use for archival, with fairly modest
> hardware on each node (6 HDDs and one shared NVMe for the DB).
> >
> > After upgrading to Quincy, I noticed that the load average on my servers
> is very high during recovery or rebalance.
> >
> > Changing the OSD recovery priority has no effect, I assume because of
> the switch to mClock.
> >
> > Is the high load average the expected behaviour?
> >
> > Should I adjust some limits so that the scheduler does not overwhelm the
> server?
> >
> > /Jimmy
> >
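Re the mClock question in the quoted thread: Quincy's default osd_op_queue is
mclock_scheduler, and under it the old recovery tuning knobs no longer take
effect. A minimal sketch of what to try instead, based on the Quincy docs
(verify the option names against your exact release):

    # Confirm which scheduler the OSDs are running
    ceph config show osd.0 osd_op_queue

    # Switch the mClock profile; valid values are balanced (the default),
    # high_client_ops (favours client I/O, throttles recovery/backfill)
    # and high_recovery_ops
    ceph config set osd osd_mclock_profile high_client_ops
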
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


