Ray,
You can use node-exporter + Prometheus + Grafana to collect CPU load
statistics over time, or run the uptime command to see the current load averages.
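If you just want the exact number Ceph compares against the threshold, the same ratio can be computed by hand from /proc/loadavg and nproc (a Linux-only sketch; Ceph itself calls getloadavg(3) and divides by the online CPU count, which nproc approximates):

```shell
# 1-minute load average divided by the number of online CPUs --
# the value Ceph compares against osd_scrub_load_threshold.
awk -v cpus="$(nproc)" '{printf "normalized load: %.2f\n", $1 / cpus}' /proc/loadavg
```

Scrubs are skipped while this printed value exceeds osd_scrub_load_threshold.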
On 3/10/22 10:51 PM, Ray Cunningham wrote:
From:
osd_scrub_load_threshold
The normalized maximum load. Ceph will not scrub when the system load (as defined by getloadavg() / number of online CPUs) is higher than this number. Default is 0.5.
Does anyone know how I can run getloadavg() / number of online CPUs so I can see what our load is? Is that a ceph command, or an OS command?
Thank you,
Ray
-----Original Message-----
From: Ray Cunningham
Sent: Thursday, March 10, 2022 7:59 AM
To: norman.kern <norman.kern@xxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: RE: Scrubbing
We have 16 Storage Servers each with 16TB HDDs and 2TB SSDs for DB/WAL, so we are using bluestore. The system is running Nautilus 14.2.19 at the moment, with an upgrade scheduled this month. I can't give you a complete ceph config dump as this is an offline customer system, but I can get answers for specific questions.
Off the top of my head, we have set:
osd_max_scrubs 20
osd_scrub_auto_repair true
osd_scrub_load_threshold 0.6
We do not limit scrub hours.
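For reference, those settings correspond to the following ceph.conf fragment (a sketch with the option names spelled as Ceph expects them; on Nautilus they can also be set at runtime through the central config with `ceph config set osd <option> <value>`):

```
[osd]
osd_max_scrubs = 20
osd_scrub_auto_repair = true
osd_scrub_load_threshold = 0.6
```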
Thank you,
Ray
-----Original Message-----
From: norman.kern <norman.kern@xxxxxxx>
Sent: Wednesday, March 9, 2022 7:28 PM
To: Ray Cunningham <ray.cunningham@xxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re: Scrubbing
Ray,
Can you provide more information about your cluster(hardware and software configs)?
On 3/10/22 7:40 AM, Ray Cunningham wrote:
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx