Re: Scrubbing

Ray,

Do you know the IOPS/BW of the cluster?  The 16TB HDDs are more suitable
for cold data. If the clients' bandwidth/IOPS demand is too high, the
scrubs will never finish.

And if you raise the scrub priority, it will have a significant impact on the clients.
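
For reference, the knobs that usually control how hard scrubbing competes
with client I/O are osd_scrub_priority and osd_scrub_sleep (a rough sketch
only; check the defaults for your release before changing anything):

    # lower priority / longer sleep = gentler on clients, slower scrubs
    ceph config set osd osd_scrub_priority 5
    ceph config set osd osd_scrub_sleep 0.1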

On 3/10/22 9:59 PM, Ray Cunningham wrote:
We have 16 storage servers, each with 16TB HDDs and 2TB SSDs for DB/WAL, so we are using BlueStore. The system is running Nautilus 14.2.19 at the moment, with an upgrade scheduled this month. I can't give you a complete ceph config dump because this is an offline customer system, but I can get answers to specific questions.

Off the top of my head, we have set:

osd_max_scrubs 20
osd_scrub_auto_repair true
osd_scrub_load_threshold 0.6
We do not limit scrub hours.
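
In case it helps, a rough sketch of how those values can be checked or
changed at runtime on Nautilus via the config database (option names
assumed as spelled above), plus the options that would restrict scrub
hours if we ever decide to:

    # confirm what the OSDs are actually running with
    ceph config get osd osd_max_scrubs
    ceph config get osd osd_scrub_load_threshold

    # change a value cluster-wide without restarting OSDs
    ceph config set osd osd_max_scrubs 1

    # optional scrub window (hours 0-23); unset means scrubbing is allowed all day
    ceph config set osd osd_scrub_begin_hour 22
    ceph config set osd osd_scrub_end_hour 6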

Thank you,
Ray




-----Original Message-----
From: norman.kern <norman.kern@xxxxxxx>
Sent: Wednesday, March 9, 2022 7:28 PM
To: Ray Cunningham <ray.cunningham@xxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxx
Subject: Re:  Scrubbing

Ray,

Can you provide more information about your cluster (hardware and software configs)?

On 3/10/22 7:40 AM, Ray Cunningham wrote:
   make any difference. Do
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



