Scrubbing optimization

Hi Cephers.

I’m looking for a way to optimize the scrubbing process. In our environment it has a noticeable impact on performance. We monitor the disks with Monitorix: with scrubbing running, ‘Disk I/O activity (R+W)’ shows 20-60 reads+writes per second; after disabling scrub and deep-scrub it drops to 0-40 reads+writes. That makes a real difference in performance.
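For reference, scrubbing was disabled cluster-wide with the standard flags (and re-enabled the same way with unset):

ceph osd set noscrub
ceph osd set nodeep-scrub
# re-enable later:
ceph osd unset noscrub
ceph osd unset nodeep-scrub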

Ceph config settings:

ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep ioprio
  "osd_disk_thread_ioprio_class": "idle",
  "osd_disk_thread_ioprio_priority": "7",

All disks have the cfq scheduler enabled.
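The scheduler on each OSD data disk was checked/set roughly like this (sdX stands for the actual device name):

cat /sys/block/sdX/queue/scheduler       # active scheduler shown in brackets, e.g. noop deadline [cfq]
echo cfq > /sys/block/sdX/queue/scheduler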

The cluster has 6 servers and 5 monitors, with 4-6 OSDs per server plus 1 SSD for the journal in each server.

 

Are there other config options I could set to reduce the impact of the scrubbing process? Attached is a screenshot from Monitorix (scrubbing was disabled in week 27).
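One thing I was considering is throttling scrubs directly, for example (a rough sketch; the exact values would need tuning for our load and Ceph version):

ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'
ceph tell osd.* injectargs '--osd_max_scrubs 1'

and/or restricting scrubs to off-peak hours in ceph.conf:

[osd]
osd_scrub_begin_hour = 22
osd_scrub_end_hour = 6
osd_scrub_load_threshold = 0.5
osd_scrub_sleep = 0.1

but I’m not sure which of these would help most here.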

 

Best Regards,

Mateusz

Attachment: ceph-scrubbing.png
Description: PNG image

