I have only 2 scrubs running on HDDs, but they are keeping the drives in a high busy state. I did not notice this before; did some setting change? I can remember dstat listing 14 MB/s-20 MB/s, not 60 MB/s.

atop:

DSK | sdd | busy 95% | read 1384 | write 92 | KiB/r 292 | KiB/w 7 | MBr/s 39.5 | MBw/s 0.1 | avq 1.07 | avio 6.40 ms |
DSK | sde | busy 80% | read 1449 | write 20 | KiB/r 379 | KiB/w 6 | MBr/s 53.7 | MBw/s 0.0 | avq 1.20 | avio 5.45 ms |

dstat:

net/eth4.60-net/eth4.52 --dsk/sda-----dsk/sdb-----dsk/sdc-----dsk/sdd-----dsk/sde-----dsk/sdf-----dsk/sdg-----dsk/sdh-----dsk/sdi--
 recv  send: recv  send| read  writ: read  writ: read  writ: read  writ: read  writ: read  writ: read  writ: read  writ: read  writ
 146k  666k: 382k  126k|   0  4096B:   0     0 :   0     0 :  49M   16k:  75M    0 :   0     0 :   0     0 :   0   296k:   0     0
 223k  219k: 206k  269k|   0    51k:   0     0 :   0     0 :  41M   32k:  38M    0 :   0     0 :   0     0 :4096B   40k:   0     0
 342k  736k: 419k  183k|   0     0 :   0    40k:   0     0 :  38M  120k:  68M   44k:   0    20k:   0     0 :   0   268k:   0     0
 109k  123k: 122k  192k|   0    51k:   0     0 :   0     0 :  38M   20k:  66M    0 :   0     0 :   0     0 :   0     0 :   0     0
 213k  776k: 458k  244k|   0     0 :   0     0 :   0     0 :  41M   12k:  66M    0 :   0     0 :   0     0 :   0   268k:   0     0
 146k  135k: 136k  197k|   0    47k:   0     0 :   0     0 :  49M   32k:  61M    0 :   0     0 :   0     0 :   0    16k:   0     0
 113k  642k: 344k   89k|   0     0 :   0     0 :   0     0 :  41M   24k:  42M 4096B:   0     0 :   0     0 :4096B  308k:   0     0

[@c01 ~]# ceph daemon osd.0 config show | egrep 'max_backfills|osd_max_scrubs'
    "osd_max_backfills": "1",
    "osd_max_scrubs": "1",

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
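For context: besides osd_max_scrubs and osd_max_backfills, Ceph exposes a few other knobs that directly influence scrub disk load, notably osd_scrub_sleep (pause between scrub chunks) and osd_scrub_chunk_max. A sketch of how one might inspect and temporarily throttle scrubbing, assuming an admin node with the ceph CLI and admin sockets available; the 0.1 value is only an illustrative starting point, not a recommendation:

```shell
# Show the scrub throttles currently in effect on one OSD
ceph daemon osd.0 config show | egrep 'osd_max_scrubs|osd_scrub_sleep|osd_scrub_chunk'

# Temporarily insert a sleep between scrub chunks on all OSDs
# to lower per-disk busy%% (runtime change, not persisted)
ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'

# Or restrict scrubbing to off-peak hours (persisted in ceph.conf
# or, on Mimic and later, via "ceph config set osd ...")
ceph tell osd.* injectargs '--osd_scrub_begin_hour 22 --osd_scrub_end_hour 6'
```

Note that injectargs changes are lost on OSD restart, so any value that helps should also be written to the persistent configuration.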