Too many PGs backfilling

Hello,

This is my configuration:

-> "osd_max_backfills": "1"
-> "osd_recovery_threads": "1"
-> "osd_recovery_max_active": "1"
-> "osd_recovery_op_priority": "3"
-> "osd_client_op_priority": "63"

 

I ran the command "ceph osd crush tunables optimal" after upgrading from Hammer to Jewel.
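
As far as I understand, setting the tunables to optimal remaps a large part of the data, which is what triggered the backfill. The progress can be watched with the standard commands:

  # overall state, including the count of backfilling/misplaced PGs
  ceph -s

  # per-PG detail for everything that is not active+clean
  ceph health detail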

 

Now my cluster is overloaded: ceph status shows 15 PGs in active+remapped+backfilling.

Why 15? Is my configuration bad? With osd_max_backfills set to 1, I expected a maximum of 1 backfill at a time.
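
To see which OSDs the backfilling PGs map to (I wonder if osd_max_backfills is a per-OSD limit rather than a cluster-wide one, which could explain more than one PG backfilling at a time, but please correct me if I am wrong):

  # list only the backfilling PGs with their up/acting OSD sets
  ceph pg dump pgs_brief | grep backfilling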

 

Thanks

 
