optimal values for osd threads

Hi,
My config:
        osd op threads = 8
        osd disk threads = 4
        osd recovery threads = 1
        osd recovery max active = 1
        osd recovery op priority = 10
        osd client op priority = 100
        osd max backfills = 1
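
For what it's worth, these throttles can also be adjusted at runtime, without restarting OSDs, via injectargs. This is a sketch only — injectargs changes are not persistent, and option names should be checked against your Ceph version:

```shell
# Tighten recovery throttling on all OSDs at runtime (no restart).
# Not persistent: also update ceph.conf so the values survive a restart.
ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
ceph tell osd.* injectargs '--osd-recovery-op-priority 10'
```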

I chose these values to prioritize client operations and throttle backfill
( client first !! :-) )
When the OSD holding the RGW index died and was restarted, the cluster
got stuck at "25 active+recovery_wait, 1 active+recovering".
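
To see which PGs are stuck and what they are waiting on, the standard diagnostics would be something like the following (a sketch; output format varies by Ceph release):

```shell
# Which PGs are unhealthy, and why
ceph health detail
# List PGs stuck in an unclean state
ceph pg dump_stuck unclean
# Detailed state of a single PG (replace <pgid> with one from the list above)
ceph pg <pgid> query
# Watch recovery progress live
ceph -w
```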

Please help me choose optimal values for osd recovery threads and
priority for an S3-optimized Ceph cluster.

Cluster:
   12 servers x 12 OSDs each
   3 mons, 144 osds, 32424 pgs
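
As a side note, a rough PG-copies-per-OSD figure for this cluster can be worked out as follows, assuming a common pool replication size of 3 (the mail does not state it). The result is well above the ~100 PGs per OSD that Ceph documentation of this era commonly suggests, which by itself makes recovery heavier:

```shell
# Rough estimate of PG copies per OSD, assuming pool size (replication) = 3,
# which is an assumption -- the mail does not state the pool size.
echo $(( 32424 * 3 / 144 ))    # ~675 PG copies per OSD
```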

--
Regards
Dominik
--