Re: Recovery/Backfill Speedup

On 04.10.2016 16:31, Reed Dier wrote:
Currently attempting to expand our small Ceph cluster.

We have 8 nodes and 3 mons, and went from a single 8TB disk per node to 2x 8TB disks per node; the rebalancing process is excruciatingly slow.

We were at 576 PGs before the expansion, and wanted to let the rebalance finish before increasing the PG count of the single pool and the replication size.

I have stopped scrubs for the time being, and set the client and recovery I/O priorities to equal values so that client I/O is not burying the recovery I/O. I have also increased the number of recovery threads per OSD.

[osd]
osd_recovery_threads = 5
filestore_max_sync_interval = 30
osd_client_op_priority = 32
osd_recovery_op_priority = 32
Also, this is 10G networking we are working with, and recovery I/O typically hovers between 0 and 35 MB/s, but is very bursty.
The disks are 8TB 7.2k SAS disks behind an LSI 3108 controller, configured as individual RAID0 VDs, with pdcache disabled but BBU-backed write-back caching enabled at the controller level.

I have thought about increasing 'osd_max_backfills' and 'osd_recovery_max_active', and possibly 'osd_recovery_max_chunk', to try to speed it up, but will hopefully get some insight from the community here.
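(For reference, the values the OSDs are currently running with can be checked per OSD via the admin socket; osd.0 below is just a placeholder for any OSD on the node:)

      ceph daemon osd.0 config get osd_max_backfills
      ceph daemon osd.0 config get osd_recovery_max_active
      ceph daemon osd.0 config get osd_recovery_max_chunk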

ceph -s about 4 days in:

      health HEALTH_WARN
             255 pgs backfill_wait
             4 pgs backfilling
             385 pgs degraded
             129 pgs recovery_wait
             388 pgs stuck unclean
             274 pgs undersized
             recovery 165319973/681597074 objects degraded (24.255%)
             recovery 298607229/681597074 objects misplaced (43.810%)
             noscrub,nodeep-scrub,sortbitwise flag(s) set
      monmap e1: 3 mons at {core=10.0.1.249:6789/0,db=10.0.1.251:6789/0,dev=10.0.1.250:6789/0}
             election epoch 190, quorum 0,1,2 core,dev,db
      osdmap e4226: 16 osds: 16 up, 16 in; 303 remapped pgs
             flags noscrub,nodeep-scrub,sortbitwise
       pgmap v1583742: 576 pgs, 2 pools, 6426 GB data, 292 Mobjects
             15301 GB used, 101 TB / 116 TB avail
             165319973/681597074 objects degraded (24.255%)
             298607229/681597074 objects misplaced (43.810%)
                  249 active+undersized+degraded+remapped+wait_backfill
                  188 active+clean
                   85 active+recovery_wait+degraded
                   22 active+recovery_wait+degraded+remapped
                   22 active+recovery_wait+undersized+degraded+remapped
                    3 active+remapped+wait_backfill
                    3 active+undersized+degraded+remapped+backfilling
                    3 active+degraded+remapped+wait_backfill
                    1 active+degraded+remapped+backfilling
recovery io 9361 kB/s, 415 objects/s
   client io 597 kB/s rd, 62 op/s rd, 0 op/s wr


4 pgs backfilling

This sounds incredibly low for your configuration. You do not say anything about osd_max_backfills; the default is 10. So with 8 nodes, each having 1 OSD writing and 1 OSD reading, you should see many more than 4 PGs backfilling at any given time, the theoretical maximum being 8 * 10 = 80.

Check what your current max backfill value is, and try setting osd_max_backfills higher, preferably in small increments, while monitoring how many PGs are backfilling and the load on the machines and the network.
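For example, something along these lines; the value 2 is only an illustration of one small step, and osd.0 stands in for any one of your OSDs:

      ceph daemon osd.0 config get osd_max_backfills      # what one OSD is currently running with
      ceph tell osd.* injectargs '--osd-max-backfills 2'  # raise it one step across all OSDs
      ceph -s                                             # watch how many pgs are backfilling

Note that injectargs only changes the running daemons, so whatever value ends up working should also go into the [osd] section of ceph.conf so it survives restarts.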

kind regards
Ronny Aasen




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



