Cannot get backfill speed up

Hi.

This is a fresh cluster, but despite setting:
jskr@dkcphhpcmgt028:/$ sudo ceph config show osd.0 | grep recovery_max_active_ssd
osd_recovery_max_active_ssd   50    mon   default[20]
jskr@dkcphhpcmgt028:/$ sudo ceph config show osd.0 | grep osd_max_backfills
osd_max_backfills             100   mon   default[10]
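
For reference, a quick way to cross-check where these values come from and which op scheduler is active (a sketch; on releases where mClock is the default scheduler, I understand it may limit recovery regardless of these options):

jskr@dkcphhpcmgt028:/$ sudo ceph config get osd osd_max_backfills
jskr@dkcphhpcmgt028:/$ sudo ceph config show osd.0 osd_op_queue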

I still get:
jskr@dkcphhpcmgt028:/$ sudo ceph status
  cluster:
    id:     5c384430-da91-11ed-af9c-c780a5227aff
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum dkcphhpcmgt031,dkcphhpcmgt029,dkcphhpcmgt028 (age 16h)
    mgr: dkcphhpcmgt031.afbgjx(active, since 33h), standbys: dkcphhpcmgt029.bnsegi, dkcphhpcmgt028.bxxkqd
    mds: 2/2 daemons up, 1 standby
    osd: 40 osds: 40 up (since 45h), 40 in (since 39h); 21 remapped pgs

  data:
    volumes: 2/2 healthy
    pools:   9 pools, 495 pgs
    objects: 24.85M objects, 60 TiB
    usage:   117 TiB used, 159 TiB / 276 TiB avail
    pgs:     10655690/145764002 objects misplaced (7.310%)
             474 active+clean
             15  active+remapped+backfilling
             6   active+remapped+backfill_wait

  io:
    client:   0 B/s rd, 1.4 MiB/s wr, 0 op/s rd, 116 op/s wr
    recovery: 328 MiB/s, 108 objects/s

  progress:
    Global Recovery Event (9h)
      [==========================..] (remaining: 25m)

With these values I would expect more than 15 PGs actively backfilling, and given the SSDs and the 2x25 Gbit network, I should also be able to spend far more resources on recovery than 328 MiB/s.
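
To narrow down where the backfills queue up, the PGs in each state can be listed to see which OSDs the backfill reservations land on (as far as I understand, each backfill reserves a slot on both the primary and the target OSD, so a few busy OSDs can hold up the rest):

jskr@dkcphhpcmgt028:/$ sudo ceph pg ls backfilling
jskr@dkcphhpcmgt028:/$ sudo ceph pg ls backfill_wait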

Thanks.

--
Jesper Krogh
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


