Large number of misplaced PGs but little backfill going on

Hi

We are seeing this after adding some hosts and changing the CRUSH failure domain to datacenter:

pgs:     1338512379/3162732055 objects misplaced (42.321%)
         5970 active+remapped+backfill_wait
         4853 active+clean
         11   active+remapped+backfilling

We have 3 datacenters, each with 6 hosts, and ~400 HDD OSDs with DB/WAL on NVMe. We are using mclock with the high_recovery_ops profile.

What is the bottleneck here? I would have expected a huge number of simultaneous backfills. Could it be a backfill reservation logjam?

Best regards,

Torkil

--
Torkil Svensgaard
Systems Administrator
Danish Research Centre for Magnetic Resonance DRCMR, Section 714
Copenhagen University Hospital Amager and Hvidovre
Kettegaard Allé 30, 2650 Hvidovre, Denmark
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



