On 23-03-2024 10:44, Alexander E. Patrakov wrote:
Hello Torkil,
Hi Alexander
It would help if you provided the whole "ceph osd df tree" and "ceph
pg ls" outputs.
Of course, here's ceph osd df tree to start with:
https://pastebin.com/X50b2W0J
The other output is too big for Pastebin, and I'm not familiar with other
paste services. Any suggestions for a preferred way to share output of that size?
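I could of course just dump it to a file and compress it before uploading it
somewhere, roughly like this (paths are just placeholders):

  # Dump the full pg listing and compress it before sharing it.
  ceph pg ls > /tmp/ceph-pg-ls.txt
  gzip -9 /tmp/ceph-pg-ls.txt    # yields /tmp/ceph-pg-ls.txt.gz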
Mvh.
Torkil
On Sat, Mar 23, 2024 at 4:26 PM Torkil Svensgaard <torkil@xxxxxxxx> wrote:
Hi
We have this after adding some hosts and changing the CRUSH failure domain
to datacenter:
pgs: 1338512379/3162732055 objects misplaced (42.321%)
5970 active+remapped+backfill_wait
4853 active+clean
11 active+remapped+backfilling
We have 3 datacenters, each with 6 hosts, and ~400 HDD OSDs with DB/WAL on
NVMe. We are using mClock with the high_recovery_ops profile.
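For reference, the profile can be set cluster-wide and verified per OSD
roughly like this (osd.0 is just an example daemon):

  ceph config set osd osd_mclock_profile high_recovery_ops
  ceph config show osd.0 osd_mclock_profile    # confirm what a given OSD actually runs with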
What is the bottleneck here? I would have expected a huge number of
simultaneous backfills. Backfill reservation logjam?
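If it is reservations, I guess something like this would show it (osd.0 and
the pg id are just examples, and I may be misremembering the override option
name):

  ceph config show osd.0 osd_max_backfills    # effective backfill reservation limit per OSD
  ceph pg ls backfill_wait | head             # sample of PGs waiting on a reservation
  ceph pg 1.0 query | grep -A2 backfill       # recovery_state shows what a PG is blocked on
  # With mClock, raising osd_max_backfills is, as far as I know, only honoured after:
  ceph config set osd osd_mclock_override_recovery_settings true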
Mvh.
Torkil
--
Torkil Svensgaard
Systems Administrator
Danish Research Centre for Magnetic Resonance DRCMR, Section 714
Copenhagen University Hospital Amager and Hvidovre
Kettegaard Allé 30, 2650 Hvidovre, Denmark
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx