Hi Ceph users,
I found that some PGs are inactive after I added some OSDs and increased the number of PGs.
ceph pg dump_stuck inactive:
PG_STAT STATE UP UP_PRIMARY ACTING ACTING_PRIMARY
10.9b undersized+degraded+remapped+backfilling+peered [8,9] 8 [3] 3
10.167 undersized+degraded+remapped+backfilling+peered [2,0] 2 [3] 3
10.1c3 undersized+degraded+remapped+backfilling+peered [9,5] 9 [1] 1
10.15c undersized+degraded+remapped+backfill_wait+peered [0,2] 0 [6] 6
10.187 undersized+degraded+remapped+backfill_wait+peered [9,5] 9 [6] 6
10.1bb undersized+degraded+remapped+backfilling+peered [0,3] 0 [3] 3
10.1f7 undersized+degraded+remapped+backfilling+peered [2,1] 2 [0] 0
10.87 undersized+degraded+remapped+backfill_wait+peered [0,3] 0 [6] 6
10.1ae undersized+degraded+remapped+backfilling+peered [8,3] 8 [0] 0
10.e2 undersized+degraded+remapped+backfilling+peered [5,8] 5 [1] 1
10.17e undersized+degraded+remapped+backfill_wait+peered [1,3] 1 [6] 6
10.11c undersized+degraded+remapped+backfilling+peered [5,3] 5 [1] 1
10.1d2 undersized+degraded+remapped+backfill_wait+peered [5,2] 5 [9] 9
10.13d undersized+degraded+remapped+backfilling+peered [3,1] 3 [0] 0
10.1a2 undersized+degraded+remapped+backfilling+peered [5,1] 5 [1] 1
10.153 undersized+degraded+remapped+backfilling+peered [1,8] 1 [0] 0
10.13c undersized+degraded+remapped+backfilling+peered [5,9] 5 [0] 0
10.133 undersized+degraded+remapped+backfilling+peered [6,5] 6 [8] 8
10.dc undersized+degraded+remapped+backfill_wait+peered [8,9] 8 [6] 6
10.1ef undersized+degraded+remapped+backfilling+peered [0,1] 0 [3] 3
10.123 undersized+degraded+remapped+backfill_wait+peered [5,8] 5 [8] 8
8.36 remapped+peering [8,2,3] 8 [8,15] 8
10.47 undersized+degraded+remapped+backfilling+peered [1,16] 1 [1] 1
According to my understanding, undersized+degraded+remapped+backfilling means the PG lacks enough replicas, but at least one copy still exists on some OSD. Shouldn't Ceph be able to serve this PG from that OSD while it is being backfilled? Or is there something I need to do to activate these PGs?
This particular pool is replicated with size=2.
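For reference, here is how I have been checking the pool's min_size and inspecting one of the stuck PGs (the pool name `mypool` below is a placeholder for the actual pool; my understanding is that a PG stays "peered" and inactive if fewer than min_size replicas are available):

```shell
# Show the pool's min_size. With size=2, if min_size is also 2, a PG
# with only one surviving copy cannot go active until backfill
# restores the second copy.
ceph osd pool get mypool min_size

# Query one of the stuck PGs to see its peering state and which
# OSDs it is waiting on.
ceph pg 10.9b query
```

I have not changed min_size myself, so it should still be at its default for this pool.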
Thanks!
Zhan
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com