Re: CRUSH rule seems to work fine not for all PGs in erasure coded pools

Hi David, thanks for the quick feedback.

Then why were some PGs remapped and some not?

# IT LOOKS LIKE 338 PGs IN THE ERASURE CODED POOLS HAVE BEEN REMAPPED
# I DON'T GET WHY 540 PGs ARE STILL IN THE active+undersized+degraded STATE
root@host01:~# ceph pg dump pgs_brief | grep 'active+remapped'
dumped pgs_brief in format plain
16.6f active+remapped [43,2147483647,2,31,12] 43 [43,33,2,31,12] 43
16.6e active+remapped [10,5,35,44,2147483647] 10 [10,5,35,44,41] 10
....
root@host01:~# egrep '16.6f|16.6e' PGs_on_HOST_host05
16.6f active+clean [43,33,2,59,12] 43 [43,33,2,59,12] 43
16.6e active+clean [10,5,49,35,41] 10 [10,5,49,35,41] 10
root@host01:~#
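
A quick way to double-check those 338/540 counts per state (just a sketch, assuming pool id 16 is the erasure coded pool in question):

# count the PGs of pool 16 grouped by state
ceph pg dump pgs_brief 2>/dev/null | awk '/^16\./ {c[$2]++} END {for (s in c) print c[s], s}'
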
Take PG 16.6f, for example: prior to the ceph services stop it was on [43,33,2,59,12], then it was remapped to [43,33,2,31,12], so OSD@31 and OSD@33 are on the same HOST.
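
To double-check which host each of those OSDs lives on (a sketch; the ids are the ones from the 16.6f example above):

ceph osd find 31    # prints the OSD's crush_location, including its host
ceph osd find 33
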
But PG 16.ee, for example, ended up in the active+undersized+degraded state. Prior to the services stop it was on:
pg_stat state up up_primary acting acting_primary 
16.ee active+clean [5,22,33,55,45] 5 [5,22,33,55,45] 5
After the services on the host were stopped it was not remapped:
16.ee	active+undersized+degraded	[5,22,33,2147483647,45]	5	[5,22,33,2147483647,45]	5
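
As far as I understand, 2147483647 in the up/acting set is CRUSH's "none" placeholder (2^31 - 1), i.e. CRUSH did not find a suitable OSD for that slot. One way to see what the rule can actually map is to test it offline with crushtool; a rough sketch, where the rule id 1 and the pool size 5 are assumptions that would have to be taken from "ceph osd crush rule dump" and the pool's erasure code profile:

ceph osd getcrushmap -o /tmp/crushmap
ceph osd crush rule dump                  # look up the rule id used by pool 16
crushtool -i /tmp/crushmap --test --rule 1 --num-rep 5 --show-bad-mappings
# if I remember correctly, --weight <osd-id> 0 can be added to simulate the stopped OSDs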
