Re: Large OMAP object in RGW GC pool

Hi Wido,

Interleaving below....

On 6/11/19 3:10 AM, Wido den Hollander wrote:
> 
> I thought it was resolved, but it isn't.
> 
> I counted all the OMAP values for the GC objects and I got back:
> 
> gc.0: 0
> gc.11: 0
> gc.14: 0
> gc.15: 0
> gc.16: 0
> gc.18: 0
> gc.19: 0
> gc.1: 0
> gc.20: 0
> gc.21: 0
> gc.22: 0
> gc.23: 0
> gc.24: 0
> gc.25: 0
> gc.27: 0
> gc.29: 0
> gc.2: 0
> gc.30: 0
> gc.3: 0
> gc.4: 0
> gc.5: 0
> gc.6: 0
> gc.7: 0
> gc.8: 0
> gc.9: 0
> gc.13: 110996
> gc.10: 111104
> gc.26: 111142
> gc.28: 111292
> gc.17: 111314
> gc.12: 111534
> gc.31: 111956

Casey Bodley mentioned to me that he has seen behavior similar to what
you're describing when the RGWs are upgraded but not all of the OSDs are
upgraded as well. Is it possible that the OSDs hosting gc.13, gc.10, and
the other non-empty shards are running a different version of Ceph?
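One quick way to check for a mixed cluster (assuming Luminous or later, where this command is available) is:

```shell
# Print the running release of every daemon type (mon, mgr, osd, rgw, ...).
# More than one version string under "osd" or "rgw" indicates a
# partially upgraded cluster; requires a reachable cluster and admin keyring.
ceph versions
```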

Eric

-- 
J. Eric Ivancich
he/him/his
Red Hat Storage
Ann Arbor, Michigan, USA
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


