Re: [ceph-large] radosgw index pool too big

Hi,

we also faced the same situation, with the index pool getting full, on the Luminous (12.2.3) release. In our case we had enabled dynamic bucket resharding, which ran in a loop, filled up all the index OSDs, and never deleted the old reshard entries.
As a resolution, we disabled bucket resharding to arrest the problem, compiled the latest Luminous code (12.2.11, not yet released at the time), and deleted all the old reshard entries; new radosgw-admin options were added there to list and purge them. After this step we ran a manual compaction on all the OSDs to reduce the read latencies on BlueFS.
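For anyone hitting the same problem, a minimal sketch of the steps above, assuming Luminous 12.2.11 or later (where the radosgw-admin "reshard stale-instances" subcommands were added) and default daemon/pool names:

    # In ceph.conf, disable dynamic resharding, then restart the RGW daemons:
    #   rgw_dynamic_resharding = false

    # List the stale bucket-instance entries left behind by resharding:
    radosgw-admin reshard stale-instances list

    # Purge them (12.2.11+ only; on a multisite setup check the docs first,
    # as stale-instance cleanup is not supported there):
    radosgw-admin reshard stale-instances rm

    # Compact each OSD's RocksDB via its admin socket, run on the host
    # where that OSD lives (newer releases also accept
    # `ceph tell osd.<id> compact` from any client node):
    ceph daemon osd.<id> compact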

Regards,
Varada

On Wed, Jan 9, 2019 at 4:17 PM Thomas Bennett <thomas@xxxxxxxxx> wrote:
Hi Wido,

Thanks for your reply.

> Are you storing a lot of (small) objects in the buckets?

No. All objects are larger than 4 MB, around 10 MB on average.

> How much real data is there in the buckets data pool?

Only 7% used - 0.4 PB.

> With 51 PGs on the NVMe you are on the low side, you will want to have
> this hovering around 150 or even 200 on NVMe drives to get the best
> performance.
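For reference, raising the PG count would be along these lines, as a sketch assuming the index pool keeps its default name default.rgw.buckets.index (on Luminous, pgp_num must be raised to match pg_num, and PG counts cannot be decreased afterwards, so size the step carefully):

    ceph osd pool get default.rgw.buckets.index pg_num
    ceph osd pool set default.rgw.buckets.index pg_num 256
    ceph osd pool set default.rgw.buckets.index pgp_num 256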

Thanks. Do you think this also relates to the large omaps?
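A quick way to check which index objects are behind the large-omap warnings, as a sketch assuming the default index pool name:

    # Any LARGE_OMAP_OBJECTS warning shows up here:
    ceph health detail

    # Deep scrub logs the offending object in the cluster log:
    grep 'Large omap object' /var/log/ceph/ceph.log

    # Count the omap keys on a suspect bucket index object:
    rados -p default.rgw.buckets.index listomapkeys <object-name> | wc -l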

Cheers,
Tom
_______________________________________________
Ceph-large mailing list
Ceph-large@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-large-ceph.com
