Re: [ceph-large] radosgw index pool too big



Awesome, thanks!

Is there an email thread that I can follow somewhere?


On Wed, 09 Jan 2019 at 13:26, Varada Kari (System Engineer) <varadaraja.kari@xxxxxxxxxxxx> wrote:

We also faced the same situation, with the index pool filling up, on the Luminous (12.2.3) release. In our case, dynamic bucket resharding was enabled and kept running in a loop, filling up all the index OSDs because old reshard entries were never deleted.
As a resolution, we disabled bucket resharding to arrest the problem, compiled the latest Luminous code (12.2.11, not yet released at the time), and deleted all the old reshard entries. New command options were added there to delete/purge the old reshard entries. After that step we ran a manual compaction on all the OSDs to reduce read latencies on BlueFS.
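For anyone following the same recovery path, the steps above roughly map onto the commands below. This is a sketch, not the poster's exact procedure: the rgw client section name and OSD id are placeholders, and the `reshard stale-instances` subcommands only exist from 12.2.11 onwards.

```shell
# 1. Arrest the problem: disable dynamic resharding for the RGW
#    instance (ceph.conf, [client.rgw.<name>] section is a placeholder):
#      rgw_dynamic_resharding = false
#    then restart the radosgw daemon.

# 2. After upgrading to >= 12.2.11, list and purge leftover
#    reshard instances:
radosgw-admin reshard stale-instances list
radosgw-admin reshard stale-instances rm

# 3. Manually compact each OSD's RocksDB to shrink omap data and
#    reduce BlueFS read latency (repeat per OSD id):
ceph tell osd.0 compact
```

Compaction can take several minutes per OSD and adds I/O load, so it is usually done one OSD (or one failure domain) at a time.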


On Wed, Jan 9, 2019 at 4:17 PM Thomas Bennett <thomas@xxxxxxxxx> wrote:
Hi Wido,

Thanks for your reply.
Are you storing a lot of (small) objects in the buckets?

No. All objects are larger than 4 MB, around 10 MB each.
How much real data is there in the buckets data pool?

Only 7% used - 0.4 PB.
With 51 PGs on the NVMe you are on the low side; you will want to have
this hovering around 150 or even 200 on NVMe drives to get the best performance.

Thanks. Do you think this also relates to the large omaps?
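For readers hitting the same symptom, the large-omap warnings referenced here can be inspected with standard tooling (the bucket name below is a placeholder):

```shell
# Shows LARGE_OMAP_OBJECTS warnings and which PGs they are in:
ceph health detail

# Reports objects-per-shard fill status and flags buckets that
# are over the resharding threshold:
radosgw-admin bucket limit check

# Per-bucket shard count and object stats:
radosgw-admin bucket stats --bucket=mybucket
```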

Thomas Bennett

Science Data Processing
Ceph-large mailing list
