RGW Index rapidly expanding post tunables update (12.2.5)

Hi all,

We have recently upgraded from Jewel (10.2.10) to Luminous (12.2.5), and after that we updated our tunables profile from firefly to optimal. During the resulting recovery we noticed the (BlueStore) OSDs backing the RGW index and GC pools filling rapidly. We had estimated the index at around 30G and the GC pool as negligible, but they are now filling all four OSDs per host, each backed by a 2TB SSD.
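
For context on how we are measuring this: per-pool and per-OSD usage are visible with the standard stats commands below. Note that bucket index data lives in omap (RocksDB on BlueStore), which as far as I know is not counted in Luminous pool stats, so "ceph df" can under-report the index pool even while the OSDs themselves fill. Pool names are the Luminous defaults and osd.0 is just an example.

    # Per-pool data usage and object counts (omap not included in Luminous)
    $ ceph df detail

    # Per-OSD utilisation, to confirm which OSDs are filling
    $ ceph osd df

    # RocksDB (omap) footprint on one OSD, via the admin socket
    $ ceph daemon osd.0 perf dump | grep db_used_bytes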

Does anyone have any experience with this, or advice on how to determine why this sudden growth has occurred during recovery after the tunables update?
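
To help frame answers, this is the sort of inspection I have in mind (a sketch assuming the default index pool name; the .dir.<bucket_marker>.<shard_id> object name is a placeholder for a real index shard):

    # Count the bucket index shard objects
    $ rados -p default.rgw.buckets.index ls | wc -l

    # Count omap keys on a single index shard object
    $ rados -p default.rgw.buckets.index listomapkeys .dir.<bucket_marker>.<shard_id> | wc -l

    # Flag buckets whose per-shard key counts exceed the limit
    $ radosgw-admin bucket limit check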

We have disabled resharding activity because of this issue (https://tracker.ceph.com/issues/24551), and our GC queue contains only a few items at present.
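
For anyone wanting to check the same state, the standard Luminous knob and the queue listings are below (the <instance> section name is a placeholder):

    # ceph.conf, in the RGW section: turn off dynamic resharding
    [client.rgw.<instance>]
    rgw_dynamic_resharding = false

    # Resharding operations still queued
    $ radosgw-admin reshard list

    # GC queue, including entries not yet due for processing
    $ radosgw-admin gc list --include-all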

Kind Regards,

Tom




