Garbage Collection on Luminous

Hi,
I have a production cluster that has been experiencing a lot of DELETEs for
many months. However, with the default GC configs I did not see the
cluster space utilization going down. Moreover, the GC list has more than 4
million objects. I tried increasing the GC configs on 4 RADOS gateways and
ran a manual GC using the command:

sudo radosgw-admin gc process


The changed GC configs are as follows:

rgw gc max objs = 100
rgw gc obj min wait = 3600
rgw gc processor max time = 900
rgw gc processor period = 3600
rgw gc max concurrent io = 20
rgw gc max trim chunk = 64
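
To rule out the gateways still running with the old values, my understanding
is that the effective settings can be read back from each RGW's admin
socket, along these lines (the socket name is just a placeholder for the
actual instance name on each host):

sudo ceph daemon /var/run/ceph/ceph-client.rgw.<instance>.asok config show | grep rgw_gc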


Questions

1. Since this is an active cluster serving both PUT and DELETE requests, I
am unable to determine whether manual GC is indeed helping. I did see an
increase in free space one day, but for the past few days I have not seen
any further increase. How do I determine whether GC is really running?


sudo radosgw-admin gc list | grep oid | wc -l

The count reported by this command keeps increasing, and I do see very old
objects when I grep for oid.
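
For context, this is roughly how I have been watching it; my understanding
(please correct me if wrong) is that gc list without --include-all only
shows entries whose min wait has already expired, while --include-all also
shows entries still waiting:

# entries already eligible for processing
sudo radosgw-admin gc list | grep oid | wc -l

# everything queued for GC, including entries still within rgw gc obj min wait
sudo radosgw-admin gc list --include-all | grep oid | wc -l

# pool-level usage, to compare against the counts over time
sudo ceph df detail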


2. Does GC have any known problems running automatically on Luminous? It is
enabled.

3. Are the above configs fine or can they be made more aggressive?

4. Is there a faster way to reclaim space that is pending GC without
impacting performance too much?
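
The only thing I could think of is running the manual pass repeatedly from
one node during off-peak hours, along these lines, but I am not sure whether
that is safe or actually faster than letting the background GC threads work
through the queue:

# crude off-peak loop; the interval is chosen arbitrarily
while true; do
    sudo radosgw-admin gc process
    sleep 60
done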


Thanks and Regards,

Priya
