Hi,
We are designing a Ceph+RGW setup for constant, uniformly high load.
We prefer higher throughput over lower latency, so it seems we do not
need asynchronous features, especially garbage collection.
Currently we are observing an issue where, after some amount of time,
RGW's GC becomes very slow (removing, for example, 1 rados object per
second), first on one RGW, then on another, and so forth.
Meanwhile, since S3 delete operations still complete quickly, clients
continue creating and removing objects,
and pool quotas start overflowing.
This issue can be partially mitigated by running on the order of ten
additional 'radosgw-admin gc process' instances in parallel,
but we consider that a workaround rather than a proper approach.
Profiling under our load model shows that RGWGC::process() takes about
30-40% of CPU,
and while that level of CPU consumption seems fine, the wall-clock time
it takes is much larger.
For example, deleting buckets with radosgw-admin using the --bypass-gc
option (and without the --inconsistent-index option)
takes 3 times less wall-clock time than deleting all objects via S3 and
waiting for GC to finish its work
(with rgw_gc_processor_period set much lower than the total deletion time).
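For context, these are the ceph.conf knobs that govern GC aggressiveness
as we understand them; the values below are only an illustration of more
aggressive settings than the defaults, not our production configuration:

    rgw_gc_processor_period = 600    # run GC every 10 min (default 3600 s)
    rgw_gc_processor_max_time = 600  # cap one GC pass (default 3600 s)
    rgw_gc_obj_min_wait = 300        # delay before deleted data becomes
                                     # GC-eligible (default 7200 s)

Even with aggressive values here, a single synchronous GC pass per RGW
seems to remain the bottleneck under our load.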
I used the code from rgw_remove_bucket_bypass_gc() in RGWDeleteObj::execute()
instead of calling RGWRados::Object::Delete::delete_obj(),
and measured the time consumed by RGWDeleteObj::execute() + RGWGC::process()
on the same tasks: without GC, the consumed time is 3 times lower.
At first glance the bucket index seems to stay consistent without GC.
Please point me to where I can get answers to these questions:
1) rgw_remove_bucket_bypass_gc() is called from radosgw-admin.
Is it safe to call it from RGW itself?
2) rgw_remove_bucket_bypass_gc() uses librados::IoCtx::aio_operate(),
while RGWGC::process() uses the synchronous librados::IoCtx::operate().
So perhaps making GC multithreaded (or asynchronous) could speed it up;
are there any constraints that prevent making GC multithreaded?
See the sketch below for the pattern I have in mind.
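
To make question 2 concrete, here is a minimal sketch of the two patterns
against plain librados: one-at-a-time synchronous removal, similar to what
RGWGC::process() does, versus a bounded window of in-flight aio_operate()
calls, roughly what rgw_remove_bucket_bypass_gc() does. The oid list,
function names, and window size are made up for illustration, and error
handling is omitted; this is not RGW code, just the librados pattern:

    #include <rados/librados.hpp>
    #include <deque>
    #include <string>
    #include <vector>

    // Synchronous removal: each delete blocks on a full OSD round trip,
    // similar to how RGWGC::process() drives IoCtx::operate().
    void remove_sync(librados::IoCtx& io,
                     const std::vector<std::string>& oids)
    {
      for (const auto& oid : oids) {
        librados::ObjectWriteOperation op;
        op.remove();
        io.operate(oid, &op);  // returns only after the OSD acks
      }
    }

    // Asynchronous removal with a bounded in-flight window, roughly the
    // pattern rgw_remove_bucket_bypass_gc() uses with aio_operate().
    void remove_async(librados::IoCtx& io,
                      const std::vector<std::string>& oids,
                      size_t max_in_flight = 32)
    {
      std::deque<librados::AioCompletion*> in_flight;
      for (const auto& oid : oids) {
        if (in_flight.size() >= max_in_flight) {
          // Throttle: reap the oldest op before issuing a new one.
          librados::AioCompletion* done = in_flight.front();
          in_flight.pop_front();
          done->wait_for_complete();
          done->release();
        }
        librados::ObjectWriteOperation op;
        op.remove();
        librados::AioCompletion* c =
            librados::Rados::aio_create_completion();
        io.aio_operate(oid, c, &op);  // submits and returns immediately
        in_flight.push_back(c);
      }
      // Drain whatever is still in flight.
      for (auto* c : in_flight) {
        c->wait_for_complete();
        c->release();
      }
    }

If nothing in GC's tag bookkeeping requires the strict ordering of the
synchronous loop, a bounded window like this seems like it could give a
similar speedup inside RGWGC::process() itself.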
Thanks in advance,
Aleksei Gutikov