Has no one else hit this problem? I found a forum/mailing-list post from
2013 describing the same issue, but it had no responses either.
Any pointers appreciated.
Daniel
On 2014-11-14 20:20, Daniel Hoffman wrote:
Hi All.
We are running a Ceph cluster on Firefly (ceph version 0.80.5).
At the moment we use Ceph mainly for backups via radosgw.
We had to delete an account and remove its bucket, which held a very
large number of objects totalling about 60 TB.
We have been monitoring it for days now; the data is purging, but very
slowly. We are actually writing new backups faster than the old data is
being removed.
As we do a lot of work with backups and with aging out old data, we
need a way to improve the cleanup process.
Is there a way to improve the purge/clean performance?
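[For what it's worth: deleted radosgw data is reclaimed asynchronously by the RGW garbage collector, and on Firefly its pace is governed by the rgw_gc_* options. A sketch of the knobs involved is below; the values shown are illustrative assumptions for faster reclamation, not tested recommendations, so check them against your own workload.]

```ini
; ceph.conf, in the radosgw client section (e.g. [client.radosgw.gateway])
; Illustrative values only -- Firefly's defaults are conservative. Raising
; max_objs and shrinking the wait/period intervals lets each GC pass delete
; more objects, sooner, at the cost of extra background I/O.
rgw gc max objs = 97            ; GC list shards processed per pass (default 32)
rgw gc obj min wait = 300       ; seconds a deleted object waits before GC may remove it (default 7200)
rgw gc processor period = 300   ; seconds between GC passes (default 3600)
rgw gc processor max time = 300 ; max seconds a single GC pass may run (default 3600)
```

[GC can also be kicked off manually with `radosgw-admin gc process`, and the backlog inspected with `radosgw-admin gc list`, assuming Firefly's radosgw-admin.]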
Is the clean/purge performance impacted by disk thread ioprio class
setting?
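[On that question: my understanding is that the ioprio settings only take effect when the OSD's block device uses the CFQ I/O scheduler, and they de-prioritize the OSD disk thread (scrub/snap-trim work) rather than the deletes themselves, so any effect on purge speed would be indirect. A sketch, assuming a CFQ-scheduled disk:]

```ini
; ceph.conf [osd] section -- honored only under the CFQ scheduler.
; Pushes the OSD disk thread's background work into the idle I/O class
; so client and GC operations contend less with scrubbing.
osd disk thread ioprio class = idle
osd disk thread ioprio priority = 7
```

[The same settings can be injected at runtime with `ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'`.]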
Any advice or tunables to improve the removal of data is appreciated.
Thanks
Daniel
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com