Re: bucket cleanup speed

Hi All again.

At the rate shown below, it's going to take roughly 20 days to purge all the data in this bucket.

There has to be a quicker way to do this.

Running: radosgw-admin bucket rm --bucket=backup01 --purge-objects --yes-i-really-mean-it

Bucket stats:
   "num_objects": 14872255

obj01:/etc/ceph# while true; do \
    num=$(radosgw-admin bucket stats --bucket=backup01 | jshon | grep num_objects | tail -1 | cut -d" " -f5); \
    sleep 60; \
    num2=$(radosgw-admin bucket stats --bucket=backup01 | jshon | grep num_objects | tail -1 | cut -d" " -f5); \
    diff=$(expr $num - $num2); \
    echo $diff; \
  done
595
607
606
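
As a rough sanity check on that estimate (a sketch only, assuming the rate stays around the ~600 objects per minute measured above, which it may not):

  # ~600 objects/minute against the 14,872,255 objects remaining
  echo $((14872255 / 600 / 60 / 24))
  17

So on the order of 17 days of continuous deletion at the measured rate, which is in the same ballpark as the ~20 days mentioned above once the rate fluctuates.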



On 2014-11-16 11:15, Daniel Hoffman wrote:
We have managed to get it running with the settings below.

  rgw gc max objs = 7877
  rgw gc processor period = 600

We now see higher IOPS on the GC pool in the dashboard. We are not sure
whether it's making a big difference yet, but we will keep monitoring.

When we pushed these values any higher, we saw crashes.
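
For reference, a minimal ceph.conf sketch of those settings (the [client.radosgw.gateway] section name is an assumption; use whatever section your RGW instance actually runs under):

  [client.radosgw.gateway]
    # gc tuning that worked for us without crashes
    rgw gc max objs = 7877
    rgw gc processor period = 600

A gateway restart afterwards (e.g. "service radosgw restart", though the init script name varies by distro and packaging) is needed for the new values to take effect, as noted further down-thread.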


On 2014-11-16 03:21, Yehuda Sadeh wrote:
Also, if that doesn't help, look at the following configurables:

config_opts.h:OPTION(rgw_gc_processor_max_time, OPT_INT, 3600)  // total run time for a single gc processor work
config_opts.h:OPTION(rgw_gc_processor_period, OPT_INT, 3600)  // gc processor cycle time

You may want to reduce the gc cycle time (and match the total run time also).
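
One way to express that suggestion in ceph.conf (values are purely illustrative of keeping the two matched, not a tested recommendation, and the section name is again an assumption):

  [client.radosgw.gateway]
    # shorter gc cycle than the 3600-second default, with max run time matched to it
    rgw gc processor period = 600
    rgw gc processor max time = 600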

Yehuda


On Sat, Nov 15, 2014 at 3:23 AM, Daniel Hoffman <daniel@xxxxxxxxxx> wrote:
Thanks.

We already have that set:

  rgw gc max objs = 997

The problem is that we have Commvault connected to the cluster. Every Sunday, Commvault does a full backup of an entire network (around 40TB) and then ages out the old data, which basically means deleting all the old objects. All week it makes incremental backups, and at the end of the week it does the same full backup followed by the same purge of old data.
Simple and standard backup.

So over the space of a week, about 80TB gets put into the cloud and should also be purged. We are not seeing the deletes complete within the week, so we are slowly running out of space because the data
is never cleaned up in time.

Any thoughts anyone?

I might try a larger prime number, something in the 7919 range...
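
While experimenting, the gc queue can also be inspected and kicked by hand (a stopgap rather than a fix; these radosgw-admin subcommands should be present in firefly, but check radosgw-admin --help on your build):

  # list entries queued for garbage collection, including ones not yet due
  radosgw-admin gc list --include-all | head
  # run a gc pass now instead of waiting for the next processor cycle
  radosgw-admin gc process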



On 2014-11-15 20:42, Jean-Charles LOPEZ wrote:

Hi,

this is an old thing I remember, and it may not be exactly related,
but just in case, let's try it out.

Just verify the value of the rgw_gc_max_objs parameter in the RGW
configuration via the admin socket (ceph daemon {rgw.id} config get
rgw_gc_max_objs).
If the value is 32, update the ceph.conf file for your RGW, set it
to 37 or any higher prime number such as 997, and restart your RGW.
Make sure you do the restart before your next big batch of object
removals. It may take some time to clear the overloaded gc buckets,
but after that it should at least distribute new entries evenly across
all the gc buckets and make the removal of newly deleted objects much
quicker.
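
Concretely, that check might look like this (client.radosgw.gateway is an assumed daemon name; substitute the name of your own RGW instance):

  # query the value the running gateway is actually using
  ceph daemon client.radosgw.gateway config get rgw_gc_max_objs

and then in ceph.conf, under the RGW section, before restarting the gateway:

  rgw gc max objs = 997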

Keep us posted on whether it improves anything.

JC




On Nov 14, 2014, at 01:20, Daniel Hoffman <daniel@xxxxxxxxxx> wrote:

Hi All.

Running a Ceph Cluster (firefly) ceph version 0.80.5

We use ceph mainly for backups via the radosGW at the moment.

An account had to be deleted and its bucket removed; the bucket held a very
large number of objects and was about 60TB in size.

We have been monitoring it for days now, and the data is purging, but very
slowly. We are actually putting new backups in much faster than the old
data is being removed.

As we are doing a lot of work with backups and aging out data we need to
find a way to improve the cleanup process.

Is there a way to improve the purge/clean performance?
Is purge/clean performance affected by the disk thread ioprio class
setting?
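
For the ioprio question, the options presumably being referred to are the OSD disk thread ioprio settings; a sketch of what setting them would look like (verify that your exact 0.80.5 build has them, note they only take effect under the cfq I/O scheduler, and they throttle the OSD disk thread, i.e. scrubbing and similar background work, rather than radosgw garbage collection directly):

  [osd]
    # deprioritise the OSD disk thread; only honoured with the cfq scheduler
    osd disk thread ioprio class = idle
    osd disk thread ioprio priority = 7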

Any advice or tunables to improve the removal of data would be appreciated.

Thanks

Daniel



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




