Re: Deleting millions of objects

Since 1000 is the hard-coded limit in AWS, maybe you need to set something on the client as well? "client.rgw" should work for setting the config in RGW. You can also verify what the running gateway actually sees with "ceph config show <rgw-entity> rgw_delete_multi_obj_max_num".
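
Even with the RGW-side cap raised, the client still has to send larger (or more) batches itself. For reference, a minimal boto3 sketch of what any client has to do anyway, since a single S3 DeleteObjects request accepts at most 1000 keys; the endpoint, credentials, bucket, and prefix below are placeholders:

import boto3

# placeholders -- point these at your RGW endpoint and credentials
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

bucket, prefix = "archive", "veeam/"

# list_objects_v2 pages hold at most 1000 keys, which happens to match
# the DeleteObjects per-request maximum, so one page == one delete call
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
    if keys:
        s3.delete_objects(Bucket=bucket, Delete={"Objects": keys, "Quiet": True})

The aws CLI, s3cmd, and mc all do essentially this under the hood.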

Daniel

On 5/18/23 03:01, Rok Jaklič wrote:
Thx for the input.

I tried several config sets, e.g.:
ceph config set client.radosgw.mon2 rgw_delete_multi_obj_max_num 10000
ceph config set client.radosgw.mon1 rgw_delete_multi_obj_max_num 10000
ceph config set client.rgw rgw_delete_multi_obj_max_num 10000

where client.radosgw.mon2 matches the section name in our ceph.conf, but without success.

It also seems from
https://github.com/ceph/ceph/blob/8c4f52415bddba65e654f3a4f7ba37d98446d202/src/rgw/rgw_op.cc#L7131
that it should check the config setting, but for some reason it is not working.

---

For now I ended up spawning up to 100 background processes (any more
than that fills up our FE queue and we get response timeouts) with:
mc rm --recursive --force ceph/archive/veeam &
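
The same fan-out can also be done inside a single process instead of a hundred shells; a rough boto3 sketch (endpoint, credentials, bucket, prefix, and the worker count are placeholders to tune):

from concurrent.futures import ThreadPoolExecutor
import boto3

# placeholders -- adjust endpoint/credentials/bucket/prefix to your setup
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
bucket, prefix = "archive", "veeam/"

def delete_batch(keys):
    # one DeleteObjects request per batch of <= 1000 keys
    s3.delete_objects(Bucket=bucket, Delete={"Objects": keys, "Quiet": True})

# list serially, delete concurrently; cap the worker count so the
# gateway (and anything in front of it) is not overwhelmed
with ThreadPoolExecutor(max_workers=16) as pool:
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
        if keys:
            pool.submit(delete_batch, keys)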

Regards,
Rok

On Thu, May 18, 2023 at 3:47 AM Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx> wrote:

If it works I'd be amazed. We have this slow and limited delete issue
as well. What we've done is run multiple deletes on the same bucket from
multiple servers via s3cmd.

Istvan Szabo
Staff Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

On 2023. May 17., at 20:14, Joachim Kraftmayer - ceph ambassador <joachim.kraftmayer@xxxxxxxxx> wrote:


Hi Rok,

try this:

rgw_delete_multi_obj_max_num - Max number of objects in a single
multi-object delete request
  (int, advanced)
  Default: 1000
  Can update at runtime: true
  Services: [rgw]

config set <who> <name> <value>

WHO: client.<rgw-name> or client.rgw
KEY: rgw_delete_multi_obj_max_num
VALUE: 10000

Regards, Joachim

___________________________________
ceph ambassador DACH
ceph consultant since 2012

Clyso GmbH - Premier Ceph Foundation Member

https://www.clyso.com/

On 17.05.23 at 14:24, Rok Jaklič wrote:

Thx.

I tried with:

ceph config set mon rgw_delete_multi_obj_max_num 10000
ceph config set client rgw_delete_multi_obj_max_num 10000
ceph config set global rgw_delete_multi_obj_max_num 10000

but still only 1000 objects get deleted.

Is the target something different?

On Wed, May 17, 2023 at 11:58 AM Robert Hish <robert.hish@xxxxxxxxxxxx> wrote:


I think this is capped at 1000 by the config setting. I've used the aws
and s3cmd clients to delete more than 1000 objects at a time, and it
works even with the config setting capped at 1000. But it is a bit slow;
presumably they just split the delete into multiple 1000-key requests.

#> ceph config help rgw_delete_multi_obj_max_num

rgw_delete_multi_obj_max_num - Max number of objects in a single
multi-object delete request
   (int, advanced)
   Default: 1000
   Can update at runtime: true
   Services: [rgw]


On Wed, 2023-05-17 at 10:51 +0200, Rok Jaklič wrote:

Hi,

I would like to delete millions of objects in an RGW instance with:

mc rm --recursive --force ceph/archive/veeam

but it seems it allows only 1000 (or 1002, exactly) removals per command.

How can I delete/remove all objects with some prefix?

Kind regards,
Rok





_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



