Re: Deleting a pool with data

Hi Rich,

I agree with the general advice. From what I recall, deleting a pool as a whole puts less load on the cluster than removing all of the objects in that pool one by one.
Also, make sure you know about osd_delete_sleep [1]. It can help you throttle the PG deletion process.
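
For example, something along these lines should slow the deletes down (the 2-second value is only an illustration, tune it for your cluster and disk type):

ceph config set osd osd_delete_sleep 2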

Regards,
Frédéric.

[1] https://docs.ceph.com/en/reef/rados/configuration/osd-config-ref/#confval-osd_delete_sleep

----- On 7 Mar 25, at 0:26, Richard Bade hitrich@xxxxxxxxx wrote:

> Hi Eugen and Anthony,
> Thanks for your input, it's much appreciated.
> I had not spotted the rados purge command so I'll file that one away
> for the future.
> I agree that in this case the pool delete seems like the best option,
> and I've done a test on our dev cluster with a pool of 2.5TB and a few
> hundred thousand objects. This caused only a tiny spike in our Grafana
> graphs: a single data point at 15-second sampling.
> The mons are on SSD or NVMe and we don't have the autoscaler turned
> on, so I think we should be all good there.
> I expect to be deleting these pools by the end of the month, so I will
> report back afterwards so this thread isn't left hanging.
> 
> Thanks,
> Rich
> 
> On Fri, 7 Mar 2025 at 01:33, Eugen Block <eblock@xxxxxx> wrote:
>>
>> Hi Rich,
>>
>> I waited for other users/operators to chime in because it's been a
>> while since we last deleted a large pool in a customer cluster. I may
>> misremember, so please take this with a grain of salt, but the pool
>> deletion I'm referring to was on Nautilus as well. I just did the same
>> in a small lab cluster to confirm my memory.
>> I would not recommend deleting the objects by looping over 'rados
>> ls'. By the way, there's a 'rados purge' command which loops through
>> the pool for you:
>>
>> rados purge <pool-name> --yes-i-really-really-mean-it
>>
>> If you delete the pool itself (ceph osd pool delete), only the OSDs'
>> DBs would have some more work to clean that up, but I believe it's
>> the best option here. I'd rather have that confirmed by someone else,
>> though.
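>>
>> For reference, the full sequence would be something like this (the
>> pool name has to be given twice, and mon_allow_pool_delete has to be
>> enabled first):
>>
>> ceph config set mon mon_allow_pool_delete true
>> ceph osd pool delete <pool-name> <pool-name> --yes-i-really-really-mean-it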
>>
>> Regards,
>> Eugen
>>
>>
>> Zitat von Richard Bade <hitrich@xxxxxxxxx>:
>>
>> > Hi Everyone,
>> > We're reducing back down from multisite to a single RGW zone. This
>> > will mean that some pools will be unused, so I'd like to delete them.
>> > However, some objects and data remain in the pools even though the
>> > buckets are all deleted. It's just shadow objects; all the actual
>> > data has been deleted.
>> > So my question is about the performance impact of deleting a pool
>> > with data in it. Will Ceph handle things nicely, or will it try to
>> > remove all that data at once?
>> > I'm on Nautilus 14.2.22 with all BlueStore OSDs on spinning disks
>> > with NVMe DBs. Pools are erasure coded k=4 m=2.
>> > I'm thinking it might be best to do a 'rados ls' and loop over
>> > 'rados rm' on the objects to control the speed.
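>> > Something like this, for example (untested; the sleep is just there
>> > to throttle the deletes):
>> >
>> > rados -p <pool-name> ls | while read obj; do
>> >     rados -p <pool-name> rm "$obj"
>> >     sleep 0.01
>> > done
>> >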
>> > There is one mailing list thread from 7 years ago saying basically
>> > the same thing, but I was wondering if anyone else had any input on
>> > this?
>> >
>> > Thanks,
>> > Rich
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



