Re: Deleting a pool with data

If you're using the autoscaler, there may be some knock-on PG splitting or merging, but that should be throttled automatically.  Do make sure that your mon DBs are on SSDs.
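For anyone finding this thread later, the three approaches discussed below can be sketched roughly as follows. This is a hedged sketch, not a recipe: `my-old-pool` is a hypothetical pool name, the sleep interval is an arbitrary illustration of throttling, and `ceph osd pool delete` additionally requires `mon_allow_pool_delete = true` on the mons before it will run. Test on a lab cluster first.

```shell
#!/bin/sh
# Sketch of the options discussed in this thread. Pool name is hypothetical.
POOL=my-old-pool

# Option 1 (discouraged in the thread): manually loop over the objects,
# sleeping between removals to throttle the deletion rate ourselves.
rados -p "$POOL" ls | while read -r obj; do
    rados -p "$POOL" rm "$obj"
    sleep 0.01   # arbitrary throttle; tune to your cluster
done

# Option 2: let rados iterate the pool for you.
rados purge "$POOL" --yes-i-really-really-mean-it

# Option 3 (suggested as the best option here): delete the pool itself
# and let the OSDs clean up asynchronously. Requires mon_allow_pool_delete
# to be enabled, and the pool name must be given twice as a safety check.
ceph osd pool delete "$POOL" "$POOL" --yes-i-really-really-mean-it
```

In practice you would run only one of the three, not all of them in sequence as the sketch lists them.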

> On Mar 6, 2025, at 7:26 AM, Eugen Block <eblock@xxxxxx> wrote:
> 
> Hi Rich,
> 
> I waited for other users/operators to chime in, because it's been a while since we last deleted a large pool in a customer cluster. I may misremember, so please take this with a grain of salt. The pool deletion I'm referring to was also on Nautilus, and I just repeated it in a small lab cluster to confirm my memory.
> I would not recommend deleting the objects by looping over 'rados ls'. By the way, there's a 'rados purge' command which loops through the pool for you:
> 
> rados purge <pool-name> --yes-i-really-really-mean-it
> 
> If you delete the pool itself (ceph osd pool delete), the OSDs' DBs will have some extra cleanup work to do, but I believe that's the best option here. I'd still like someone else to confirm that, though.
> 
> Regards,
> Eugen
> 
> 
> Zitat von Richard Bade <hitrich@xxxxxxxxx>:
> 
>> Hi Everyone,
>> We're reducing from multisite back down to a single RGW zone. This
>> means some pools will be unused, so I'd like to delete them.
>> However, some objects and data remain in the pools even though the
>> buckets have all been deleted. It's just shadow objects; all the
>> actual data has already been deleted.
>> So my question is about the performance impact of deleting a pool
>> that still has data in it. Will Ceph handle this gracefully, or will
>> it try to remove all that data at once?
>> I'm on Nautilus 14.2.22 with all-BlueStore OSDs on spinning disks
>> with NVMe DB devices. The pools are erasure coded with k=4 m=2.
>> I'm thinking it might be best to run 'rados ls' and loop over
>> 'rados rm' on the objects to control the deletion speed.
>> There is one mailing-list thread from 7 years ago that basically says
>> the same, but I was wondering if anyone else had any input on this?
>> 
>> Thanks,
>> Rich
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
> 
> 



