Cannot remove bucket due to missing placement rule

I created a new placement target/pool. I don't have the exact commands anymore, but it was something similar to:
---
$ radosgw-admin zonegroup placement add \
      --rgw-zonegroup default \
      --placement-id temporary

$ radosgw-admin zone placement add \
      --rgw-zone default \
      --placement-id temporary \
      --data-pool default.rgw.temporary.data \
      --index-pool default.rgw.temporary.index \
      --data-extra-pool default.rgw.temporary.non-ec
---
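
For completeness: the configured targets can be listed (if I remember the subcommands correctly) with:
---
$ radosgw-admin zonegroup placement list --rgw-zonegroup default
$ radosgw-admin zone placement list --rgw-zone default
---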


I then created a new bucket that uses "default.rgw.temporary.data" as its data pool.
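
I don't remember exactly how I created it, but placement is chosen at bucket creation time via the LocationConstraint, so it would have been something like this (the endpoint URL is just a placeholder):
---
$ aws s3api create-bucket --bucket tempbucket \
      --endpoint-url http://rgw.example.com:8080 \
      --create-bucket-configuration LocationConstraint=default:temporary
---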

Without removing the bucket first, I removed the placement target/rule.
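
Again, I don't have the exact commands, but presumably something like:
---
$ radosgw-admin zone placement rm \
      --rgw-zone default \
      --placement-id temporary

$ radosgw-admin zonegroup placement rm \
      --rgw-zonegroup default \
      --placement-id temporary
---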

I realised that I still had a bucket, so I tried to remove it:
---
$ radosgw-admin bucket rm --bucket=tempbucket
2024-08-26T14:12:08.722+0200 7f6c88a73fc0 0 could not find placement rule temporary within zonegroup

2024-08-26T14:12:08.722+0200 7f6c88a73fc0 0 ERROR: int RGWRados::Bucket::List::list_objects_unordered(const DoutPrefixProvider*, int64_t, std::vector<rgw_bucket_dir_entry>*, std::map<std::__cxx11::basic_string<char>, bool>*, bool*, optional_yield) cls_bucket_list_unordered returned -22 for :tempbucket[d9c26db8-925f-4c6c-838d-6e886ec345ca.694015388.44])
---
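
If I read the error correctly, -22 is EINVAL: the bucket instance still references the deleted placement target, so RGW cannot resolve its index pool. The dangling reference should be visible in the bucket instance metadata (the instance ID is taken from the error above):
---
$ radosgw-admin metadata get \
      bucket.instance:tempbucket:d9c26db8-925f-4c6c-838d-6e886ec345ca.694015388.44
---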


So I tried to remove the bucket in a different way:
---
$ radosgw-admin metadata rm bucket:tempbucket
$ radosgw-admin bucket rm --bucket=tempbucket
---
Both commands appeared to succeed.
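
My understanding (I may be wrong here) is that "metadata rm bucket:tempbucket" only removes the bucket entrypoint, while the bucket instance and its index objects stay behind. The leftover instance should still show up with:
---
$ radosgw-admin metadata list bucket.instance | grep tempbucket
---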

I thought the bucket was gone, but my rgw containers are now crashing while trying to trim its bucket index log, so it seems the bucket is not really removed:
---
debug    -2> 2024-08-26T12:33:21.291+0000 7eff1bc70700  0 could not find placement rule temporary within zonegroup
debug    -1> 2024-08-26T12:33:21.291+0000 7eff1bc70700  0 ERROR: open_bucket_index_shard() returned ret=-22
debug     0> 2024-08-26T12:33:21.295+0000 7eff1bc70700 -1 *** Caught signal (Segmentation fault) **
 in thread 7eff1bc70700 thread_name:sync-log-trim

ceph version 15.2.17 (8a82819d84cf884bd39c17e3236e0632ac146dc4) octopus (stable)
 1: (()+0x12ce0) [0x7eff4530cce0]
 2: (()+0x97b01) [0x7eff4f8a6b01]
 3: (librados::v14_2_0::IoCtx::aio_operate(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, librados::v14_2_0::AioCompletion*, librados::v14_2_0::ObjectWriteOperation*)+0x74) [0x7eff4f87f3b4]
 4: (RGWRadosBILogTrimCR::send_request()+0x1c7) [0x7eff5029fb07]
 5: (RGWSimpleCoroutine::state_send_request()+0x13) [0x7eff50297483]
 6: (RGWSimpleCoroutine::operate()+0xac) [0x7eff5029ca5c]
 7: (RGWCoroutinesStack::operate(RGWCoroutinesEnv*)+0x67) [0x7eff5029a287]
 8: (RGWCoroutinesManager::run(std::__cxx11::list<RGWCoroutinesStack*, std::allocator<RGWCoroutinesStack*> >&)+0x271) [0x7eff5029b0b1]
 9: (RGWSyncLogTrimThread::process()+0x200) [0x7eff503636d0]
 10: (RGWRadosThread::Worker::entry()+0x176) [0x7eff5032bc26]
 11: (()+0x81ca) [0x7eff453021ca]
 12: (clone()+0x43) [0x7eff43948dd3]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
---
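
The crash happens in the sync-log-trim thread, so something apparently still finds bilog entries for the removed bucket instance. If it helps, I assume any leftover index/bilog shard objects would be visible directly in the index pool (assuming the pool itself still exists):
---
$ rados -p default.rgw.temporary.index ls | grep d9c26db8-925f-4c6c-838d-6e886ec345ca.694015388.44
---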


If required for a fix, I can recreate the bucket metadata: I still have the output of "radosgw-admin metadata get bucket:tempbucket > tempbucket_metadata.json", which I can re-import with:
---
$ radosgw-admin metadata put bucket:tempbucket < tempbucket_metadata.json
---


What is the best way to recover from this? Recreate the placement rule? Or are there other options? I just want to get rid of the bucket, so that my rgw containers stop crashing while trying to trim it.
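
If recreating the placement rule is the way to go, I assume the procedure would look something like this (untested sketch, reusing the pool names from above):
---
# Recreate the placement target/rule so RGW can resolve the pools again
$ radosgw-admin zonegroup placement add \
      --rgw-zonegroup default \
      --placement-id temporary

$ radosgw-admin zone placement add \
      --rgw-zone default \
      --placement-id temporary \
      --data-pool default.rgw.temporary.data \
      --index-pool default.rgw.temporary.index \
      --data-extra-pool default.rgw.temporary.non-ec

# Restore the bucket entrypoint that I removed earlier
$ radosgw-admin metadata put bucket:tempbucket < tempbucket_metadata.json

# Remove the bucket cleanly, including any leftover objects
$ radosgw-admin bucket rm --bucket=tempbucket --purge-objects
---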

Thanks!


