Re: ceph stuck removing image from trash

I haven't done much with rbd trash yet, but you should probably still see rbd_data.43def5e07bf47 objects in that pool, correct? What if you deleted those objects in a for loop to "help purge"? I'm not sure that would work, though.
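Untested sketch of what I mean, assuming the rados CLI and the pool name and block_name_prefix from the rbd info output quoted below:

# rados -p replicapool ls | grep '^rbd_data\.43def5e07bf47\.' | while read -r obj; do rados -p replicapool rm "$obj"; done

Listing a pool that size may itself take a very long time, though.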


Quoting Anthony D'Atri <anthony.datri@xxxxxxxxx>:

Perhaps setting the object-map feature on the image, and/or running rbd object-map rebuild? Though I suspect that might perform an equivalent process and take just as long?
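If you wanted to try that, it would presumably look something like the following untested sequence; note that the image would have to come out of the trash first, and object-map depends on exclusive-lock, which this image does not have enabled:

# rbd trash restore -p replicapool 43def5e07bf47
# rbd feature enable replicapool/csi-vol-cfaa1b00-1711-11eb-b9c9-2aa51e1e24e5 exclusive-lock object-map
# rbd object-map rebuild replicapool/csi-vol-cfaa1b00-1711-11eb-b9c9-2aa51e1e24e5
# rbd rm replicapool/csi-vol-cfaa1b00-1711-11eb-b9c9-2aa51e1e24e5

The rebuild itself has to walk the whole object range, which is why it might not save any time.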

On Dec 15, 2020, at 11:49 PM, 胡 玮文 <huww98@xxxxxxxxxxx> wrote:

Hi Andre,

I once faced the same problem. It turns out that Ceph needs to scan every object in the image when deleting it if the object map is not enabled, and that will take years on such a huge image. I ended up deleting the whole pool to get rid of the huge image.
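Back-of-the-envelope, using the numbers from the rbd info output below: 1 EiB at 4 MiB per object is 2^38 = 274877906944 objects, and 32 years is roughly 10^9 seconds, so the progress estimate corresponds to only a few hundred object checks per second.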

Maybe you can scan all the objects in the pool and manually remove those belonging to this image, but I don't know how.

On Dec 16, 2020, at 15:07, Andre Gebers <andre.gebers@xxxxxxxxxxxx> wrote:

Hi,

I'm running a 15.2.4 test cluster in a rook-ceph environment. The cluster is reporting HEALTH_OK, but it seems to be stuck removing an image. The last section of the 'ceph status' output:

progress:
  Removing image replicapool/43def5e07bf47 from trash (6h)
    [............................] (remaining: 32y)

This has been going on for a couple of weeks now, and I was wondering if there is a way to speed it up? The cluster doesn't seem to be doing much, judging by the system load.

I created this largish image to test what is possible with the setup, but how do I get it out of the trash now?

# rbd info --image-id 43def5e07bf47 -p replicapool
rbd image 'csi-vol-cfaa1b00-1711-11eb-b9c9-2aa51e1e24e5':
      size 1 EiB in 274877906944 objects
      order 22 (4 MiB objects)
      snapshot_count: 0
      id: 43def5e07bf47
      block_name_prefix: rbd_data.43def5e07bf47
      format: 2
      features: layering
      op_features:
      flags:
      create_timestamp: Sun Oct 25 22:31:23 2020
      access_timestamp: Sun Oct 25 22:31:23 2020
      modify_timestamp: Sun Oct 25 22:31:23 2020

Any pointers on how to resolve this issue are much appreciated.

Regards
Andre


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



