Re: What does 'removed_snaps_queue' [d5~3] mean?

It would be helpful to know what exactly happened. Who creates the snapshots, and how? What are your clients, OpenStack compute nodes? If 'rbd ls' shows the image, does 'rbd status <pool>/<image>' display any information as well, or does it return an error? This is a recurring issue when client connections break; something similar is reported once in a while on the openstack-discuss mailing list. Sometimes a compute node reboot helps to get rid of the lock, or blocklisting the client. But without more details it's difficult to say what exactly the issue is and what would help.
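For reference, blocklisting a dead client would look roughly like the sketch below; the client address is a placeholder, and note that the subcommand is still called "blacklist" on Octopus and was only renamed to "blocklist" in later releases:

```
# List current entries ("blacklist" on Octopus and earlier, "blocklist" later).
ceph osd blacklist ls

# Block the dead client so its watch/lock expires; the address is a placeholder.
ceph osd blacklist add 192.168.0.10:0/123456789

# Equivalent on Pacific and newer releases:
# ceph osd blocklist add 192.168.0.10:0/123456789
```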

Quoting Work Ceph <work.ceph.user.mailing@xxxxxxxxx>:

I see, thanks for the reply!

BTW, while the snapshots have not been removed yet, should we be able
to delete an image whose snapshots are being deleted?

We noticed the following for images whose snapshots were deleted but
not yet actually removed from the system:
```
This means the image is still open or the client using it crashed. Try
again after closing/unmapping it or waiting 30s for the crashed client to
timeout.
```

After executing "rbd rm", we receive that message, and the image is
still listed by the "rbd ls" command. However, "rbd info" returns a
message saying that the image does not exist. Is that a known
issue/situation?
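For context, a minimal sketch of how one could check what still holds such an image open before retrying the removal; pool and image names are placeholders:

```
# Does anything still watch the image header? A live client shows up here.
rbd status <pool>/<image>

# Are there stale exclusive locks left behind by a crashed client?
rbd lock list <pool>/<image>
# If a stale lock turns up:
# rbd lock remove <pool>/<image> <lock-id> <locker>

# Retry the removal once no watchers or locks remain.
rbd rm <pool>/<image>
```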

On Sat, Aug 26, 2023 at 5:24 AM Eugen Block <eblock@xxxxxx> wrote:

Hi,

that specifies a range of snapshots that are (to be) removed. Do you
have rbd mirroring configured, or some scripted snapshot
creation/deletion? Snapshot deletion is an asynchronous operation, so
snapshots are added to the queue and deleted at some point. Does the
status/range change? Which exact Octopus version are you running? I
have two test clusters (latest Octopus) with rbd mirroring, and when I
set that up I expected to see something similar; in earlier Ceph
versions that was visible in the 'ceph osd pool ls detail' output.
Anyway, I wouldn't worry about it as long as the queue doesn't grow
and the snaps are removed eventually. You should see the snap trimming
in the 'ceph -s' output as well; the PGs have a corresponding state
(active+snaptrim or active+snaptrim_wait). I write this from memory,
so the PG state might differ a bit.
You just need to be aware of the impact of many snapshots across many
images; I'm still investigating a customer issue, and I posted some of
the results in this list [1].
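
For reference, a rough sketch of how one might keep an eye on the queue and the trimming; the "d5~3" notation is an interval (start snap id in hex, followed by a count), and depending on the release the queue shows up in 'ceph osd dump' and/or 'ceph osd pool ls detail':

```
# Show per-pool details, including removed_snaps_queue entries if present.
ceph osd pool ls detail
ceph osd dump | grep removed_snaps_queue

# Watch for PGs in snaptrim states while the queue is processed.
ceph -s
ceph pg dump pgs_brief 2>/dev/null | grep -c snaptrim
```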

Regards,
Eugen

[1]

https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/ZEMGKBLMEREBZB7SWOLDA6QZX3S7FLL3/#YAHVTTES6YU5IXZJ2UNXKURXSHM5HDEX

Quoting Work Ceph <work.ceph.user.mailing@xxxxxxxxx>:

> Hello guys,
> We are seeing an unexpected entry in one of our pools. Do you guys
> know what "removed_snaps_queue" means? We see some notation, such as
> "d5~3", after this tag. What does it mean? We tried to look into the
> docs, but could not find anything meaningful.
>
> We are running Ceph Octopus on top of Ubuntu 18.04.



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



