Re: Accumulation of removed_snaps_queue After Deleting Snapshots in Ceph RBD

> 24   active+clean+snaptrim

I see snaptrimming happening in your status output - do you know if
that was happening before restarting those OSDs? This is the mechanism
by which OSDs clean up deleted snapshots, and once all OSDs have
completed snaptrim for a given snapshot it should be removed from the
removed_snaps_queue.
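If it helps to quantify progress, the per-pool queue shows up in `ceph osd dump` output as a `removed_snaps_queue` interval set such as `[1~3,8~2]`, where each entry is <first-snapid>~<count>. A quick sketch to total the snapshots still awaiting snaptrim from that notation (the bracketed interval-set format here is my assumption; adjust the pattern to whatever your cluster actually prints):

```python
import re

def pending_snaps(interval_set: str) -> int:
    """Sum the counts in a Ceph interval_set like "[1~3,8~2]".

    Each entry is <first-snapid>~<count>; the total is the number of
    deleted snapshots still queued for snaptrim on that pool.
    """
    total = 0
    for _first, count in re.findall(r"(\d+)~(\d+)", interval_set):
        total += int(count)
    return total

# 3 snaps starting at id 1, plus 2 starting at id 8 -> 5 pending
print(pending_snaps("[1~3,8~2]"))  # 5
```

Watching that total shrink over time (while PGs cycle through active+clean+snaptrim) is a reasonable way to confirm trimming is actually making progress.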

> ceph version 16.2.7

You may want to consider upgrading. 16.2.8 has a fix for
https://tracker.ceph.com/issues/52026, an issue that can prevent
snaptrim from running in some circumstances.

Josh

On Mon, Feb 12, 2024 at 6:00 PM localhost Liam <imluyuan@xxxxxxxxx> wrote:
>
> Thanks, the storage pressure is much lower now.
> 0. I restarted 30 OSDs on one machine; the queue did not shrink, but a large amount of storage space was released.
> 1. Why did restarting the OSDs release so much space?
>
>
> Here are Ceph details..
>
> ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)
>
>   cluster:
>     id:     9acc3734-b27b-4bc3-84b8-c7762f2294c6
>     health: HEALTH_OK
>
>   services:
>     mon: 3 daemons, quorum onf-akl-stor001,onf-akl-stor002,onf-akl-stor003 (age
> 11d)
>     mgr: onf-akl-stor001(active, since 3M), standbys: onf-akl-stor002
>     osd: 101 osds: 98 up (since 41s), 98 in (since 11d)
>     rgw: 2 daemons active (2 hosts, 1 zones)
>
>   data:
>     pools:   7 pools, 2209 pgs
>     objects: 25.47M objects, 58 TiB
>     usage:   115 TiB used, 184 TiB / 299 TiB avail
>     pgs:     2183 active+clean
>              24   active+clean+snaptrim
>              2    active+clean+scrubbing+deep
>
>   io:
>     client:   38 MiB/s rd, 226 MiB/s wr, 1.32k op/s rd, 2.27k op/s wr
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx