Re: Accumulation of removed_snaps_queue After Deleting Snapshots in Ceph RBD


 



Thanks, we are a lot less stressed about this now.
0. I restarted the 30 OSDs on one machine; the removed_snaps_queue did not shrink, but a large amount of storage space was released.
1. Why did restarting the OSDs release so much space? (see the commands sketched below)
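
For reference, a rough sketch of how the queue and the trimming progress can be watched (assuming Octopus or later, which print removed_snaps_queue in the pool detail; adjust for your own cluster):

# per-pool removed_snaps_queue entries
ceph osd pool ls detail | grep removed_snaps_queue

# PGs currently in snaptrim / snaptrim_wait, i.e. still deleting snapshot data
ceph pg dump pgs_brief 2>/dev/null | grep -c snaptrim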


Here are the Ceph details:

ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)

  cluster:
    id:     9acc3734-b27b-4bc3-84b8-c7762f2294c6
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum onf-akl-stor001,onf-akl-stor002,onf-akl-stor003 (age 11d)
    mgr: onf-akl-stor001(active, since 3M), standbys: onf-akl-stor002
    osd: 101 osds: 98 up (since 41s), 98 in (since 11d)
    rgw: 2 daemons active (2 hosts, 1 zones)

  data:
    pools:   7 pools, 2209 pgs
    objects: 25.47M objects, 58 TiB
    usage:   115 TiB used, 184 TiB / 299 TiB avail
    pgs:     2183 active+clean
             24   active+clean+snaptrim
             2    active+clean+scrubbing+deep

  io:
    client:   38 MiB/s rd, 226 MiB/s wr, 1.32k op/s rd, 2.27k op/s wr
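
The 24 PGs in active+clean+snaptrim above are the ones actually releasing space; how quickly they drain the queue is throttled by the OSD snap-trim settings. A hedged, read-only sketch for inspecting them (option names are the standard ones; values are cluster-specific):

# snap trim throttling knobs
ceph config get osd osd_snap_trim_sleep_hdd
ceph config get osd osd_pg_max_concurrent_snap_trims
ceph config get osd osd_max_trimming_pgs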


