Re: Ghost usage on pool and unable to reclaim free space.

Hi Khodayar

That rados purge command did not actually seem to delete anything; the pool
is still using 90 GiB as before. I have 4 pools on these OSDs, but
the "fast" pool should be empty and still shows 90 GiB.

Looking at the PGs in `ceph pg ls`, the LOG column accounts for very little
space on the pool. Are there any other logs?

# ceph osd pool ls detail
pool 42 'cephfs_metadata' replicated size 2 min_size 1 crush_rule 0
object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change
186707 lfor 0/124366/185624 flags hashpspool stripe_width 0 application
cephfs
pool 51 'rbd' replicated size 2 min_size 1 crush_rule 0 object_hash
rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 186709 lfor
0/186698/186696 flags hashpspool,selfmanaged_snaps stripe_width 0
compression_algorithm lz4 compression_mode aggressive
compression_required_ratio 0.9 application rbd
       removed_snaps [1~9]
pool 53 'kube' replicated size 2 min_size 1 crush_rule 0 object_hash
rjenkins pg_num 32 pgp_num 32 autoscale_mode warn last_change 186711 lfor
0/185831/185829 flags hashpspool,selfmanaged_snaps stripe_width 0
application rbd
       removed_snaps [1~3]
pool 54 'fast' replicated size 2 min_size 1 crush_rule 0 object_hash
rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 186713 flags
hashpspool,selfmanaged_snaps stripe_width 0 compression_algorithm lz4
compression_mode aggressive compression_required_ratio 0.9 application
cephfs
       removed_snaps [52~2,55~2,59~2]

# ceph osd df tree
ID  CLASS WEIGHT  REWEIGHT SIZE    RAW USE DATA    OMAP    META    AVAIL   %USE  VAR  PGS STATUS TYPE NAME
175 nvme  1.81898 1.00000  1.8 TiB 361 GiB 342 GiB 9.7 GiB 9.8 GiB 1.5 TiB 19.38 0.23 160     up      osd.175
176 nvme  1.81898 1.00000  1.8 TiB 361 GiB 342 GiB 9.5 GiB 9.6 GiB 1.5 TiB 19.36 0.23 160     up      osd.176
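As a quick arithmetic cross-check (not a ceph command, just the rounded
figures from the listing above): RAW USE should come out to roughly
DATA + OMAP + META per OSD, which shows omap/RocksDB metadata is only about
10 GiB per OSD here.

```python
# Per-OSD figures copied from the `ceph osd df tree` output above, in GiB.
osds = {
    "osd.175": {"raw_use": 361, "data": 342, "omap": 9.7, "meta": 9.8},
    "osd.176": {"raw_use": 361, "data": 342, "omap": 9.5, "meta": 9.6},
}

for name, o in osds.items():
    accounted = o["data"] + o["omap"] + o["meta"]
    # Values in the ceph output are rounded, so expect ~1 GiB of slack.
    print(f"{name}: raw_use={o['raw_use']} GiB, data+omap+meta={accounted:.1f} GiB")
```

Both OSDs add up to within rounding error, so the space is in DATA, not in
omap or BlueStore metadata.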

# ceph df
POOLS:
    POOL                ID     STORED      OBJECTS     USED        %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY       USED COMPR     UNDER COMPR
    cephfs_metadata     42     320 MiB       2.82M     320 MiB      0.01       1.4 TiB     N/A               N/A               2.82M            0 B             0 B
    rbd                 51     417 GiB     106.93k     417 GiB     12.32       1.4 TiB     N/A               N/A             106.93k         83 GiB         167 GiB
    kube                53        38 B           3        38 B         0       1.4 TiB     N/A               N/A                   3            0 B             0 B
    fast                54      90 GiB       5.25M      90 GiB      2.93       1.4 TiB     N/A               N/A               5.25M            0 B             0 B
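As an aside on the compression columns (my reading of the `ceph df` output,
worth double-checking against the docs): UNDER COMPR is the amount of data
that went through compression, and USED COMPR is what it occupies on disk.
For the 'rbd' pool that gives an achieved ratio comfortably under the
`compression_required_ratio 0.9` configured on the pool:

```python
# Compression columns for the 'rbd' pool from the `ceph df` output above.
used_compr_gib = 83    # USED COMPR: on-disk size of compressed data
under_compr_gib = 167  # UNDER COMPR: original size of that data

ratio = used_compr_gib / under_compr_gib
print(f"achieved ratio: {ratio:.2f}")  # ~0.50, well under the 0.9 required_ratio
```

The 'fast' pool shows 0 B in both columns, so none of its 90 GiB is sitting
in compressed blobs; compression is not hiding anything here.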

# ceph pg ls
PG     OBJECTS DEGRADED MISPLACED UNFOUND BYTES        OMAP_BYTES* OMAP_KEYS* LOG  STATE        SINCE VERSION         REPORTED         UP...
...
54.3a    81396        0         0       0   1512202599           0          0 3020 active+clean    9m   186704'339296   186722:615571
54.3b    82612        0         0       0   1556478139           0          0 3073 active+clean    9m   186706'340432   186722:609005
54.3c    82526        0         0       0   1553816829           0          0 3031 active+clean    9m   186704'806269   186722:1175811
54.3d    82144        0         0       0   1454278881           0          0 3016 active+clean    9m   186706'339167   186722:743880
54.3e    82740        0         0       0   1501081654           0          0 3056 active+clean    9m   186704'343212   186722:621323
54.3f    80846        0         0       0   1474357106           0          0 3048 active+clean    9m   186706'333741   186722:733149

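A rough extrapolation from the six PGs shown (my arithmetic, assuming the
other 58 PGs of the pool's pg_num 64 look similar): the BYTES column alone
reproduces the 90 GiB that `ceph df` reports, and the PG log (an entry
count, not bytes) is far too small to matter:

```python
# BYTES and LOG columns for PGs 54.3a-54.3f from the `ceph pg ls` output above.
pg_bytes = [1512202599, 1556478139, 1553816829, 1454278881, 1501081654, 1474357106]
pg_log_entries = [3020, 3073, 3031, 3016, 3056, 3048]

# Extrapolate the average PG size to all 64 PGs of pool 54 ('fast').
avg_bytes = sum(pg_bytes) / len(pg_bytes)
est_total_gib = avg_bytes * 64 / 2**30
print(f"estimated pool size: {est_total_gib:.1f} GiB")  # ~90 GiB

# The PG log is a bounded per-PG metadata journal; ~3000 entries per PG
# cannot account for the missing space.
avg_log = sum(pg_log_entries) / len(pg_log_entries)
print(f"average PG log entries: {avg_log:.0f}")
```

So the 90 GiB really is object data the PGs still account for, even though
`rados stat` says the individual objects no longer exist.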
On Sun, Apr 19, 2020 at 4:00 AM Khodayar Doustar <doustar@xxxxxxxxxxxx>
wrote:

> Hi Kári,
>
> Your purge command removed only about 2.6M objects, so I guess the rest of
> the 5.25M objects would be something else, like logs.
> Have you checked `ceph osd df detail` and `ceph osd metadata`?
> I have a case in which the OSDs' BlueStore logs are eating up my whole
> space; maybe you are facing a similar one.
>
> Regards,
> Khodayar
>
> On Sun, Apr 19, 2020 at 6:47 AM Kári Bertilsson <karibertils@xxxxxxxxx>
> wrote:
>
>> Hello
>>
>> Running ceph v14.2.8 on everything. The pool uses replicated_rule with
>> size/min_size 2 on 2 OSDs. I have scrubbed and deep-scrubbed the OSDs.
>>
>> This pool was attached as a data pool to CephFS and contained a lot of
>> small files. I have since removed all files and detached the pool from
>> the fs. Somehow the pool is still using 90 GB.
>>
>> POOLS:
>>    POOL                ID     STORED      OBJECTS     USED        %USED     MAX AVAIL     QUOTA OBJECTS     QUOTA BYTES     DIRTY       USED COMPR     UNDER COMPR
>>    fast                54      90 GiB       5.25M      90 GiB      2.93       1.4 TiB     N/A               N/A               5.25M            0 B             0 B
>>
>> # rados -p fast ls|wc -l
>> 1312030
>>
>> # rados -p fast stat 100073acce3.00000000
>> error stat-ing fast/100073acce3.00000000: (2) No such file or directory
>>
>> # rados -p fast rm 100073acce3.00000000
>> error removing fast>100073acce3.00000000: (2) No such file or directory
>>
>> # rados -p fast get 100073acce3.00000000 test
>> error getting fast/100073acce3.00000000: (2) No such file or directory
>>
>> I get the same error for every single object in the pool.
>>
>> # rados purge fast --yes-i-really-really-mean-it
>> Warning: using slow linear search
>> Removed 2625749 objects
>> successfully purged pool fast
>>
>> There were 5.25M objects in the pool both before and after running this
>> command, and no change in ceph df.
>>
>> Any ideas on how to reclaim the free space? I can remove and recreate the
>> pool,
>> but I would like to know why this happens and how to deal with this
>> situation when I don't have that privilege.
>>
>> Best regards
>> Kári Bertilsson
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>
>