Re: CephFS: EC pool with "leftover" objects

I'm adding to this that the 'ceph osd pool force-remove-snap <pool_name>' command should only be run ONCE.

Running it twice or more is useless and can lead to OSDs crashing (they restart fine, but still... they crash, all at the same time).
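
If in doubt, preview the removal first and then run it exactly once. A minimal
sketch, assuming the pool is named cephfs_data_ec as in the thread below:

$ ceph osd pool force-remove-snap cephfs_data_ec --dry-run
$ ceph osd pool force-remove-snap cephfs_data_ec

You can then watch the clone count drop in the 'ceph df' output instead of
re-running the command.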

Cheers,
Frédéric.

----- On Feb 3, 2025, at 8:48, Frédéric Nass frederic.nass@xxxxxxxxxxxxxxxx wrote:

> Hi Robert,
> 
> I'm sorry, I overlooked the 'rados rm' part in your first message. The fact
> that you and Felix reported a rados stat / rados rm issue for different use
> cases (CephFS and RBD+iSCSI) had me connecting the dots...
> We saw something similar last year with rados objects we couldn't delete
> (RGW workload). We had approx. 700 TB of RAW data that we couldn't get rid of...
> 
> If:
> 
> - these 242 million objects show up in the CLONES column of the output of
> ceph df | grep -E 'CLONES|cephfs_data_ec'
> AND
> - for these rados objects you only see a snap_id and NO head object:
> 
> $ rados -p cephfs_data_ec listsnaps 1001275b3fe.00000241
> 1001275b3fe.00000241:
> cloneid	snaps	size	overlap
> 3	3	4194304	[]
> head	-	4194304              <--- you don't see this line
> 
> Then you may be facing this bug [1] related to orphan clones, a.k.a. leaked
> snapshots, which was fixed by [2].
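> 
> To estimate how many objects are affected, here is a rough, untested sketch
> that lists the pool and flags every object whose listsnaps output has no head
> line (it walks all objects, so expect it to take a long time on 242 million):
> 
> $ rados -p cephfs_data_ec ls | while read -r obj; do
>     rados -p cephfs_data_ec listsnaps "$obj" | grep -q '^head' || echo "$obj"
>   done | wc -l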
> 
> To get rid of these orphan clones, you need to run the below command on the
> pool. See [3] for details.
> 
> $ ceph osd pool force-remove-snap cephfs_data_ec
> 
> Note that there's a --dry-run option you can use.
> 
> Let us know how it goes.
> 
> Cheers,
> Frédéric.
> 
> [1] https://tracker.ceph.com/issues/64646
> [2] https://github.com/ceph/ceph/pull/55841
> [3] https://github.com/ceph/ceph/pull/53545
> 
> ----- On Jan 29, 2025, at 18:09, Frédéric Nass frederic.nass@xxxxxxxxxxxxxxxx
> wrote:
> 
>> Hi Robert,
>> 
>> How did you move these files?
>> 
>> The 'mv' operation on existing files in the CephFS tree will ** not ** move
>> rados objects between pools. Only new files created in a folder backed by a
>> specific pool will fall into that pool. There may be a lack of documentation
>> here.
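>> 
>> To verify which pool actually backs a given file, you can read its virtual
>> layout xattr; a quick check, assuming the filesystem is mounted at
>> /mnt/cephfs:
>> 
>> $ getfattr -n ceph.file.layout.pool /mnt/cephfs/path/to/file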
>> 
>> Please read this blog post [1] before taking any action. It provides a Python
>> script [2] that helps move existing files between rados pools.
>> 
>> Regards,
>> Frédéric.
>> 
>> [1] https://ewal.dev/cephfs-migrating-files-between-pools
>> [2] https://gist.github.com/ervwalter/5ff6632c930c27a1eb6b07c986d7439b
>> 
>> ----- On Jan 24, 2025, at 14:09, Robert Sander r.sander@xxxxxxxxxxxxxxxxxxx
>> wrote:
>> 
>>> Hi,
>>> 
>>> there is an old cluster (9 years old) that gets upgraded regularly and is
>>> currently running version 17.2.7.
>>> 
>>> 3 years ago (when running version 16) a new EC pool was added to the
>>> existing CephFS to be used with the directory layout feature.
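>>> 
>>> For reference, attaching an extra data pool and pointing a directory at it
>>> typically looks like this (pool name and mount path are illustrative):
>>> 
>>> $ ceph fs add_data_pool <fs_name> cephfs_data_ec
>>> $ setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/some/dir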
>>> 
>>> Now it was decided to remove that pool again. On the filesystem level
>>> all files have been copied to the original replicated pool and then
>>> deleted. No files or directories reference the EC pool in their extended
>>> attributes anymore.
>>> 
>>> But still this EC pool has approx. 242 million objects with a total of ca.
>>> 600 TB of data stored. This shows up in "ceph df" and "ceph pg dump".
>>> 
>>> The objects can be listed with "rados ls", but a "rados stat" or "rados
>>> get" will yield an error:
>>> 
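>>> $ rados -p cephfs_data_ec stat 1001275b3fe.00000241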
>>> error stat-ing cephfs_data_ec/1001275b3fe.00000241: (2) No such file or
>>> directory
>>> 
>>> How can this be?
>>> Are these artifacts from snapshots that were not removed properly?
>>> 
>>> Is it really safe to remove this pool from CephFS and delete it?
>>> 
>>> Regards
>>> --
>>> Robert Sander
>>> Heinlein Support GmbH
>>> Linux: Akademie - Support - Hosting
>>> http://www.heinlein-support.de
>>> 
>>> Tel: 030-405051-43
>>> Fax: 030-405051-19
>>> 
>>> Mandatory disclosures per §35a GmbHG:
>>> HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
>>> Managing director: Peer Heinlein -- Registered office: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



