Hi Robert,
Although I would assume that deleting the pool is safe, I'd rather
get to the bottom of this first as well.
Do you still have access to the directories to check for snapshots
(.snap directories underneath the root filesystem mount)?
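For example, something like this (assuming the filesystem is mounted at /mnt/cephfs — adjust the path to your mount point):

```shell
# The .snap directory is virtual: it is never returned by readdir,
# so a plain "find -name .snap" will not discover it. Check it
# explicitly, starting at the filesystem root:
ls /mnt/cephfs/.snap

# Probe .snap in every directory of the tree; any non-empty .snap
# directory still pins the underlying RADOS objects.
find /mnt/cephfs -type d | while read -r d; do
    snaps=$(ls "$d/.snap" 2>/dev/null)
    [ -n "$snaps" ] && echo "$d: $snaps"
done
```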
Quoting Robert Sander <r.sander@xxxxxxxxxxxxxxxxxxx>:
Hi,
there is an old cluster (9 years) that has been continually upgraded
and is currently running version 17.2.7.
3 years ago (when running version 16) a new EC pool was added to the
existing CephFS to be used with the directory layout feature.
Now it was decided to remove that pool again. On the filesystem
level all files have been copied to the original replicated pool and
then deleted. No file or directory references the EC pool in its
extended attributes anymore.
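That was verified along these lines (the mount point and paths are placeholders — adjust to the actual filesystem):

```shell
# Show the data pool referenced by a directory's layout; directories
# without an explicit layout inherit from their parent and report
# "No such attribute".
getfattr -n ceph.dir.layout.pool /mnt/cephfs/some/dir

# Same check for an individual file.
getfattr -n ceph.file.layout.pool /mnt/cephfs/some/file
```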
But still this EC pool holds approximately 242 million objects with a
total of about 600 TB of data. This shows up in "ceph df" and "ceph pg dump".
The objects can be listed with "rados ls" but a "rados stat" or
"rados get" will yield an error:
error stat-ing cephfs_data_ec/1001275b3fe.00000241: (2) No such file
or directory
How can this be?
Are these artifacts from not properly removed snapshots?
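One way to check would be to look for snapshot clones of one of the listed objects — if an object only survives as clones after its head version was deleted, it still shows up in a listing while a stat on it fails (the pool and object names below are taken from the error message above):

```shell
# Objects are visible in a pool listing ...
rados -p cephfs_data_ec ls | head

# ... but stat on a listed object fails with ENOENT,
# because the head version no longer exists:
rados -p cephfs_data_ec stat 1001275b3fe.00000241

# If snapshot clones are what keeps the object around,
# listsnaps should show them:
rados -p cephfs_data_ec listsnaps 1001275b3fe.00000241
```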
Is it really safe to remove this pool from CephFS and delete it?
Regards
--
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de
Tel: 030-405051-43
Fax: 030-405051-19
Mandatory disclosures per § 35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Managing director: Peer Heinlein -- Registered office: Berlin
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx