Hi Venky,
On Wed, 14 Dec 2022, Venky Shankar wrote:
> On Tue, Dec 13, 2022 at 6:43 PM Sascha Lucas <ceph-users@xxxxxxxxx> wrote:
> > Just an update: "scrub / recursive,repair" does not uncover additional
> > errors. But it also does not fix the single dirfrag error.
> File system scrub does not clear entries from the damage list.
> The damage type you are running into ("dir_frag") implies that the
> object for directory "V_7770505" is lost (from the metadata pool).
> This results in the files under that directory being unavailable. The
> good news is that you can regenerate the lost object by scanning the
> data pool. This is documented here:
> https://docs.ceph.com/en/latest/cephfs/disaster-recovery-experts/#recovery-from-missing-metadata-objects
> (You need not run the cephfs-table-tool or cephfs-journal-tool
> commands though. Also, this could take time if you have lots of
> objects in the data pool.)
> Since you mention that you do not see directory "CV_MAGNETIC" and no
> other scrub errors are seen, it's possible that the application using
> cephfs removed it since it was no longer needed (the data pool might
> have some leftover objects though).
Thanks a lot for your help. Just to be clear: it's the directory structure
CV_MAGNETIC/V_7770505, where V_7770505 cannot be seen/found. But the
parent directory CV_MAGNETIC still exists.
However, this strengthens the idea that the application removed the
V_7770505 directory itself. Otherwise one would expect to still find/see
this directory, just empty. Right?
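As a cross-check, the lost dirfrag object could be stat'ed directly in
the metadata pool; a sketch, assuming the pool is named cephfs_metadata
and taking the decimal inode number from the "ino" field of the damage
entry:

  # the dir_frag damage entry reports the inode number of V_7770505
  ceph tell mds.<fs_name>:0 damage ls
  # dirfrag objects are named <ino-in-hex>.<frag>
  printf '%x\n' <ino>
  # "No such file or directory" confirms the object is really gone
  rados -p cephfs_metadata stat <ino-in-hex>.00000000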
If that is the case, there is no data to recover, just orphan objects
to clean up.
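Once that cleanup is done, the damage entry itself can presumably be
dropped by its id (a sketch, assuming MDS rank 0):

  # <damage_id> is the "id" field from "damage ls"
  ceph tell mds.<fs_name>:0 damage rm <damage_id>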
Also very helpful: knowing which parts of the disaster-recovery-experts
docs to run and which commands to skip. This seems to boil down to:
cephfs-data-scan init|scan_extents|scan_inodes|scan_links|cleanup
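Spelled out, per the linked docs, a sketch (assuming a data pool named
cephfs_data, and skipping the cephfs-table-tool and cephfs-journal-tool
steps as you suggested):

  cephfs-data-scan init
  cephfs-data-scan scan_extents cephfs_data
  cephfs-data-scan scan_inodes cephfs_data
  cephfs-data-scan scan_links
  cephfs-data-scan cleanup cephfs_data

  # the scan phases can be sharded across parallel workers to speed
  # things up, e.g. worker 0 of 4:
  cephfs-data-scan scan_extents --worker_n 0 --worker_m 4 cephfs_data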
The data pool has ~100M objects. I doubt the data scan can be done
while the filesystem is online/in use?
It just remains a mystery how this damage could happen...
Thanks, Sascha.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx