Tob <me@xxxxxxxxxx> wrote on Wed, 6 Jun 2018 at 22:21:
Hi!
Thank you for your reply.
I just did:
> The correct commands should be:
>
> ceph daemon <mds of rank 0> scrub_path / force recursive repair
> ceph daemon <mds of rank 0> scrub_path '~mdsdir' force recursive repair
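For reference, one way to check which daemon currently holds rank 0 is the standard CLI status output; either of these should list the active MDS for each rank by name:

  ceph fs status
  ceph mds stat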
The scrub_path commands returned instantly, and only the following appeared in the MDS's log file:
2018-06-06 16:05:52.467 7f19c6a70700 1 mds.node03 asok_command: scrub_path (starting...)
2018-06-06 16:05:52.467 7f19c6a70700 1 mds.node03 asok_command: scrub_path (complete)
2018-06-06 16:06:11.788 7f19c6a70700 1 mds.node03 asok_command: scrub_path (starting...)
2018-06-06 16:06:11.788 7f19c6a70700 1 mds.node03 asok_command: scrub_path (complete)
Recursive scrub runs in the background.
`damage ls` still returned as many damage entries as before.
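As an aside, the damage table can be dumped straight from the MDS admin socket; the daemon name here is just the one used further down in this mail:

  ceph daemon mds.node03 damage ls

It should return a JSON array of entries, each carrying an id and a damage_type.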
When I force-repaired one of the dentry damage entries, I got return_code -5:
> ceph daemon mds.node03 scrub_path /path/to/file force repair
{
"return_code": -5
}
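For completeness, a return_code of -5 is -EIO, i.e. an input/output error. A quick way to look up an errno value on the command line:

  python3 -c 'import errno, os; print(errno.errorcode[5], os.strerror(5))'

which prints "EIO Input/output error".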
Unfortunately, I started getting I/O errors on some of the files with damage_type dentry. :/
Out of desperation, I restarted mds.node03 (the old rank-0 mds) and up
came another mds. The errors disappeared. Is that expected?
The damage table is not persistent; its contents are lost when the MDS restarts. The problem is that a scrub without 'repair' marks an inode/dentry as damaged even if it only finds a minor issue (such as an outdated snap format). Please always add the 'repair' parameter when running the scrub_path command.
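Putting that together, the full pass with repair would be the commands quoted earlier, run against the rank-0 MDS, followed by a re-check of the damage table (mds.node03 is just the daemon name used in this thread):

  ceph daemon mds.node03 scrub_path / force recursive repair
  ceph daemon mds.node03 scrub_path '~mdsdir' force recursive repair
  ceph daemon mds.node03 damage ls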
Is there a chance of data corruption because of these errors?
Tobias Florek
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com