Hi, a quick search [0] turns up the same messages. A scrub with repair
seems to fix them. But wasn’t scrubbing what caused the recent issue in
the first place?
[0] https://silvenga.com/posts/notes-on-cephfs-metadata-recovery/
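
In case it helps, a minimal sketch of starting a repairing scrub from
the file system root (assuming a single file system named "cephfs" and
scrubbing from rank 0; adjust the name, rank and path for your setup):

  # Ask rank 0 to recursively scrub and repair starting at the root
  # (the fs name "cephfs" here is an assumption)
  ceph tell mds.cephfs:0 scrub start / recursive,repair

  # Check how far the scrub has progressed
  ceph tell mds.cephfs:0 scrub status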
Quoting Frank Schilder <frans@xxxxxx>:
Hi all,
I see error messages like these in the logs every now and then:
10:14:44 [ERR] unmatched rstat rbytes on single dirfrag 0x615, inode
has n(v2211 rc2038-01-18T21:22:13.000000+0100 b2506575730
9676264=3693+9672571), dirfrag has n(v2211
rc2025-01-24T10:14:44.628760+0100 b30517 102=3+99)
10:14:44 [ERR] unmatched fragstat size on single dirfrag 0x615,
inode has f(v67 m2025-01-24T10:14:44.628760+0100
9676556=3819+9672737), dirfrag has f(v67
m2025-01-24T10:14:44.628760+0100 102=3+99)
10
/var/log/ceph/ceph.log-20250123:2025-01-22T17:56:18.060011+0100
mds.ceph-11 (mds.6) 4 : cluster [ERR] unmatched fragstat on 0x641,
inode has f(v14 m2025-01-18T11:59:21.676346+0100 2801=913+1888),
dirfrags have f(v0 m2025-01-18T11:59:21.676346+0100 427=9+418)
/var/log/ceph/ceph.log-20250123:2025-01-22T17:56:18.076061+0100
mds.ceph-11 (mds.6) 5 : cluster [ERR] inconsistent rstat on inode
0x641, inode has n(v194 rc2038-01-07T22:42:17.000000+0100
b46230814357 2802=913+1889), directory fragments have n(v0
rc2032-04-29T16:39:38.000000+0200 b46204251164 428=9+419)
How critical are these? Everything seems to work normally.
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx