Hi Frank,
in that case I would probably wait a bit as well if no clients complain.
I guess one could try to scrub only a single directory instead of "/";
it should be possible to identify the affected directory from the log
output you provided.
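Something along these lines might work (just a rough sketch: "cephfs"
stands in for your file system name, the inode number 0x641 is taken
from your log, and I'm not sure offhand whether "dump inode" accepts
hex, so you may need the decimal form 1601):

  # resolve the inode from the error message to a path
  ceph tell mds.cephfs:6 dump inode 0x641

  # then scrub only that directory instead of "/"
  ceph tell mds.cephfs:6 scrub start /path/from/dump/inode recursive repair
  ceph tell mds.cephfs:6 scrub status

The rank :6 is the one that logged the error (mds.6 in your output).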
Have a calm weekend! ;-)
Eugen
Quoting Frank Schilder <frans@xxxxxx>:
Hi Eugen,
thanks for the fast response. My search did not find that blog,
thanks for sending the link.
Yes, our recent troubles have to do with forward scrub. Since nothing
crashes, I'm not sure whether these errors are serious and/or get fixed
on the fly. I think we will hold off on another forward scrub for a
while.
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
________________________________________
From: Eugen Block <eblock@xxxxxx>
Sent: Friday, January 24, 2025 11:40 PM
To: ceph-users@xxxxxxx
Subject: Re: unmatched rstat rbytes on single dirfrag
Hi, a quick search [0] shows the same messages. A scrub with repair
seems to fix that. But wasn’t scrubbing causing the recent issue in
the first place?
[0] https://silvenga.com/posts/notes-on-cephfs-metadata-recovery/
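If you decide to run it again at some point, the usual pattern on
recent releases should be something like this (a sketch, with "cephfs"
as a placeholder for your file system name):

  # forward scrub of the whole tree, repairing what it can
  ceph tell mds.cephfs:0 scrub start / recursive repair

  # check progress / outcome
  ceph tell mds.cephfs:0 scrub status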
Quoting Frank Schilder <frans@xxxxxx>:
Hi all,
I see error messages like these in the logs every now and then:
10:14:44 [ERR] unmatched rstat rbytes on single dirfrag 0x615, inode
has n(v2211 rc2038-01-18T21:22:13.000000+0100 b2506575730
9676264=3693+9672571), dirfrag has n(v2211
rc2025-01-24T10:14:44.628760+0100 b30517 102=3+99)
10:14:44 [ERR] unmatched fragstat size on single dirfrag 0x615,
inode has f(v67 m2025-01-24T10:14:44.628760+0100
9676556=3819+9672737), dirfrag has f(v67
m2025-01-24T10:14:44.628760+0100 102=3+99)
/var/log/ceph/ceph.log-20250123:2025-01-22T17:56:18.060011+0100
mds.ceph-11 (mds.6) 4 : cluster [ERR] unmatched fragstat on 0x641,
inode has f(v14 m2025-01-18T11:59:21.676346+0100 2801=913+1888),
dirfrags have f(v0 m2025-01-18T11:59:21.676346+0100 427=9+418)
/var/log/ceph/ceph.log-20250123:2025-01-22T17:56:18.076061+0100
mds.ceph-11 (mds.6) 5 : cluster [ERR] inconsistent rstat on inode
0x641, inode has n(v194 rc2038-01-07T22:42:17.000000+0100
b46230814357 2802=913+1889), directory fragments have n(v0
rc2032-04-29T16:39:38.000000+0200 b46204251164 428=9+419)
How critical are these? Everything seems to work normally.
Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx