Re: How to fix mon scrub errors?


 



Hi Burkhard,

Quoting Burkhard Linke <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>:
> Hi,


> Since the upgrade to Luminous 12.2.2, the mons have been complaining about
> scrub errors:


> 2017-12-13 08:49:27.169184 mon.ceph-storage-03 [ERR] scrub mismatch

Today two such messages turned up here as well, in a cluster that was upgraded to 12.2.2 over the weekend.

--- cut here ---
2017-12-19 09:28:29.180583 7fe171845700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=4023984760})
2017-12-19 09:28:29.183685 7fe171845700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=1698072116})
2017-12-19 09:28:29.186730 7fe171845700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=3505522493})
2017-12-19 09:28:29.189709 7fe171845700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2: ScrubResult(keys {logm=100} crc {logm=3854110003})
2017-12-19 09:28:29.192081 7fe171845700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2: ScrubResult(keys {logm=85,mds_health=15} crc {logm=239151592,mds_health=1391313747})
2017-12-19 09:28:29.193563 7fe171845700 -1 log_channel(cluster) log [ERR] : scrub mismatch
2017-12-19 09:28:29.193596 7fe171845700 -1 log_channel(cluster) log [ERR] : mon.0 ScrubResult(keys {mds_health=7,mds_metadata=1,mdsmap=92} crc {mds_health=604545522,mds_metadata=3932958966,mdsmap=1333403161})
2017-12-19 09:28:29.193615 7fe171845700 -1 log_channel(cluster) log [ERR] : mon.1 ScrubResult(keys {mds_health=8,mds_metadata=1,mdsmap=91} crc {mds_health=1003932403,mds_metadata=3932958966,mdsmap=1897035448})
2017-12-19 09:28:29.193638 7fe171845700 -1 log_channel(cluster) log [ERR] : scrub mismatch
2017-12-19 09:28:29.193657 7fe171845700 -1 log_channel(cluster) log [ERR] : mon.0 ScrubResult(keys {mds_health=7,mds_metadata=1,mdsmap=92} crc {mds_health=604545522,mds_metadata=3932958966,mdsmap=1333403161})
2017-12-19 09:28:29.193684 7fe171845700 -1 log_channel(cluster) log [ERR] : mon.2 ScrubResult(keys {mds_health=8,mds_metadata=1,mdsmap=91} crc {mds_health=1003932403,mds_metadata=3932958966,mdsmap=1897035448})
2017-12-19 09:28:29.194957 7fe171845700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2: ScrubResult(keys {mdsmap=100} crc {mdsmap=3440145783})
2017-12-19 09:28:29.196308 7fe171845700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2: ScrubResult(keys {mdsmap=100} crc {mdsmap=1425524862})
2017-12-19 09:28:29.197593 7fe171845700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2: ScrubResult(keys {mdsmap=100} crc {mdsmap=3092285774})
2017-12-19 09:28:29.198871 7fe171845700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2: ScrubResult(keys {mdsmap=100} crc {mdsmap=1144015866})
2017-12-19 09:28:29.200207 7fe171845700 0 log_channel(cluster) log [DBG] : scrub ok on 0,1,2: ScrubResult(keys {mdsmap=100} crc {mdsmap=3585539515})
--- cut here ---
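For reference, an excerpt like the one above can be pulled straight from the mon log with something along these lines (the log path and mon id are only examples for a default deployment, adjust them to your setup):

  grep -E 'scrub mismatch|ScrubResult' /var/log/ceph/ceph-mon.ceph-storage-01.log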

The cluster state remained unchanged despite these scrub mismatch entries in the mon log, and they appeared in the log of only a single mon. I also checked the MDS log files, but found nothing there (and especially nothing around that time).
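In case it helps anyone comparing notes, these are roughly the checks behind that statement (mon host names and log paths below are only examples):

  ceph -s                 # overall cluster state - unchanged here
  ceph health detail      # no scrub-related warnings showed up
  # check whether the other mons logged the same mismatch (run on each mon host):
  grep -c 'scrub mismatch' /var/log/ceph/ceph-mon.*.log
  # and look for anything in the MDS logs around the same timestamp:
  grep '2017-12-19 09:28' /var/log/ceph/ceph-mds.*.log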

> These errors might have been caused by problems setting up multi-MDS
> after the Luminous upgrade.

We have only a single active MDS plus one standby, so maybe it's something different here.
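To double-check the MDS layout, something like the following should show it (the file system name "cephfs" is only an example; "ceph fs ls" lists the actual names):

  ceph fs status                      # active and standby MDS daemons
  ceph mds stat                       # compact MDS map summary
  ceph fs get cephfs | grep max_mds   # max_mds 1 means a single active MDS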

Regards,
J

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


