scrub error

Hi!

ceph version 13.2.4 (b10be4d44915a4d78a8e06aa31919e74927b142e) mimic (stable)
BlueStore on all OSDs.

I got a cluster error this morning:

HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
    pg 16.1d5 is active+clean+inconsistent, acting [88,82,62]
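
(As far as I understand, the objects flagged by the scrub can be listed with something like
rados list-inconsistent-obj 16.1d5 --format=json-pretty
though I am not sure it shows anything useful for a pure stat mismatch.)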

In the osd.88 log file I see:

2019-02-25 09:53:06.595 7fc579924700 -1 log_channel(cluster) log [ERR] : 16.1d5 scrub : stat mismatch, got 1645/1644 objects, 0/0 clones, 804/804 dirty, 0/0 omap, 0/0 pinned, 5/4 hit_set_archive, 479/479 whiteouts, 4868735734/4868735611 bytes, 0/0 manifest objects, 758/635 hit_set_archive bytes.
2019-02-25 09:53:06.595 7fc579924700 -1 log_channel(cluster) log [ERR] : 16.1d5 scrub 1 errors
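
(I suppose the error could also be re-confirmed by triggering a deep scrub of that PG by hand, something like
ceph pg deep-scrub 16.1d5
before attempting a repair.)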

OK. Let's do the repair:
ceph pg repair 16.1d5
instructing pg 16.1d5 on osd.88 to repair

Ceph repaired it successfully, and in the log I see:

2019-02-25 12:19:00.481 7fc57591c700  0 log_channel(cluster) log [DBG] : 16.1d5 repair ok, 0 fixed
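
(To double-check afterwards I assume something like
ceph health detail
ceph pg 16.1d5 query
is the way to go; the query should at least show the scrub stamps and state for that PG.)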

Two questions:
1. Why did the PG break in the first place?
2. Is "repair ok, 0 fixed" normal?

Thanks in advance for the answers and help.

WBR,
    Fyodor.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


