Local Device Health PG inconsistent

Trying to narrow down a strange issue with the single PG in the device_health_metrics pool, which was created when I enabled the 'diskprediction_local' module in the ceph-mgr. The PG keeps getting flagged as inconsistent, but I never see any inconsistent objects in it.

$ ceph health detail
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
    pg 30.0 is active+clean+inconsistent, acting [128,12,183]

$ rados list-inconsistent-pg device_health_metrics
["30.0"]

$ rados list-inconsistent-obj 30.0 | jq
{
  "epoch": 172979,
  "inconsistents": []
}
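
One thing I can still try, in case the recorded inconsistency is just stale, is re-running a deep scrub on that PG to regenerate the list, along the lines of:

$ ceph pg deep-scrub 30.0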

This is the most recent log message from osd.128, from the last deep scrub:
2019-09-12 18:07:19.436 7f977744a700 -1 log_channel(cluster) log [ERR] : 30.0 deep-scrub : stat mismatch, got 237/238 objects, 0/0 clones, 237/238 dirty, 237/238 omap, 0/0 pinned, 0/0 hit_set_archive, 0/0 whiteouts, 0/0 bytes, 0/0 manifest objects, 0/0 hit_set_archive bytes.
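
For what it's worth, my understanding (not confirmed) is that a stat mismatch with no per-object errors is usually just the PG's stat summary being out of date, and asking the primary to repair the PG should rebuild those stats. A minimal sketch of what I was planning to try first, assuming repair is safe here:

$ ceph pg repair 30.0
$ ceph health detail    # re-check once the repair has finished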

Here is a pg query on the one PG:
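(For reference, I gathered it with something along these lines; I've left the full JSON out of this mail:)

$ ceph pg 30.0 query | jq .info.stats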

The data I have collected hasn't been useful at all, and I don't particularly care if I lose it. Would it be feasible (i.e. no bad effects) to just disable the disk prediction module, delete the pool, and start over, letting it create a new pool for itself?
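
In case it helps to be concrete, this is roughly what I had in mind; the assumption that re-enabling the module recreates the pool is mine, so treat it as a sketch rather than a tested procedure:

$ ceph mgr module disable diskprediction_local
$ ceph config set mon mon_allow_pool_delete true
$ ceph osd pool delete device_health_metrics device_health_metrics --yes-i-really-really-mean-it
$ ceph config set mon mon_allow_pool_delete false
$ ceph mgr module enable diskprediction_local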

Thanks,

Reed


