PGs inconsistent

Dear folks,

I have a Ceph cluster with replication 2 (size=2), 3 nodes with 3 OSDs each, running Luminous 12.2.12. A few days ago one OSD went down (the disk itself is still fine) after a RocksDB crash. I tried to restart that OSD, but it fails to start. I then tried to rebalance, and now the cluster is reporting a large number of inconsistent PGs.

What can I do to get the cluster healthy again? My ceph -s output and the commands I was planning to try next are below.

Thanks a lot for helping me out.

Samuel 

**********************************************************************************
# ceph -s
  cluster:
    id:     289e3afa-f188-49b0-9bea-1ab57cc2beb8
    health: HEALTH_ERR
            pauserd,pausewr,noout flag(s) set
            191444 scrub errors
            Possible data damage: 376 pgs inconsistent
 
  services:
    mon: 3 daemons, quorum horeb71,horeb72,horeb73
    mgr: horeb73(active), standbys: horeb71, horeb72
    osd: 9 osds: 8 up, 8 in
         flags pauserd,pausewr,noout
 
  data:
    pools:   1 pools, 1024 pgs
    objects: 524.29k objects, 1.99TiB
    usage:   3.67TiB used, 2.58TiB / 6.25TiB avail
    pgs:     645 active+clean
             376 active+clean+inconsistent
             3   active+clean+scrubbing+deep
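
In case it is useful, these are the commands I was planning to try next. I have not run the repair yet, and <pgid> is just a placeholder for a real PG ID taken from the health detail output.

List the PGs that are flagged inconsistent:
# ceph health detail | grep inconsistent

Show which object copies disagree inside one of those PGs:
# rados list-inconsistent-obj <pgid> --format=json-pretty

Ask the primary OSD to repair that PG:
# ceph pg repair <pgid>

With size=2 I am not sure the repair will always pick the good copy, which is why I wanted to ask here before running anything.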
