Cephadm Quincy 17.2.5 always shows slow ops on all OSDs and ceph orch stuck

Hi all,

I'm running a Quincy 17.2.5 Ceph cluster with 3 nodes, each with 3 disks, and pools using replica size 3 / min_size 2. The cluster was running fine, but suddenly it can't read or write because of slow ops on all OSDs. I tried restarting the OSDs; after that, degraded and misplaced objects appeared and the slow ops came back. Any suggestions on how to solve this issue?
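
For context, this is roughly how the pools are configured and how I restarted the OSDs (the pool name below is just a placeholder; osd.2 is one of the OSDs that reports slow ops):

# ceph osd pool get <pool> size
# ceph osd pool get <pool> min_size
# ceph orch daemon restart osd.2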

# ceph -s
  cluster:
    id:     bb6590a9-155c-4830-aea7-ef4b5a098466
    health: HEALTH_ERR
            2 failed cephadm daemon(s)
            1/282019 objects unfound (0.000%)
            noout flag(s) set
            Reduced data availability: 79 pgs inactive, 47 pgs peering
            Possible data damage: 1 pg recovery_unfound
            Degraded data redundancy: 26340/838103 objects degraded (3.143%), 16 pgs degraded, 11 pgs undersized
            302 slow ops, oldest one blocked for 226773 sec, daemons [osd.2,osd.3,osd.4,osd.6,osd.7] have slow ops.

  services:
    mon: 3 daemons, quorum r2c2,r2c3,r2c1 (age 2h)
    mgr: r2c3(active, since 15m), standbys: r2c2, r2c1.ivwibd
    osd: 9 osds: 9 up (since 9m), 9 in (since 3d); 30 remapped pgs
         flags noout

  data:
    pools:   12 pools, 329 pgs
    objects: 282.02k objects, 1.3 TiB
    usage:   4.9 TiB used, 94 TiB / 99 TiB avail
    pgs:     24.924% pgs not active
             26340/838103 objects degraded (3.143%)
             5411/838103 objects misplaced (0.646%)
             1/282019 objects unfound (0.000%)
             240 active+clean
             31  peering
             23  activating
             16  remapped+peering
             7   activating+undersized+degraded+remapped
             2   activating+remapped
             2   active+undersized+degraded+remapped+backfilling
             2   active+recovering+degraded
             2   activating+degraded
             1   active+recovery_unfound+undersized+degraded+remapped
             1   activating+degraded+remapped
             1   active+clean+scrubbing+deep
             1   active+recovery_wait+undersized+degraded+remapped

  io:
    client:   12 KiB/s rd, 1.3 KiB/s wr, 7 op/s rd, 1 op/s wr
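
I can collect more detail if that helps. This is roughly what I planned to look at next (osd.2 is one of the daemons listed with slow ops above; <pgid> is a placeholder I would take from "ceph health detail"; the "ceph daemon" command needs to run inside a cephadm shell on the host that carries osd.2):

# ceph health detail
# ceph orch ps
# ceph pg dump_stuck inactive
# ceph pg <pgid> query
# cephadm shell -- ceph daemon osd.2 dump_ops_in_flight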

Thanks,
Pahrial
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


