pg inactive+remapped

Hi,

I don't understand why my Global Recovery Event never finishes...
I have 3 hosts, and all OSDs and hosts are up. My pools use 3x replication (size=3).

# ceph status
  cluster:
    id:     0a77af8a-414c-11ec-908a-005056b4f234
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive
            Degraded data redundancy: 1/1512 objects degraded (0.066%), 1 pg degraded, 1 pg undersized

  services:
    mon: 3 daemons, quorum preprod-ceph1-mon1,preprod-ceph1-mon3,preprod-ceph1-mon2 (age 2d)
    mgr: preprod-ceph1-mon3.ssvflc(active, since 3d), standbys: preprod-ceph1-mon1.giaate, preprod-ceph1-mon2.yducxr
    osd: 12 osds: 12 up (since 28m), 12 in (since 29m); 1 remapped pgs

  data:
    pools:   2 pools, 742 pgs
    objects: 504 objects, 1.8 GiB
    usage:   4.4 TiB used, 87 TiB / 92 TiB avail
    pgs:     0.135% pgs not active
             1/1512 objects degraded (0.066%)
             741 active+clean
             1   activating+undersized+degraded+remapped

  io:
    client:   0 B/s rd, 51 KiB/s wr, 2 op/s rd, 2 op/s wr

  progress:
    Global Recovery Event (39m)
      [===========================.] (remaining: 3s)
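For anyone debugging the same state, a minimal sketch of the standard Ceph commands that should pinpoint the stuck PG and show why it cannot activate (the pg id 2.xx below is a placeholder; the real id is printed by the first two commands):

# ceph health detail
# ceph pg dump_stuck inactive
# ceph pg ls remapped
# ceph pg 2.xx query
# ceph osd tree

"ceph pg 2.xx query" reports the PG's up and acting OSD sets plus its peering state, and "ceph osd tree" confirms that CRUSH can actually place three replicas across the three hosts.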