Re: Ceph Cluster Taking An Awful Long Time To Rebalance

OK, so I turned autoscaling off for all five pools (the commands I used are below, after the status output), and the "ceph -s" output has not changed:

~~~

 cluster:
    id:     [REDACTED]
    health: HEALTH_WARN
            Reduced data availability: 256 pgs inactive, 256 pgs incomplete
            Degraded data redundancy: 12 pgs undersized

  services:
    mon: 1 daemons, quorum [REDACTED] (age 23h)
    mgr: [REDACTED](active, since 23h)
    osd: 7 osds: 7 up (since 22h), 7 in (since 22h); 32 remapped pgs

  data:
    pools:   5 pools, 288 pgs
    objects: 7 objects, 0 B
    usage:   7.1 GiB used, 38 TiB / 38 TiB avail
    pgs:     88.889% pgs not active
             6/21 objects misplaced (28.571%)
             256 creating+incomplete
             18  active+clean
             12  active+undersized+remapped
             2   active+clean+remapped

  progress:
    Rebalancing after osd.1 marked in (23h)
      [............................]
    PG autoscaler decreasing pool 1 PGs from 32 to 1 (21h)
      [............................]

~~~
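
For reference, this is roughly what I ran to turn the autoscaler off - the pool name is just a placeholder, repeated for each of the five pools:

~~~

# Disable the PG autoscaler on an existing pool (repeated per pool)
ceph osd pool set <pool-name> pg_autoscale_mode off

# Stop new pools from defaulting the autoscaler to "on"
ceph config set global osd_pool_default_pg_autoscale_mode off

# Confirm the per-pool autoscaler state
ceph osd pool autoscale-status

~~~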

Any ideas - or is this normal, i.e. does it usually take this long?

(I'm wondering whether I should just tear the cluster down and start again?)
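
Before I go that far: if more detail would help, I can post the output of something along these lines - the pg id is just a placeholder for one of the stuck ones:

~~~

# More detail on the inactive/incomplete warnings
ceph health detail

# List PGs stuck inactive
ceph pg dump_stuck inactive

# Pool settings (size, min_size, pg_num, crush rule)
ceph osd pool ls detail

# Query one of the stuck PGs directly
ceph pg <pgid> query

~~~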

Cheers

Matthew J

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



