ceph stays degraded after crushmap rearrangement

Hi list,

I've rearranged my CRUSH map. Ceph was about 18% degraded and was recovering/rebalancing fine.

But now recovery has stalled and the degraded count keeps rising:

2013-01-05 17:35:40.906587 mon.0 [INF] pgmap v2211269: 7632 pgs: 7632 active+remapped; 152 GB data, 312 GB used, 5023 GB / 5336 GB avail; 22/79086 degraded (0.028%)

...

2013-01-05 17:37:50.142106 mon.0 [INF] pgmap v2211386: 7632 pgs: 7632 active+remapped; 152 GB data, 312 GB used, 5023 GB / 5336 GB avail; 24/79090 degraded (0.030%)

..

2013-01-05 17:40:35.292054 mon.0 [INF] pgmap v2211526: 7632 pgs: 7632 active+remapped; 152 GB data, 313 GB used, 5023 GB / 5336 GB avail; 32/79106 degraded (0.040%)

I'm on the current testing branch.

Greets,
Stefan
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html