Feature Request ceph -s recovery and resync estimated completion times


 



Hello,

I seem to be in degraded data redundancy mode most of the time, with a
lot of recovery I/O going on. I use ceph -s to get an idea of how long
before the cluster returns to active+clean. With smaller test setups I
could see 20-40% and would be active+clean within a short time, but as
I test with more data, even though I see a smaller percentage, recovery
seems to take forever, and I have no estimate in hours of when it will
complete. Below are some mdadm RAID outputs that I find very helpful:
the finish field tells me when to come back and recheck the status.

It would be nice if Ceph could provide a similar estimate of
completion time, based on the current instantaneous recovery rate.

    90774/1418378 objects misplaced (6.400%)
    Degraded data redundancy: 48414/1418378 objects degraded (3.413%),
    16 pgs unclean, 10 pgs degraded, 4 pgs undersized
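Such an estimate could even be approximated outside of Ceph by polling
the degraded-object counter. A minimal sketch in Python, assuming two
samples of the count (the first number reuses the 48414 from the status
output above; the 60-second interval and the second sample are made up
for illustration):

```python
# Hypothetical sketch: estimate time to active+clean by sampling the
# degraded-object count twice and extrapolating linearly.

def eta_seconds(prev_remaining, curr_remaining, interval_s):
    """Linear ETA from two samples taken interval_s apart.

    Returns None when no progress was made (rate <= 0), since no
    meaningful estimate is possible in that case.
    """
    rate = (prev_remaining - curr_remaining) / interval_s  # objects/sec
    if rate <= 0:
        return None
    return curr_remaining / rate

# 48414 degraded objects (from the status above) dropping to a
# made-up 46000 over a 60-second polling interval:
eta = eta_seconds(48414, 46000, 60)
print(f"estimated completion in {eta / 3600:.1f} hours")
```

A real implementation inside Ceph would presumably smooth the rate
over a longer window, since instantaneous recovery rates fluctuate.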



md0 : active raid1 sdc1[2] sdb1[3]
      2095040 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  0.4% (9600/2095040) finish=3.6min speed=9600K/sec



md0 : active raid1 sdb1[0] sdc1[1]
      2095040 blocks super 1.2 [2/2] [UU]
      [===>.................]  resync = 19.7% (414656/2095040) finish=0.2min speed=138218K/sec
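For reference, mdadm's finish field appears to be just the remaining
blocks divided by the current speed; the same arithmetic reproduces the
0.2min figure from the resync output above (blocks are 1K units):

```python
# Sanity check of mdadm's finish estimate: remaining blocks / speed.
total_blocks = 2095040   # total 1K blocks, from the md0 output above
done_blocks = 414656     # resync position at 19.7%
speed_kb_s = 138218      # K/sec as reported by mdadm

finish_min = (total_blocks - done_blocks) / speed_kb_s / 60
print(f"finish={finish_min:.1f}min")  # matches mdadm's finish=0.2min
```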


