maximum rebuild speed for erasure coding pool


 



Hello all,

I have a naive question about how recovery works and what the maximum
rebuild speed is for an erasure-coded pool. I did some searching, but
could not find any formal, detailed information about this.

For pool recovery, the way Ceph works (to my understanding) is: each
active OSD scrubs its drive, and if it finds any degraded PGs, it
tries to recover/rebuild them and distribute the recovered PGs to the
remaining OSDs?

For an erasure-coded pool, suppose I have 10 nodes, each with 10 x 6 TB
drives, so 100 drives in total. I create a 4+2 erasure pool with the
failure domain set to host/node. If one drive fails (assume the 6 TB is
fully used), what is the maximum speed the recovery process can reach?
Also assume the cluster network is 10 GbE and each disk has a maximum
sequential throughput of 200 MB/s.
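To make the question concrete, here is a rough back-of-envelope sketch of the two obvious ceilings, assuming a 4+2 profile must read k=4 surviving shards per reconstructed shard, that reads can spread evenly over the 99 surviving drives, and ignoring Ceph's recovery throttles (osd_max_backfills etc.). These are my own illustrative numbers, not measured Ceph behavior:

```python
# Back-of-envelope bounds for rebuilding one failed 6 TB drive
# in a 4+2 EC pool. All figures are illustrative assumptions.

TB = 10**12
lost_bytes = 6 * TB              # capacity lost with the failed drive
k = 4                            # data shards in the 4+2 profile

# Rebuilding a lost shard means reading k surviving shards,
# so total read traffic is k times the lost capacity.
read_bytes = k * lost_bytes      # 24 TB of reads

drive_bps = 200e6                # 200 MB/s sequential per drive
surviving_drives = 99

# Ceiling 1: if all recovered shards land on one replacement drive,
# its write speed bounds the rebuild.
write_limited_h = lost_bytes / drive_bps / 3600
print(f"write-limited (single target drive): ~{write_limited_h:.1f} h")

# Ceiling 2: if shards fan out across many OSDs, the aggregate read
# bandwidth (ideally spread over all survivors) bounds it instead.
agg_read_bps = surviving_drives * drive_bps       # ~19.8 GB/s
read_limited_h = read_bytes / agg_read_bps / 3600
print(f"read-limited (ideal spread): ~{read_limited_h:.2f} h")
```

In practice the per-node 10 GbE link (~1.25 GB/s) and Ceph's recovery throttles would likely sit between these two extremes, which is partly what I am asking about.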

I suppose that each OSD (or the primary OSD?) rebuilds a lost shard by
reading in the other related shards, reconstructing the missing one,
and then redistributing it to another OSD?
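To illustrate the read pattern I have in mind, here is a toy single-parity reconstruction. Ceph's default jerasure plugin uses Reed-Solomon coding (a 4+2 profile tolerates two losses), whereas plain XOR parity below only survives one loss, but the shape is the same: read the surviving shards, recompute the missing one.

```python
# Toy illustration of shard reconstruction: split an object into k=4
# data shards, keep one XOR parity shard, then rebuild a lost data
# shard from the survivors. (Real Ceph EC uses Reed-Solomon, not XOR.)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data = b"abcdefgh" * 4                    # 32-byte example object
k = 4
shard_len = len(data) // k
shards = [data[i * shard_len:(i + 1) * shard_len] for i in range(k)]

# Parity shard = XOR of all data shards.
parity = shards[0]
for s in shards[1:]:
    parity = xor_bytes(parity, s)

# Simulate losing shard 2: read the k-1 survivors plus parity
# and XOR them together to recover the missing shard.
survivors = [s for i, s in enumerate(shards) if i != 2]
rebuilt = parity
for s in survivors:
    rebuilt = xor_bytes(rebuilt, s)

assert rebuilt == shards[2]
print("rebuilt shard matches original")
```

The point of the toy is the read amplification: recovering one shard requires reading k surviving shards, which is why I expect rebuild traffic to be several times the lost capacity.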

Besides, during recovery, do the mgr, mds, and mon daemons also take
part in the work?

Thanks!

Best,

Feng
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


