ceph durability calculation and test method

Hi all,
     I have some questions about the durability of Ceph. I am trying
to measure it, and I know it depends on the host and disk failure
probabilities, the failure detection time, when recovery is triggered,
and how long recovery takes. I use multiple replication, say k
replicas. If I have N hosts, R racks, and O OSDs per host, ignoring
the switches, how should I define the failure probabilities of a disk
and of a host? I think they should be independent and time-dependent.
I have googled this but found very little about it. I see that AWS
claims 99.999999999% durability; how is that figure derived? And can
I design some test method to prove the durability, or do I just have
to run the cluster long enough and collect statistics?
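
For concreteness, below is a rough sketch of the kind of back-of-the-envelope
calculation I have in mind. It is only a sketch under strong assumptions
(independent, constant-rate disk failures; host and switch failures ignored;
data lost only if all k replicas of a placement group fail within the
detection-plus-recovery window), and every numeric input plus the function
name annual_loss_probability is just a made-up placeholder, not anything
Ceph itself provides:

    #!/usr/bin/env python
    """Back-of-the-envelope durability estimate for a replicated pool.

    Assumptions (all mine, all simplifications): disks fail independently
    at a constant (exponential) rate, host failures are ignored, and data
    is lost only when all k replicas of a placement group fail inside the
    detection + recovery window. Example numbers are invented.
    """
    import math

    HOURS_PER_YEAR = 8766.0

    def annual_loss_probability(afr_disk, k, n_hosts, osds_per_host,
                                detect_hours, recover_hours, pgs):
        n_disks = n_hosts * osds_per_host
        lam = afr_disk / HOURS_PER_YEAR          # per-disk failure rate per hour
        window = detect_hours + recover_hours    # exposure window after a failure

        # Expected number of disk failures per year across the cluster.
        failures_per_year = n_disks * afr_disk

        # PGs that have a replica on the failed disk (roughly pgs * k / n_disks).
        pgs_at_risk = pgs * k / n_disks

        # Probability that, for one at-risk PG, the remaining k-1 replicas
        # also fail before recovery completes (independence assumed).
        p_pg_lost = (lam * window) ** (k - 1)

        # Expected data-loss events per year; for small values this is close
        # to the probability of losing any data in a year.
        return failures_per_year * pgs_at_risk * p_pg_lost

    if __name__ == "__main__":
        p = annual_loss_probability(afr_disk=0.04, k=3, n_hosts=10,
                                    osds_per_host=12, detect_hours=0.5,
                                    recover_hours=4.0, pgs=4096)
        print("P(data loss in one year) ~ %.3e" % p)
        print("durability ~ about %d nines" % int(-math.log10(p)))

Is this the right general shape of the calculation, or is there a better
model (e.g. a Markov chain over replica counts) that people use in practice?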
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


