ceph durability calculation and test method

Hi all,

     I have some questions about the durability of Ceph. I am trying
to measure it. I know it should depend on the disk and host failure
probabilities, the failure detection time, when recovery is triggered,
and the recovery time. I run Ceph with multiple replication, say k
replicas. If I have N hosts, R racks, and O OSDs per host, ignoring
the switches, how should I define the failure probabilities of a disk
and of a host? I think they should be independent and time-dependent.
I googled this but found very little about it. I see that AWS claims
99.999999999% durability; how is that figure derived? And can I design
a test method to prove the durability, or should I just let the
cluster run long enough and collect statistics?
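As a starting point, here is a rough back-of-the-envelope sketch of the
kind of calculation I have in mind. It is only a toy model under my own
assumptions: independent, exponentially distributed disk failures with a
given AFR, a recovery window that covers detection plus re-replication,
k-way replication where a placement group is lost only if the remaining
replicas fail before recovery of the first failure completes, and no
host/rack correlation. The names (pg_durability, afr, recover_hours,
replicas) are just my own illustration, not anything from Ceph itself.

    import math

    HOURS_PER_YEAR = 24 * 365

    def pg_durability(afr, recover_hours, replicas):
        """Rough per-placement-group annual durability estimate.

        Simplifying assumptions:
          - disk failures are independent and exponential, with annual
            failure rate `afr` (e.g. 0.04 for a 4% AFR);
          - `recover_hours` includes failure detection plus re-replication;
          - a PG is lost only if all remaining replicas fail before
            recovery of the first failure completes;
          - host/rack and other correlated failures are ignored.
        """
        lam = afr / HOURS_PER_YEAR                   # per-hour failure rate
        # Probability that one specific surviving disk fails during the
        # recovery window.
        p_window = 1.0 - math.exp(-lam * recover_hours)
        # Expected first failures per year among the PG's `replicas` disks.
        first_failures_per_year = replicas * afr
        # Annual probability that a first failure escalates into data loss.
        p_loss_year = first_failures_per_year * p_window ** (replicas - 1)
        return 1.0 - p_loss_year

    def nines(durability):
        """Express durability as a number of nines, AWS-style."""
        return -math.log10(1.0 - durability)

    if __name__ == "__main__":
        d = pg_durability(afr=0.04, recover_hours=24, replicas=3)
        print("per-PG annual durability: %.12f (~%.1f nines)" % (d, nines(d)))
        # Whole-cluster data-loss probability then scales roughly with the
        # number of independent placement groups / copysets, so cluster-wide
        # durability is lower than this per-PG figure.

Does this kind of model make sense, and how should host and rack failure
terms be folded in?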