Re: calculating the maximum number of disk and node failures that can be handled by a cluster without data loss


If you are using the default rule set (which I think has min_size 2),
you can sustain 1-4 disk failures or one host failure without any
impact on clients.

The reason the disk count varies so wildly is placement: the default
rule keeps each replica on a different host, so all four disks in one
host can fail and every PG still has two copies elsewhere, whereas two
disks failing in two different hosts can already leave some PG with
only one copy.

On top of that you can lose up to another 4 disks (again, all within
one host) or 1 more host without data loss, but I/O will block until
Ceph can replicate at least one more copy (assuming the min_size of 2
stated above).
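
If you want to work the numbers out yourself, the arithmetic is just
the pool's size/min_size measured against the host failure domain.
Here is a rough sketch in plain Python (the host and disk counts are
taken from your description, and the min_size of 2 is my assumption,
so check your real pool settings first with
"ceph osd pool get <pool> size" and "ceph osd pool get <pool> min_size"):

    # Rough failure-tolerance arithmetic for a replicated Ceph pool whose
    # CRUSH rule puts each replica on a different host (the default).
    # All the numbers below are assumptions taken from this thread.
    hosts = 4            # nodes in the cluster
    osds_per_host = 4    # data disks per node
    size = 3             # pool replica count ("default replica 3")
    min_size = 2         # copies required before Ceph serves I/O (assumed)

    # The worst case is counted in failed hosts, because each PG keeps
    # its copies on `size` different hosts.

    # Keep at least min_size copies -> clients never notice.
    hosts_no_impact = size - min_size
    disks_no_impact = hosts_no_impact * osds_per_host

    # Keep at least 1 copy -> no data loss, but PGs that drop below
    # min_size block I/O until recovery makes more copies.
    hosts_no_loss = size - 1
    disks_no_loss = hosts_no_loss * osds_per_host

    print(f"host failures, no client impact : {hosts_no_impact}")
    print(f"disk failures, no client impact : up to {disks_no_impact}"
          f" (only if all in {hosts_no_impact} host)")
    print(f"host failures, no data loss     : {hosts_no_loss}")
    print(f"disk failures, no data loss     : up to {disks_no_loss}"
          f" (only if confined to {hosts_no_loss} hosts)")

    # Note: this ignores recovery. If Ceph has time to re-replicate
    # between failures, the cluster can survive more than this in
    # sequence.

Treat those as worst-case bounds for failures that happen before
recovery finishes; give Ceph time to backfill and it can ride out more
in sequence.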
----------------
Robert LeBlanc
GPG Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Tue, Jun 9, 2015 at 9:53 AM, kevin parrikar wrote:
> I have a 4 node cluster, each node with 5 disks (4 OSDs and 1 operating
> system disk; 3 monitor processes also run in the cluster), with default
> replica 3.
>
> Total OSD disks : 16
> Total Nodes : 4
>
> How can I calculate:
>
> The maximum number of disk failures my cluster can handle without any
> impact on current data and new writes.
> The maximum number of node failures my cluster can handle without any
> impact on current data and new writes.
>
> Thanks for any help
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



