Re: VM storage and OSD Ceph failures


 



The VM read will hang until a replica gets promoted and the VM resends the read. In a healthy cluster with default settings this will take about 15 seconds.
-Greg
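[Editor's note: the roughly 15-second window Greg describes is governed by the OSD heartbeat settings. A minimal ceph.conf sketch of the relevant knobs follows; the values shown are the upstream defaults of that era, and the exact names and defaults should be checked against your Ceph version's documentation before relying on them.]

```ini
[osd]
; How often (seconds) OSDs ping their heartbeat peers.
osd heartbeat interval = 6
; How long (seconds) without a heartbeat reply before a
; peer OSD is reported down to the monitors.
osd heartbeat grace = 20

[mon]
; How long (seconds) a down OSD stays "in" before being
; marked out and data is re-replicated elsewhere.
mon osd down out interval = 300
```

Once the monitors mark the primary OSD down, a new OSD map is published, a surviving replica becomes the acting primary, and the client's RBD library retries the read against it; shortening the heartbeat grace trades faster failover for more false positives under load.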

On Tuesday, September 17, 2013, Gandalf Corvotempesta wrote:
Hi to all.
Let's assume a Ceph cluster used to store VM disk images.
VMs will be booted directly from the RBD.

What happens in case of an OSD failure, if the failed OSD is the
primary that the VM is reading from?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
Software Engineer #42 @ http://inktank.com | http://ceph.com
