Re: Ceph recovery

I just responded to this on the thread "Strange remap on host failure". I think that response covers your question.

On Mon, May 29, 2017, 4:10 PM Laszlo Budai <laszlo@xxxxxxxxxxxxxxxx> wrote:
Hello,

Can someone give me some directions on how Ceph recovery works?
Let's suppose we have a Ceph cluster with several nodes grouped into 3 racks (2 nodes/rack). The CRUSH map is configured to distribute each PG's replicas across OSDs in different racks.
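
Something along these lines, I believe (the rule name here is just an example, and the exact syntax may differ between releases):

  # "default" is the CRUSH root and "rack" the failure domain, so no two
  # replicas of a PG should end up in the same rack:
  ceph osd crush rule create-simple rack-rule default rack
  ceph osd crush rule dump rack-rule    # inspect the generated rule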

What happens when a node fails? Where can I read a description of the actions the Ceph cluster performs in case of a node failure?
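
My rough understanding so far: once heartbeats stop, the monitors mark the failed node's OSDs down; after mon_osd_down_out_interval (600 seconds by default, if I read the docs right) they are marked out, CRUSH remaps the affected PGs, and backfill copies the missing replicas onto OSDs in the surviving racks. Is that accurate? I have been watching the cluster with:

  ceph -w              # live event log: OSDs going down/out, PG state changes
  ceph -s              # summary: degraded objects and recovery/backfill progress
  ceph health detail   # lists degraded/undersized PGs and why
  ceph osd tree        # shows which OSDs are down and their rack placement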

Kind regards,
Laszlo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
