Re: cephfs (fuse and kernel) HA

On 17-01-02 06:24, Lindsay Mathieson wrote:
> Hi all, I'm familiar with Ceph but out of touch on the CephFS specifics, so some quick questions:

> - CephFS requires an MDS for its metadata (file/dir structures, attributes, etc.)?

Yes.
> - It's active/passive, i.e. only one MDS can be active at a time, with a number of passive backup MDSes?

If we are limiting ourselves to stable features, then yes. Multiple active MDSes exist, but the feature is still experimental.
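For reference, the experimental mode is switched on per filesystem; a rough sketch of the CLI (the filesystem name "cephfs" is a placeholder, and the exact flags vary by release, so check your version's docs before trying this):

    # acknowledge the experimental feature, then raise the active-MDS count
    ceph fs set cephfs allow_multimds true --yes-i-really-mean-it
    ceph fs set cephfs max_mds 2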
> - The passive MDSes, are they atomically up to date with the active MDS? No lag?
The metadata is stored in the Ceph cluster itself, so there is no syncing between MDSes. A new MDS needs to go through a few phases before it becomes active.
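The takeover states can be watched from the CLI; a standby typically passes through replay, reconnect, and rejoin before reaching active. A minimal sketch:

    # show the current MDS map and daemon states
    ceph mds stat
    # during a failover you would see something like:
    #   up:replay -> up:reconnect -> up:rejoin -> up:active
    watch -n1 ceph mds stat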

> - How robust is the HA? If there is a catastrophic failure on the active MDS (e.g. power off), would active I/O on the CephFS be interrupted? Any risk of data or metadata loss (excluding async transactions)?

Power off is not a catastrophic failure :) Data traffic does not go through the MDS, so active I/O doesn't get interrupted, but clients may stall on metadata operations until a new MDS becomes active. Clients should retry all unconfirmed operations, so unless something goes wrong there should be no data or metadata loss.
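If the failover window matters for your workload, a standby can be run in standby-replay mode so it continuously tails the active MDS's journal and takes over faster. A minimal ceph.conf sketch, assuming a daemon named mds.b (the name is hypothetical; option names follow the pre-Luminous configuration style):

    [mds.b]
        # continuously replay rank 0's journal so takeover is quick
        mds standby replay = true
        mds standby for rank = 0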
> - Could the MDS DB be corrupted by the above?

I once managed to corrupt an MDS journal (it wasn't due to an MDS server failure), but I wasn't able to reproduce it; it may have been related to the environment it was running in.
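For what it's worth, the journal can be inspected and, if it comes to that, salvaged offline with cephfs-journal-tool; a rough sequence (the export filename is arbitrary, and the last two steps are destructive, so read the disaster-recovery docs for your release first):

    # read-only integrity check of the MDS journal
    cephfs-journal-tool journal inspect
    # always take a backup before attempting repair
    cephfs-journal-tool journal export backup.bin
    # salvage what metadata can be recovered, then reset the journal
    cephfs-journal-tool event recover_dentries summary
    cephfs-journal-tool journal reset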

> I guess some of the answers can only be subjective, but I'm looking for real-world experiences. I have been stress testing other systems with similar architectures and have been less than impressed by the results. Corruption of every file under active I/O is not a good outcome :( The use case is hosting VMs.

Did you consider Ceph RBD for VM hosting?
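RBD gives each VM a plain block device and bypasses the MDS entirely. A minimal sketch of wiring one up (pool name, image name, size, and PG count are all placeholders):

    # create a pool and a 20 GB image for the VM's disk
    ceph osd pool create vms 128
    rbd create vms/vm1-disk --size 20480
    # attach it directly from QEMU via librbd
    qemu-system-x86_64 -drive format=raw,file=rbd:vms/vm1-disk ...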

thanks,


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


