Re: Multi-MDS Failover

On 04/27/2018 07:11 PM, Patrick Donnelly wrote:
> The answer is that there may be partial availability from
> the up:active ranks which may hand out capabilities for the subtrees
> they manage or no availability if that's not possible because it
> cannot obtain the necessary locks.

Additionally: if rank 0 is lost, the whole FS stands still (no new
client can mount the FS, no existing client can change directory, etc.).

My guess is that the root of a CephFS ("/", which is always served by rank
0) is needed in order to traverse/look up any of the top-level directories
(which can then be served by ranks 1-n).


Last year, we had quite a bit of trouble with an unstable CephFS (the MDSs
reliably and reproducibly crashed when we hit them with rsync over multi-TB
directories full of files well below 1 MB each) and ended up in lots of
situations where ranks (most of the time including 0) were down.

Fortunately, we could always get the FS back by unmounting it on all
clients and restarting all MDS daemons. The last of these instabilities
seems to have gone away with 12.2.3/12.2.4 (we're now running 12.2.5).
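For what it's worth, when bringing the MDS daemons back we found it handy
to wait until everything reports up:active before remounting clients. A
minimal Python sketch that polls "ceph mds stat", assuming the ceph CLI is
on PATH and can reach the monitors -- the string matching is deliberately
crude, adjust it to your release's output:

#!/usr/bin/env python3
# Sketch: poll "ceph mds stat" after restarting the MDS daemons and wait
# until no rank is still in a failed/recovering state, i.e. everything is
# up:active, before remounting the clients.
import subprocess
import time

BAD_STATES = ("failed", "damaged", "replay", "resolve", "rejoin")

while True:
    out = subprocess.run(["ceph", "mds", "stat"],
                         capture_output=True, text=True, check=True).stdout
    print(out.strip())
    if "up:active" in out and not any(s in out for s in BAD_STATES):
        print("all ranks look active, safe to remount clients")
        break
    time.sleep(10)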

Regards,
Daniel


