What I have been doing with CephFS is to have a number of hosts export the same CephFS mount points, e.g. cephfs01:/cephfs/home, cephfs02:/cephfs/home, and so on. I then put all of those hosts under a common DNS A record, e.g. "cephfs-nfs", so the name resolves to every host exporting the share. On the clients I use autofs to mount the share on demand, with the mount source being "cephfs-nfs:/cephfs/home". Autofs will automatically pick one of the hosts to mount from, and if that host becomes unavailable autofs will remount using one of the other hosts in the A record. If you wanted to, you could get fancier with the automount maps and list the hosts individually with priorities, but the above is
simple and seems to work well.

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Smith, Eric

We did this with RBD, pacemaker, and corosync without issue. I'm not sure about CephFS, though; you might have to use something like sanlock.

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx>
on behalf of nigel davies <nigdav007@xxxxxxxxx>

Hey all,
Can anyone advise on how I can do this? I have set up a test Ceph cluster: a 3-OSD system and two NFS servers. I want to set up the two NFS servers as a failover pair, so that if one fails the other starts up. I have tried a few ways and keep getting stuck; any advice would be gratefully received.
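The round-robin A record plus autofs approach described at the top of the thread might look roughly like this on a client. The mount point, map file, and mount options here are illustrative assumptions, not taken from the thread; only the "cephfs-nfs", cephfs01, and cephfs02 names come from the reply:

```
# /etc/auto.master -- add an indirect map for the Ceph-backed NFS shares
/ceph  /etc/auto.ceph  --timeout=60

# /etc/auto.ceph -- simplest form: let the round-robin "cephfs-nfs"
# A record pick one of the exporting hosts; per the reply above, autofs
# will pick another address when it remounts after a host goes away
home  -fstype=nfs,hard  cephfs-nfs:/cephfs/home

# Fancier form the reply mentions: list the hosts individually as
# replicated servers with weights (lower weight is preferred)
#home  -fstype=nfs,hard  cephfs01(1),cephfs02(2):/cephfs/home
```

With something like this in place, clients mount /ceph/home on first access, and autofs handles server reselection when the share is remounted.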
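The RBD + pacemaker + corosync setup Eric mentions is not detailed in the thread. A minimal sketch of the usual shape of such a cluster, a floating IP grouped with the NFS service so they fail over together, might be the following; the resource names and the 192.0.2.10 address are invented for illustration:

```shell
# Hypothetical pcs commands -- resource names and the VIP are examples only
pcs resource create nfs_vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24 op monitor interval=10s
pcs resource create nfs_daemon systemd:nfs-server \
    op monitor interval=30s
# Keep the VIP and the NFS server together and fail them over as a unit
pcs resource group add nfs_group nfs_vip nfs_daemon
```

A real setup would also need the backing storage (e.g. an RBD mapping and filesystem) managed as cluster resources ordered before the NFS server, which is beyond what the thread describes.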
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com