Hi!
We have read https://docs.ceph.com/en/latest/man/8/mount.ceph, and would
like to see our expectations confirmed (or denied) here. :-)
Suppose we build a three-node cluster (three monitors, three MDSs, etc.)
in order to export a cephfs to multiple client nodes.
In the fstab on the (RHEL8) clients (web application servers), we will
mount the cephfs like this:
ceph1,ceph2,ceph3:/ /mnt/ha-pool/ ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime 0 2
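(Before committing this to fstab we would probably test the mount by hand
first, roughly like the following, assuming the admin secret file is
already in place on the client:

mount -t ceph ceph1,ceph2,ceph3:/ /mnt/ha-pool \
  -o name=admin,secretfile=/etc/ceph/admin.secret,noatime
mount | grep ceph      # confirm the mount and its options
df -h /mnt/ha-pool     # quick check that the filesystem responds

We are also considering adding _netdev to the fstab options, so that
systemd waits for the network before attempting the mount.)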
We expect that the RHEL clients will then be able to use (read/write) a
shared /mnt/ha-pool directory simultaneously.
Our question: how highly available can we expect this setup to be in
practice? We are looking for some hands-on experience here.
Specifically: can we reboot any of the three involved ceph servers
without the clients noticing anything? Or will there be certain timeouts
involved, during which /mnt/ha-pool/ will appear unresponsive, and only
*after* a timeout will the client switch to another monitor node and
/mnt/ha-pool/ respond again?
Of course we hope the answer is: in such a setup, cephfs clients should
not notice a reboot at all. :-)
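For what it's worth, our rough plan for checking this ourselves would be
something like the sketch below (the test file name is just an example):

# on a client: keep some I/O running against the mount during the reboot
while true; do date >> /mnt/ha-pool/failover-test.txt; sleep 1; done

# on one of the remaining ceph nodes: watch cluster and MDS state
ceph -s
ceph fs status

If the loop stalls for a while and then resumes, that would show us how
long the failover timeouts are in practice.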
All the best!
MJ