Re: Multiple CephFS creation


 



You already have the correct option, there's not much to it:

mount -t ceph mon1,mon2,mon3:/<path>/ -o name=<client>,secretfile=<keyring_file>,mds_namespace=<otherfs> /<mountpoint>/

If your caps and path restrictions are correct this should work.
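For completeness, a sketch of setting up the caps for a second filesystem. All names (`otherfs`, `client.jarett`, paths) are placeholders; `ceph fs authorize` scopes the generated caps to one filesystem. Note that the kernel client's `secretfile=` option expects only the base64 secret, not a full keyring:

```shell
# Generate a keyring whose caps are restricted to the filesystem 'otherfs'
# (placeholder names throughout; run with admin privileges).
ceph fs authorize otherfs client.jarett / rw > /etc/ceph/ceph.client.jarett.keyring

# Extract just the base64 secret for the kernel client's secretfile option:
ceph auth get-key client.jarett > /etc/ceph/jarett.secret

# Mount the second filesystem explicitly:
mount -t ceph mon1,mon2,mon3:/ /mnt/otherfs \
  -o name=jarett,secretfile=/etc/ceph/jarett.secret,mds_namespace=otherfs
```

This only works against a running cluster, so treat it as a template rather than something to paste verbatim.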


Quoting Jarett DeAngelis <jarett@xxxxxxxxxxxx>:

Thanks. I’m now trying to figure out how to get Proxmox to pass the “-o mds_namespace=otherfs” option to its mounting of the filesystem, but that’s a bit out of scope for this list (though if anyone has done this please let me know!).
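For what it's worth, recent Proxmox VE releases expose a per-storage filesystem selector in `/etc/pve/storage.cfg`. The option name `fs-name` and the storage id below are assumptions to check against your Proxmox version; older releases may not support it at all:

```
cephfs: otherfs-store
        path /mnt/pve/otherfs-store
        content backup,iso,vztmpl
        monhost mon1 mon2 mon3
        username jarett
        fs-name otherfs
```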

On Mar 31, 2020, at 2:15 PM, Nathan Fish <lordcirth@xxxxxxxxx> wrote:

Yes, standby (as opposed to standby-replay) MDS daemons form a shared pool
from which the mons will promote one to the required role.

On Tue, Mar 31, 2020 at 12:52 PM Jarett DeAngelis <jarett@xxxxxxxxxxxx> wrote:

So, for the record, this doesn’t appear to work in Nautilus.



Does this mean that I should just count on my standby MDS to “step in” when a new FS is created?

On Mar 31, 2020, at 3:19 AM, Eugen Block <eblock@xxxxxx> wrote:

This has changed in Octopus. The above config variables are removed.
Instead, follow this procedure:

https://docs.ceph.com/docs/octopus/cephfs/standby/#configuring-mds-file-system-affinity

Thanks for the clarification. IIRC I had trouble applying the mds_standby settings in Nautilus already, but I haven't verified that yet, so I didn't mention it in my response. I'll take another look at it.


Quoting Patrick Donnelly <pdonnell@xxxxxxxxxx>:

On Mon, Mar 30, 2020 at 11:57 PM Eugen Block <eblock@xxxxxx> wrote:
For the standby daemon you have to be aware of this:

By default, if none of these settings are used, all MDS daemons which do not hold a rank will be used as 'standbys' for any rank.
[...]
When a daemon has entered the standby-replay state, it will only be used as a standby for the rank that it is following. If another rank fails, this standby-replay daemon will not be used as a replacement, even if no other standbys are available.

Some of the settings mentioned there are, for example:

mds_standby_for_rank
mds_standby_for_name
mds_standby_for_fscid

The easiest way is to have one standby daemon per CephFS and let them
handle the failover.
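On Nautilus these settings go in `ceph.conf` per daemon; a sketch with placeholder daemon names and FSCID (removed in Octopus, see below):

```
[mds.a]
    # Only stand by for ranks of the filesystem with this FSCID (placeholder).
    mds_standby_for_fscid = 2

[mds.b]
    # Only stand by for the named daemon, and follow its journal (standby-replay).
    mds_standby_for_name = mds.a
    mds_standby_replay = true
```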

This has changed in Octopus. The above config variables are removed.
Instead, follow this procedure:

https://docs.ceph.com/docs/octopus/cephfs/standby/#configuring-mds-file-system-affinity
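Per that page, the Octopus replacement is the `mds_join_fs` setting, which pins a standby daemon's affinity to one filesystem (daemon name `mds.a` and fs name `otherfs` below are placeholders):

```shell
# Pin a standby MDS to a filesystem; the mons will prefer it when
# filling that filesystem's ranks (placeholder names).
ceph config set mds.a mds_join_fs otherfs

# Inspect which daemons hold ranks and which are standby:
ceph fs status
```

Like the rest of the thread's commands, this needs a live cluster to run against.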

--
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx








