On Wed, Mar 7, 2018 at 2:29 PM, John Spray <jspray@xxxxxxxxxx> wrote:
> On Wed, Mar 7, 2018 at 10:11 AM, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>> Hi all,
>>
>> What is the purpose of
>>
>>    ceph mds set max_mds <int>
>>
>> ?
>>
>> We just used that by mistake on a CephFS cluster when attempting to
>> decrease from 2 to 1 active MDSs.
>>
>> The correct command to do this is of course
>>
>>    ceph fs set <fsname> max_mds <int>
>>
>> So, is `ceph mds set max_mds` useful for something? If not, should it
>> be removed from the CLI?
>
> It's the legacy version of the command from before we had multiple
> filesystems. Those commands are marked as obsolete internally so that
> they're not included in the --help output,

Ahhh! It is indeed omitted from --help, but I hadn't noticed, because
the command is still rather helpful if you go ahead and run it:

   # ceph mds set
   Invalid command: missing required parameter var(max_mds|max_file_size|allow_new_snaps|inline_data|allow_multimds|allow_dirfrags)
   mds set max_mds|max_file_size|allow_new_snaps|inline_data|allow_multimds|allow_dirfrags <val> {<confirm>} : set mds parameter <var> to <val>
   Error EINVAL: invalid command

I suppose we just need a new generation of operators who would never
even try these old deprecated commands ;)

> but they're still handled
> (applied to the "default" filesystem) if called.

Hmm... does it still apply if we never set the default fs (though we
only have one)? (And how do we even see/get the default fs?)

What happened in our case is that I ran `ceph mds set max_mds 1` and
then deactivated rank 2. This caused some sort of outage which
deadlocked the MDSs (they recovered after restarting). I assume the
outage happened because I deactivated rank 2 while we still had
max_mds=2 at the fs scope (and we had no standbys -- due to the
v12.2.2->4 upgrade breakage).

Thanks John!

Dan

> The multi-fs stuff went in for Jewel, so maybe we should think about
> removing the old commands in Mimic: any thoughts, Patrick?
>
> John
>
>> Cheers,
>> Dan
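
For anyone else wondering about the "default" filesystem question
above: it should be inspectable with the standard CLI. A sketch (hedged;
I believe `fs set-default` is the relevant command, and <fsname> is a
placeholder):

   # list the filesystems in the cluster
   ceph fs ls

   # the fs map records which fs legacy commands and clients target
   ceph fs dump | grep legacy     # prints "legacy client fscid: ..."

   # explicitly choose the filesystem that legacy commands apply to
   ceph fs set-default <fsname>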
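
And the shrink procedure Dan describes, done correctly at the fs scope
on a Luminous cluster, would look roughly like this (a sketch, assuming
a filesystem named `cephfs` going from two active MDSs, ranks 0 and 1,
down to one):

   # first lower the target number of active MDS daemons at the fs scope
   ceph fs set cephfs max_mds 1

   # then stop the surplus rank (ranks are 0-based, so rank 1 is the
   # second active MDS); on Luminous this is done with `mds deactivate`
   ceph mds deactivate cephfs:1

   # verify that only rank 0 remains active
   ceph fs get cephfs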