Re: cephfs flags question

On Fri, Dec 18, 2020 at 6:28 AM Stefan Kooman <stefan@xxxxxx> wrote:
> >> I have searched through the documentation but I don't see anything
> >> related. It's also not described / suggested in the part about upgrading
> >> the MDS cluster (IMHO that would be a logical place) [1].
> >
> > You're the first person I'm aware of asking for this. :)
>
> Somehow I'm not surprised :-). But on the other hand, I am. I will try to
> explain this. Maybe this works best with an example.
>
> Nautilus 14.2.4 cluster here (upgraded from luminous, mimic).
>
> relevant part of ceph -s:
>
>   mds: cephfs:1 {0=mds2=up:active} 1 up:standby-replay
>
> ^^ Two MDSes in this cluster: one active and one standby-replay.
>
> ceph fs get cephfs |grep flags
> flags   1e
>
> Let's try to enable standby replay support:
>
> ceph fs set cephfs allow_standby_replay true
>
> That worked, did flags change?
>
> ceph fs get cephfs |grep flags
> flags   3e
>
> Yes! But why?

Well, that's interesting. Unfortunately, I don't have an explanation.
You upgraded the MDSes too, right? The only scenario I can think of
that could cause this is that the MDSes were never restarted/upgraded
to Nautilus.
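
That said, the flag change itself is mechanical: 0x3e XOR 0x1e == 0x20,
i.e. only bit 5 flipped when you enabled allow_standby_replay. Here is a
minimal decoding sketch (Python), assuming the CEPH_MDSMAP_* bit
positions from the Ceph source around Nautilus (double-check them in
your version's tree before relying on this):

    # Assumed bit positions (CEPH_MDSMAP_*); verify against your Ceph version.
    MDSMAP_FLAGS = {
        1 << 0: "NOT_JOINABLE",
        1 << 1: "ALLOW_SNAPS",
        1 << 2: "ALLOW_MULTIMDS (legacy)",
        1 << 3: "ALLOW_DIRFRAGS (legacy)",
        1 << 4: "ALLOW_MULTIMDS_SNAPS",
        1 << 5: "ALLOW_STANDBY_REPLAY",
    }

    def decode(flags):
        """Name every flag bit set in the MDSMap flags field."""
        return [name for bit, name in MDSMAP_FLAGS.items() if flags & bit]

    print(decode(0x1e))  # your "before" value
    print(decode(0x3e))  # your "after" value: same bits plus standby-replay

Under those assumptions, 0x1e is snapshots plus the legacy multi-MDS and
dirfrag bits plus ALLOW_MULTIMDS_SNAPS, and 0x3e adds only the
standby-replay bit.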

> I would expect the support for standby replay to have been
> enabled already. How else would it work even without setting this fs
> feature? But apparently it does work, and does not need this feature to
> be enabled like this. And that might explain why nobody ever wondered how
> to change "ceph fs flags" in the first place. Is that correct?
>
> At this point I ask myself the question: who / what uses the cephfs
> flags, and what for? Do I, as a storage admin, need to care about this
> at all?

Operators should only care about "is X flag turned on", but we don't
really show that very well in the MDSMap dump. I'll make a note to
improve that. We'd really rather not have operators doing bitwise
arithmetic on the flags bitfield to determine which features are turned
on.

https://tracker.ceph.com/issues/48683
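
In the meantime, checking whether a single flag is set is just a bit
test, e.g. with the same assumed bit positions as in the sketch above:

    standby_replay_allowed = bool(0x3e & (1 << 5))  # True for your "after" value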

> But hey, here we are, and now I would like to understand it.
>
> If, just for the sake of upgrading clusters to have identical features,
> I would like to "upgrade" the cephfs to support all ceph fs features, I
> don't seem to be able to do that:
>
> ceph fs set cephfs
> Invalid command: missing required parameter
> var(max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client)
> fs set <fs_name>
> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client
> <val> {--yes-i-really-mean-it} :  set fs parameter <var> to <val>
> Error EINVAL: invalid command
>
> I can choose between these:
> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client
>
> Most of them I would not like to set at all (e.g. down, joinable,
> max_mds) as they are not "features" but merely a way to put the ceph fs
> in a certain STATE.
>
> So my question is: what do I need to enable on an upgraded fs to get,
> say, "flags 12" (Nautilus with snapshot support enabled, AFAIK). Is that
> at all possible?

I'll also add a note to list features that can be turned on:
https://tracker.ceph.com/issues/48682
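
For what it's worth, if the bit positions in the earlier sketch hold,
"flags 12" (0x12) would decode to just snapshot support:

    print(decode(0x12))  # ['ALLOW_SNAPS', 'ALLOW_MULTIMDS_SNAPS']

i.e. snapshots enabled without the legacy multi-MDS/dirfrag bits that a
filesystem upgraded from pre-Nautilus carries forward. Again, that
mapping is an assumption; verify it against your version's source.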

> The reason why I started this whole thread was to eliminate any Ceph
> config related difference between production and test. But maybe I
> should ask a different question: does a (ceph-fuse / kernel) client use
> the *cephfs flags* bits at all? If not, then we don't have to focus on
> this, and we can conclude that we cannot reproduce the issue on our test
> environment.

ceph-fuse/kernel clients don't use these flags. Only the MDS does.

> I hope the above makes sense to you ;-).
>
> Thanks,
>
> Gr. Stefan
>


-- 
Patrick Donnelly, Ph.D.
He / Him / His
Principal Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


