Re: cephfs flags question

Hi,

On 12/17/20 8:57 PM, Patrick Donnelly wrote:
On Thu, Dec 17, 2020 at 11:35 AM Stefan Kooman <stefan@xxxxxx> wrote:

On 12/17/20 7:45 PM, Patrick Donnelly wrote:


When a file system is newly created, it's assumed you want all the
stable features on, including multiple MDS, directory fragmentation,
snapshots, etc. That's what those flags are for. If you've been
upgrading your cluster, you need to turn those on yourself.
OK, fair enough. So I tried enabling "allow_dirfrags", which gives me:

ceph fs set cephfs allow_dirfrags true
Directory fragmentation is now permanently enabled. This command is
DEPRECATED and will be REMOVED from future releases.

And I enabled snapshot support:

ceph fs set cephfs allow_new_snaps true
enabled new snapshots

However, this has not changed the "flags" of the filesystem in any way.
So I guess there are still features not enabled that are enabled on
newly installed clusters. Where can I find a list of features that I can
enable?

Apologies for linking code:
https://github.com/ceph/ceph/blob/master/src/include/ceph_fs.h#L275-L285

No problem.
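
Looking at those lines, I assume these are the CEPH_MDSMAP_* bits that make up the "flags" value. If I read the header correctly (please correct me if I got the bits wrong), it is roughly:

bit 0 (0x01)  NOT_JOINABLE / DOWN
bit 1 (0x02)  ALLOW_SNAPS
bit 2 (0x04)  ALLOW_MULTIMDS (deprecated)
bit 3 (0x08)  ALLOW_DIRFRAGS (deprecated)
bit 4 (0x10)  ALLOW_MULTIMDS_SNAPS
bit 5 (0x20)  ALLOW_STANDBY_REPLAY

I'll come back to this reading in the example below.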


I have searched through the documentation, but I don't see anything related. It's also not described or suggested in the part about upgrading
the MDS cluster (IMHO that would be a logical place) [1].

You're the first person I'm aware of asking for this. :)

Somehow I'm not surprised :-). But on the other hand, I am. I will try to explain; maybe this works best with an example.

Nautilus 14.2.4 cluster here (upgraded from luminous, mimic).

relevant part of ceph -s:

 mds: cephfs:1 {0=mds2=up:active} 1 up:standby-replay

^^ Two MDSes in this cluster: one active and one standby-replay.

ceph fs get cephfs |grep flags
flags   1e

Let's try to enable standby replay support:

ceph fs set cephfs allow_standby_replay true

That worked. Did the flags change?

ceph fs get cephfs |grep flags
flags   3e

Yes! But why? I would have expected standby replay support to be enabled already: how else would it work without setting this fs feature? But apparently it does work, and does not need this feature to be enabled like this. That might also explain why nobody ever wondered how to change the "ceph fs flags" in the first place. Is that correct?
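
For what it's worth, decoding the two values against my (possibly wrong) reading of the header above: 0x1e would be bits 1-4 (snaps, multimds, dirfrags, multimds_snaps), and 0x3e adds bit 5 (standby replay). A quick sanity check in the shell:

echo $(( 0x3e ^ 0x1e ))
32

32 = 0x20, i.e. exactly the standby replay bit. That would also explain why the allow_dirfrags and allow_new_snaps commands earlier did not change the flags: those bits were apparently already set in 0x1e.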

At this point I ask myself the question: who or what uses the cephfs flags, and what for? Do I, as a storage admin, need to care about this at all?

But hey, here we are, and now I would like to understand it.

If, just so that an upgraded cluster ends up with the same features as a newly created one, I wanted to "upgrade" the cephfs to support all ceph fs features, I don't seem to be able to do that:

ceph fs set cephfs
Invalid command: missing required parameter var(max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client)
fs set <fs_name> max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client <val> {--yes-i-really-mean-it} : set fs parameter <var> to <val>
Error EINVAL: invalid command

I can choose between these: max_mds|max_file_size|allow_new_snaps|inline_data|cluster_down|allow_dirfrags|balancer|standby_count_wanted|session_timeout|session_autoclose|allow_standby_replay|down|joinable|min_compat_client

Most of them I would not want to set at all (e.g. down, joinable, max_mds), as they are not "features" but merely a way to put the ceph fs in a certain STATE.

So my question is: what do I need to enable on an upgraded fs to get, say, "flags 12" (Nautilus with snapshot support enabled, AFAIK)? Is that at all possible?
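
(If I decode "12" the same way, again assuming my reading of the header is right, 0x12 would be just bits 1 and 4, i.e. snapshots plus multimds-snaps, without the deprecated multimds/dirfrags bits 2 and 3 that our upgraded fs still carries. So the numbers might simply never line up, but I may well be misreading this.)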

The reason I started this whole thread was to eliminate any Ceph config related difference between production and test. But maybe I should ask a different question: does a (ceph-fuse / kernel) client use the *cephfs flags* bits at all? If not, then we don't have to focus on this, and we can conclude that we cannot reproduce the issue in our test environment.

I hope the above makes sense to you ;-).

Thanks,

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


