Re: Using ceph.conf for CephFS kernel client with Nautilus cluster

On Thu, 2022-02-03 at 16:52 +0100, William Edwards wrote:
> Hi,
> 
> Jeff Layton schreef op 2022-02-03 15:36:
> > On Thu, 2022-02-03 at 15:26 +0100, William Edwards wrote:
> > > Hi,
> > > 
> > > Jeff Layton schreef op 2022-02-03 14:45:
> > > > On Thu, 2022-02-03 at 12:01 +0100, William Edwards wrote:
> > > > > Hi,
> > > > > 
> > > > > I need to set options from
> > > > > https://docs.ceph.com/en/nautilus/cephfs/client-config-ref/ . I assume
> > > > > these should be placed in the 'client' section in ceph.conf.
> > > > > 
> > > > > The documentation for Nautilus says that ceph.conf should be present
> > > > > on the client when FUSE is used, see:
> > > > > https://docs.ceph.com/en/nautilus/cephfs/mount-prerequisites/ .
> > > > > However, ceph.conf is not mentioned on
> > > > > https://docs.ceph.com/en/nautilus/cephfs/fstab/#kernel-driver .
> > > > > Therefore, the clients don't currently have an /etc/ceph/ceph.conf.
> > > > > 
> > > > > In contrast, the documentation for Pacific says that there **must**
> > > > > be a ceph.conf in any case:
> > > > > https://docs.ceph.com/en/latest/cephfs/mount-prerequisites/#general-pre-requisite-for-mounting-cephfs
> > > > > 
> > > > > Newer Ceph versions contain the command 'ceph config
> > > > > generate-minimal-conf'. I can deduce from the command's code what
> > > > > ceph.conf on the client should look like:
> > > > > https://github.com/ceph/ceph/blob/master/src/mon/ConfigMonitor.cc#L423
> > > > > 
> > > > > L428: [global]
> > > > > L429: fsid
> > > > > L430 - L448: mon_host (not sure what 'is_legacy' and 'size() == 1'
> > > > > entail; I guess I'll see)
> > > > > L449: newline
> > > > > L450 - L458: This is deduced from
> > > > > https://github.com/ceph/ceph/blob/a67d1cf2a7a4031609a5d37baa01ffdfef80e993/src/mon/ConfigMap.cc#L98
> > > > > . get_minimal_conf only adds options with the flags FLAG_NO_MON_UPDATE
> > > > > or FLAG_MINIMAL_CONF, but I don't see any 'set_flags' statements in
> > > > > master; so I'm not sure which options have those flags.
> > > > > 
> > > > > So the resulting config would contain the global section with 'fsid'
> > > > > and 'mon_host', my custom options in 'client', and possibly 'keyring'.
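> > > > > 
> > > > > For illustration, a minimal client-side ceph.conf along those lines
> > > > > might look something like this (the fsid, mon addresses, and the
> > > > > client option shown are placeholders, not values from our cluster):
> > > > > 
> > > > >   [global]
> > > > >   fsid = 01234567-89ab-cdef-0123-456789abcdef
> > > > >   mon_host = 10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789
> > > > > 
> > > > >   [client]
> > > > >   client cache size = 16384
> > > > >   keyring = /etc/ceph/ceph.client.admin.keyring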
> > > > > 
> > > > > Questions:
> > > > > 
> > > > > - Is it acceptable to use a ceph.conf on the kernel client when using
> > > > > a Nautilus cluster? It can be specified as the 'conf' mount option,
> > > > > but as the documentation barely mentions it for kernel clients, I'm
> > > > > not 100% sure.
> > > > > - Is my evaluation of the 'minimal' config correct?
> > > > > - Which options have the FLAG_NO_MON_UPDATE and FLAG_MINIMAL_CONF
> > > > > flags? / Where are flags set?
> > > > > 
> > > > > The cluster is running Ceph 14.2.22. The clients are running Ceph
> > > > > 12.2.11. All clients use the kernel client.
> > > > > 
> > > > 
> > > > The in-kernel client itself does not pay any attention to ceph.conf.
> > > > The mount helper program (mount.ceph) will look at the ceph configs and
> > > > keyrings to search for mon addresses and secrets for mounting if you
> > > > don't provide them in the device string and mount options.
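> > > > 
> > > > For example, a mount that passes the mon address and secret directly
> > > > and doesn't rely on ceph.conf at all could look something like this
> > > > (addresses and paths are made up):
> > > > 
> > > >   mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
> > > >     -o name=admin,secretfile=/etc/ceph/admin.secret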
> > > 
> > > Are you saying that the options from
> > > https://docs.ceph.com/en/nautilus/cephfs/client-config-ref/ won't take
> > > effect when using the kernel client?
> > > 
> > 
> > Yes. Those are ignored by the kernel client.
> 
> Thanks. I was hoping to set 'client cache size'. Is there any other way 
> to set it when using the kernel client? I doubt switching to FUSE will 
> help in solving the performance issue I'm trying to tackle (which is 
> what I want to set 'client cache size' for :-) ).
> 

No, not really.

We don't limit the amount of pagecache in use by a particular mount in
the kernel. If you want to limit the amount of pagecache in use, then
you have to tune generic VM settings like the /proc/sys/vm/dirty_*
settings.
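
For example, something along these lines (the values are purely
illustrative; what's appropriate depends on RAM size and workload):

  # start background writeback earlier, and throttle writers sooner
  sysctl -w vm.dirty_background_ratio=5
  sysctl -w vm.dirty_ratio=10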

Alternatively, you can investigate cgroups if you want to limit the amount
of memory a particular application is allowed to dirty at a time.
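
As a rough sketch with cgroup v2 (the cgroup name and limit are made up,
and the memory controller must be enabled in the parent's
cgroup.subtree_control):

  mkdir /sys/fs/cgroup/myapp
  echo 4G > /sys/fs/cgroup/myapp/memory.high
  echo "$APP_PID" > /sys/fs/cgroup/myapp/cgroup.procs
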
-- 
Jeff Layton <jlayton@xxxxxxxxxx>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


