On Thu, May 26, 2022 at 6:10 PM Xiubo Li <xiubli@xxxxxxxxxx> wrote:
>
> On 5/27/22 8:44 AM, Jeff Layton wrote:
> > On Fri, 2022-05-27 at 08:36 +0800, Xiubo Li wrote:
> >> On 5/27/22 2:39 AM, Jeff Layton wrote:
> >>> A question:
> >>>
> >>> How do the MDS's discover this setting? Do they get it from the mons? If
> >>> so, I wonder if there is a way for the clients to query the mon for this
> >>> instead of having to extend the MDS protocol?
> >> It sounds like what the "max_file_size" does, which will be recorded in
> >> the 'mdsmap'.
> >>
> >> While currently the "max_xattr_pairs_size" is a per-daemon MDS option,
> >> and each MDS could be set to a different value.
> >>
> > Right, but the MDS's in general don't use local config files. Where are
> > these settings stored? Could the client (potentially) query for them?
>
> AFAIK, each process in ceph will have its own copy of the
> "CephContext". I don't know how to query all of them, but I know there
> are some APIs, such as "rados_conf_set/get", that could do similar things.
>
> Not sure whether it will work in our case.
>
> > I'm pretty sure the client does fetch and parse the mdsmap. If it's
> > there then it could grab the setting for all of the MDS's at mount time
> > and settle on the lowest one.
> >
> > I think a solution like that might be more resilient than having to
> > fiddle with feature bits and such...
>
> Yeah, IMO just making this option work like "max_file_size" is more
> appropriate.

Makes sense to me. This is really a property of the filesystem, not of a daemon, so it should be propagated through common filesystem state. I guess Luis' https://github.com/ceph/ceph/pull/46357 should be updated to do it that way? I see some discussion there about handling old clients which don't recognize these limits as well.

-Greg
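For what it's worth, the "settle on the lowest one" idea above could be sketched roughly as follows. This is purely illustrative: the dict layout standing in for the decoded mdsmap, the per-MDS field name, and the 64 KiB fallback are assumptions, not the real mdsmap encoding.

```python
# Sketch of Jeff's suggestion: the client already fetches and decodes the
# mdsmap, so it could read each MDS's advertised xattr limit from it at
# mount time and use the most conservative (lowest) value.
# NOTE: field names and the 64 KiB fallback are assumed for illustration.

DEFAULT_MAX_XATTR_PAIRS_SIZE = 64 * 1024  # assumed fallback for MDSs that
                                          # don't advertise a limit

def effective_max_xattr_pairs_size(mdsmap):
    """Return the lowest advertised xattr limit across all MDSs in the map."""
    limits = [
        info.get("max_xattr_pairs_size", DEFAULT_MAX_XATTR_PAIRS_SIZE)
        for info in mdsmap["info"].values()
    ]
    return min(limits) if limits else DEFAULT_MAX_XATTR_PAIRS_SIZE
```

Picking the minimum across daemons means a write accepted by the client is accepted by every MDS, even when the ranks are configured inconsistently.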