Re: how should we manage ganesha's export tables from ceph-mgr ?

On Thu, 2019-01-03 at 13:19 -0800, Patrick Donnelly wrote:
> Hi Jeff,
> 
> Thanks for writing this up. A few comments in-line:
> 
> On Thu, Jan 3, 2019 at 11:05 AM Jeff Layton <jlayton@xxxxxxxxxx> wrote:
> > [...]
> > My thinking was that we'd probably want to create a new mgr module for
> > that, and could wire it up to the command line with something like:
> > 
> >     $ ceph nfsexport create --id=100                    \
> >                         --pool=mypool                   \
> >                         --namespace=mynamespace         \
> 
> I view these three options above as required so I'm wondering if it
> should just be either a single positional argument or required option.
> Then it would just be formatted as the rados url:
> rados://pool/namespace/100
> 

Good idea; squashing those three fields into a single --url option might be best.
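Something like this, maybe (just a sketch; the --url spelling isn't settled,
and the remaining export options would follow as before):

    $ ceph nfsexport create --url=rados://mypool/mynamespace/100 \
                            ...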

> I'd also suggest reversing the object names (if it's not too late for
> that) so it sorts better. Objects could be named "100/{conf,exports}".
> 
> To be clear, the purpose of this command is solely to setup the RADOS
> objects storing the conf/exports for the Ganesha cluster?
> 

Definitely not too late to change the naming format. None of the dependent
pieces have been merged, and I haven't written any code for this part yet.

I'm curious though -- why would you want to put the unique bit first?

> >  [...]
> > From there, we'd need to also be able to "link" and "unlink" these
> > export objects into the config files for each daemon. So if I have a
> > cluster of 2 servers with nodeids "a" and "b":
> > 
> >     $ ceph nfsexport link --pool=mypool                 \
> >                         --namespace=mynamespace         \
> >                         --id=100                        \
> >                         --node=a                        \
> >                         --node=b
> > 
> > ...with a corresponding "unlink" command. That would append this line to
> > objects called "conf-a" and "conf-b":
> 
> I got lost here. Assuming we're starting two Ganesha servers for a
> single cluster (active=2), wouldn't they have the same export block
> (i.e. export-100)? And, wouldn't that export block already be "linked"
> into the config (conf-100)?
> 
> 

No, it goes something like this:

Each ganesha daemon started by rook gets its own "supplemental" config
file. So we end up with objects that look like this:

For ganesha node "a" - rados://mypool/mynamespace/conf-a
For ganesha node "b" - rados://mypool/mynamespace/conf-b

Rook's ganesha CRD creates empty objects if these don't exist when the
daemon is first started, but we can pre-populate them too.
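For illustration, with a two-node cluster and a single export, the namespace
might end up looking something like this (a sketch; export id 100 is just the
running example):

    $ rados -p mypool -N mynamespace ls
    conf-a
    conf-b
    export-100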

In most cases these files will likely be identical across the cluster, but
having a per-node config lets us eventually use migration to shift clients
off of a server before downsizing the cluster.

So, the real question is what should go in the conf-* objects?

I was proposing that we put each EXPORT block into its own export-<id>
object, and then just add a series of %url directives to the conf-* objects
to slurp each one in.
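Concretely, something like this (a rough sketch; the EXPORT parameters are
purely illustrative):

    # rados://mypool/mynamespace/export-100
    EXPORT {
        Export_ID = 100;
        Path = "/";
        Pseudo = "/cephfs";
        FSAL {
            Name = CEPH;
        }
    }

    # rados://mypool/mynamespace/conf-a
    %url rados://mypool/mynamespace/export-100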

That said, there are many ways to set this up, so I'm definitely open to
suggestions here.

> (You've spoken in the past about chaining export blocks too but I'm
> not convinced we want that.)

Ganesha can chain them, but I agree it's better to keep the hierarchy as
flat as possible.

> >     %url rados://mypool/mynamespace/export-100
> > 
> > ...and then call into the orchestrator to send a SIGHUP to the daemons
> > to make them pick up the new configs. We might also want to sanity check
> > whether any conf-* files are still linked to the export-* files before
> > removing those objects.
> 
> 

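For that last point, the sanity check could be as simple as grepping the
conf-* objects for the export's %url line before removing it. A rough sketch,
assuming the object names above and that "rados get" can write to stdout
with "-":

    $ for n in a b; do
          rados -p mypool -N mynamespace get conf-$n - | \
              grep -q export-100 && echo "export-100 still linked in conf-$n"
      done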
-- 
Jeff Layton <jlayton@xxxxxxxxxx>



