Hi all,
Currently, there are two ways to create exports: the mgr/volume/nfs module and the
dashboard. Both use the same code[1][2], with modifications, to create exports.
Recently, there was a meeting to discuss integrating the dashboard with the
volume/nfs module, and a number of todo items were identified.
Below is a brief description of the export creation workflow:
1) mgr/volume/nfs module [3]
* It was introduced in Octopus.
* It automates the pool and cluster creation with "ceph nfs cluster create" command.
* It currently uses 'cephadm' as the backend. In the future, 'rook' will also be supported.
* Default exports can be created with
'ceph nfs export create cephfs <fsname> <clusterid> <binding> [--readonly] [--path=/path/in/cephfs]'.
Otherwise, the `ceph nfs cluster config set <clusterid> -i <config_file>` command
can be used to create user-defined exports or even to modify the Ganesha
configuration (see the sketch after this list).
* RGW exports are not supported [4]. We need someone to help with it.
* Exports can be listed, fetched and deleted, but cannot currently be modified [5].
* Only NFSv4 is supported. It provides better cache management, parallelism,
compound operations, and lease-based locks than previous versions.
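To make the module workflow concrete, here is a rough sketch; the cluster name, fs
name, pseudo path and cephx user below are placeholders, not taken from this thread:

    # create the cluster (pool + ganesha daemons via cephadm) and a default export
    ceph nfs cluster create cephfs mycluster
    ceph nfs export create cephfs myfs mycluster /cephfs --path=/

    # or push user-defined exports / extra ganesha configuration from a file
    ceph nfs cluster config set mycluster -i export.conf

    # export.conf -- a minimal ganesha EXPORT block
    EXPORT {
        Export_Id = 100;
        Path = "/";
        Pseudo = "/cephfs";
        Protocols = 4;
        Access_Type = RW;
        Squash = None;
        FSAL {
            Name = CEPH;
            Filesystem = "myfs";
            User_Id = "nfs.mycluster.1";   # placeholder cephx user
        }
    }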
2) Dashboard[6]
* The pool and NFS cluster need to be created explicitly.
* It also requires the
"ceph dashboard set-ganesha-clusters-rados-pool-namespace <pool_name>[/<namespace>]"
command to be run before exports can be created (see the example after this list).
The following options need to be specified: cluster id, daemons, path, pseudo path,
access type, squash, security label, protocols [3, 4], transport [udp, tcp],
cephfs user id, cephfs name.
* It supports both cephfs and rgw exports.
* Exports can be modified, listed, fetched and deleted.
* Available since Nautilus.
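As a rough illustration of that prerequisite (the pool and namespace names below
are placeholders, and the ganesha daemons themselves still have to be deployed
separately):

    # pool that holds the ganesha config/export objects
    ceph osd pool create nfs-ganesha 64
    ceph osd pool application enable nfs-ganesha nfs

    # tell the dashboard which pool/namespace the ganesha objects live in
    ceph dashboard set-ganesha-clusters-rados-pool-namespace nfs-ganesha/ganesha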
We would like to create a common code base for both and eventually go in a
direction where the dashboard may use the volume/nfs plugin for configuring
NFS clusters.
These are the issues we identified in our meeting:
* Differences in user workflow between the volume/nfs module and the dashboard.
* RGW exports need to be supported in the volume/nfs module.
* The dashboard does not want to depend on the orchestrator in the future for
fetching the cluster pool and namespace.
* The dashboard creates a config object per daemon, containing the RADOS URLs of
the export objects.
* In cephadm, all daemons within the cluster watch a single config object. This
config object contains the RADOS URLs of the export objects (see the diagram and
the sketch below).
  rados://$pool/$namespace/export-$i        rados://$pool/$namespace/userconf-nfs.$svc
           (export config)                             (user defined config)
  +-----------+    +-----------+    +-----------+    +-----------+
  |           |    |           |    |           |    |           |
  | export-1  |    | export-2  |    | export-3  |    |  export   |
  |           |    |           |    |           |    |           |
  +-----+-----+    +-----+-----+    +-----+-----+    +-----+-----+
        ^                ^                ^                ^
        |                |                |                |
        +----------------+---------+------+----------------+
                              %url |
                                   |
                         +---------+---------+
                         |                   |   rados://$pool/$namespace/conf-nfs.$svc
                         |   conf-nfs.$svc   |   (common config)
                         |                   |
                         +---------+---------+
                                   ^
                                   |
                         watch_url |
             +---------------------+---------------------+
             |                     |                     |
             |                     |                     |        RADOS
  +------------------------------------------------------------------------+
             |                     |                     |       CONTAINER
   watch_url |           watch_url |           watch_url |
             |                     |                     |
    +--------+-------+    +--------+-------+    +--------+-------+
    |                |    |                |    |                |   /etc/ganesha/ganesha.conf
    |   nfs.$svc.a   |    |   nfs.$svc.b   |    |   nfs.$svc.c   |   (bootstrap config)
    |                |    |                |    |                |
    +----------------+    +----------------+    +----------------+
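For reference, a rough sketch of what these objects might contain; the pool,
namespace, object names and the cephx user are placeholders following the diagram
above:

    # rados://$pool/$namespace/conf-nfs.$svc -- common config watched by all daemons
    %url rados://$pool/$namespace/export-1
    %url rados://$pool/$namespace/export-2
    %url rados://$pool/$namespace/export-3
    %url rados://$pool/$namespace/userconf-nfs.$svc

    # /etc/ganesha/ganesha.conf -- bootstrap config inside each container
    RADOS_URLS {
        UserId = "nfs.$svc.a";                               # placeholder cephx user
        watch_url = "rados://$pool/$namespace/conf-nfs.$svc";
    }
    %url rados://$pool/$namespace/conf-nfs.$svc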
In our next meeting, we’d like to decide on a way forward for reconciling these issues.
Thanks,
Varsha