Hi Cephers,

I am building an NFS gateway for our test Ceph cluster, based on Ceph Pacific 16.2.7. I have to use a manual installation, so the cluster does not have an orchestrator enabled. I have 3 MDS, 6 OSD, 2 RadosGW and 2 NFS GW nodes. All components, including the RadosGW dashboard, work fine, but the NFS page returns a "Page not found" message along with an additional notification: "500 - Internal Server Error: The server encountered an unexpected condition which prevented it from fulfilling the request."

The ceph-mgr log shows the following messages:

2022-02-01T12:51:06.067+0100 7fb6a4d5e700 -1 Remote method threw exception: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/nfs/module.py", line 154, in cluster_ls
    return available_clusters(self)
  File "/usr/share/ceph/mgr/nfs/utils.py", line 19, in available_clusters
    completion = mgr.describe_service(service_type='nfs')
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1377, in inner
    completion = self._oremote(method_name, args, kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1443, in _oremote
    raise NoOrchestrator()
orchestrator._interface.NoOrchestrator: No orchestrator configured (try `ceph orch set backend`)

2022-02-01T12:51:06.071+0100 7fb6a4d5e700 0 [dashboard ERROR exception] Internal Server Error
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/dashboard/services/exception.py", line 46, in dashboard_exception_handler
    return handler(*args, **kwargs)
  File "/lib/python3/dist-packages/cherrypy/_cpdispatch.py", line 60, in __call__
    return self.callable(*self.args, **self.kwargs)
  File "/usr/share/ceph/mgr/dashboard/controllers/_base_controller.py", line 258, in inner
    ret = func(*args, **kwargs)
  File "/usr/share/ceph/mgr/dashboard/controllers/nfs.py", line 99, in status
    mgr.remote('nfs', 'cluster_ls')
  File "/usr/share/ceph/mgr/mgr_module.py", line 1770, in remote
    args, kwargs)
RuntimeError: Remote method threw exception: Traceback (most recent call last):
  File "/usr/share/ceph/mgr/nfs/module.py", line 154, in cluster_ls
    return available_clusters(self)
  File "/usr/share/ceph/mgr/nfs/utils.py", line 19, in available_clusters
    completion = mgr.describe_service(service_type='nfs')
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1377, in inner
    completion = self._oremote(method_name, args, kwargs)
  File "/usr/share/ceph/mgr/orchestrator/_interface.py", line 1443, in _oremote
    raise NoOrchestrator()
orchestrator._interface.NoOrchestrator: No orchestrator configured (try `ceph orch set backend`)

2022-02-01T12:51:06.071+0100 7fb6a4d5e700 0 [dashboard ERROR request] [::ffff:10.41.172.109:63087] [GET] [500] [0.010s] [admin] [2.0K] /api/nfs-ganesha/status
2022-02-01T12:51:06.071+0100 7fb6a4d5e700 0 [dashboard ERROR request] [b'{"status": "500 Internal Server Error", "detail": "The server encountered an unexpected condition which prevented it from fulfilling the request.", "request_id": "83726252-5fe3-4e8d-b55e-e4ca4045fc11", "traceback": "[...same NoOrchestrator traceback as above, JSON-escaped...]", "version": "8.9.1"}']
2022-02-01T12:51:06.071+0100 7fb6a4d5e700 0 [dashboard INFO request] [::ffff:10.41.172.109:63087] [GET] [500] [0.011s] [admin] [2.0K] /api/nfs-ganesha/status

The cluster is in a normal state and the NFS nodes are visible and working:

# ceph -s
  cluster:
    id:     3ebc4036-d550-46a6-a970-ba8386b763e9
    health: HEALTH_WARN
            Dashboard debug mode is enabled

  services:
    mon:     3 daemons, quorum nlut-unixcephmon01,nlut-unixcephmon02,nlut-unixcephmon03 (age 11d)
    mgr:     nlut-unixcephmon01(active, since 3d), standbys: nlut-unixcephmon03, nlut-unixcephmon02
    osd:     6 osds: 6 up (since 6d), 6 in (since 6d)
    rgw:     4 daemons active (4 hosts, 1 zones)
    rgw-nfs: 2 daemons active (2 hosts, 1 zones)

  data:
    pools:   9 pools, 257 pgs
    objects: 297 objects, 7.7 KiB
    usage:   5.4 GiB used, 295 GiB / 300 GiB avail
    pgs:     257 active+clean

NFS-Ganesha configuration file:

EXPORT {
    Export_ID = 100;
    Path = "/";
    Pseudo = "/";
    Access_Type = RW;
    SecType = "sys";
    NFS_Protocols = 4;
    Transport_Protocols = TCP;
    Squash = No_Root_Squash;

    FSAL {
        Name = RGW;
        User_Id = "nfs";
        Access_Key_Id = "*****";
        Secret_Access_Key = "****";
    }
}

RGW {
    name =
        "client.rgw.cephnfs02";
    ceph_conf = "/etc/ceph/ceph.conf";
    init_args = "--{arg}={arg-value}";
}

Could you please advise if it is possible to set up the dashboard for NFS gateways? Thank you in advance.

Aleksandr.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
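P.S. For completeness, these are the commands the NoOrchestrator error message points at. This is only a sketch of how a backend would be configured; I have not enabled one on this cluster, and cephadm is just one possible backend (it may not be appropriate for a manually deployed cluster):

```shell
# Show whether any orchestrator backend is currently configured;
# with no backend this matches the NoOrchestrator traceback above.
ceph orch status

# The dashboard's NFS page calls describe_service() through the
# orchestrator module, so a backend would have to be enabled first,
# for example (cephadm shown purely as an illustration):
ceph mgr module enable cephadm
ceph orch set backend cephadm
```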