Hi Meghana,
indeed, as I just reported to Soumya and Malahal too, the new version solves the problem.
Many thanks to everyone for fixing it!
Alessandro

On Fri, 2015-06-19 at 03:28 -0400, Meghana Madhusudhan wrote:
> I see that NFS-Ganesha is no longer running after the unexport. Can you move to the latest sources, which include Soumya's fix, and try once?
> Even if the config file is still included in ganesha.conf, dynamically removing that export should not affect the process itself.
>
> Meghana
>
> ----- Original Message -----
> From: "Alessandro De Salvo" <Alessandro.DeSalvo@xxxxxxxxxxxxx>
> To: "Meghana Madhusudhan" <mmadhusu@xxxxxxxxxx>
> Cc: gluster-users@xxxxxxxxxxx, nfs-ganesha-devel@xxxxxxxxxxxxxxxxxxxxx, "Soumya Koduri" <skoduri@xxxxxxxxxx>
> Sent: Thursday, June 18, 2015 7:55:30 PM
> Subject: Re: [Nfs-ganesha-devel] Problems in /usr/libexec/ganesha/dbus-send.sh and ganesha dbus interface when disabling exports from gluster
>
> Hi Meghana,
>
> > On 18 Jun 2015, at 16:06, Meghana Madhusudhan <mmadhusu@xxxxxxxxxx> wrote:
> >
> > Hi Alessandro,
> >
> > I need the following output from you.
> >
> > 1. After you execute the ganesha.enable on command:
> >
> > ps aux | grep ganesha
>
> root  6699 94.0  1.1 953488 193272 ?  Ssl  Jun17 1137:36 /usr/bin/ganesha.nfsd -L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT -p /var/run/ganesha.nfsd.pid
> root 22263  0.0  0.0 112644    960 pts/0  S+  16:14  0:00 grep --color=auto ganesha
>
> (Both nodes of the cluster show the same, apart from the PIDs, of course.)
>
> > showmount -e localhost
>
> Export list for node1:
> /atlas-backup-01 (everyone)
>
> The same output is shown on the other node.
>
> > cat /etc/ganesha/ganesha.conf
>
> # Ganesha options file
> %include "/etc/ganesha/ganesha-opts.conf"
> %include "/etc/ganesha/exports/export.atlas-backup-01.conf"
>
> The first include just points to a different file with the following lines, to change the rquota port:
>
> NFS_Core_Param {
>         # Use a non-privileged port for RQuota
>         Rquota_Port = 4501;
> }
>
> > cat "/etc/ganesha/exports/export.VOLNAME.conf" or "/usr/..../export.VOLNAME.conf"
>
> # WARNING : Using Gluster CLI will overwrite manual
> # changes made to this file. To avoid it, edit the
> # file, copy it over to all the NFS-Ganesha nodes
> # and run ganesha-ha.sh --refresh-config.
> EXPORT{
>         Export_Id = 2;
>         Path = "/atlas-backup-01";
>         FSAL {
>                 name = GLUSTER;
>                 hostname = "localhost";
>                 volume = "atlas-backup-01";
>         }
>         Access_type = RW;
>         Disable_ACL = true;
>         Squash = "No_root_squash";
>         Pseudo = "/atlas-backup-01";
>         Protocols = "3", "4";
>         Transports = "UDP", "TCP";
>         SecType = "sys";
> }
>
> > tail /var/log/ganesha.log or wherever the ganesha log is.
>
> 17/06/2015 20:01:00 : epoch 5581b5da : node2 : ganesha.nfsd-26983[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
> 17/06/2015 20:01:00 : epoch 5581b5da : node2 : ganesha.nfsd-26983[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
> 17/06/2015 20:01:00 : epoch 5581b5da : node2 : ganesha.nfsd-26983[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
> 17/06/2015 20:01:00 : epoch 5581b5da : node2 : ganesha.nfsd-26983[reaper] nfs_in_grace :STATE :EVENT :NFS Server Now IN GRACE
> 17/06/2015 20:01:00 : epoch 5581b5da : node2 : ganesha.nfsd-26983[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
> 17/06/2015 20:01:01 : epoch 5581b5da : node2 : ganesha.nfsd-26983[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
> 17/06/2015 20:01:01 : epoch 5581b5da : node2 : ganesha.nfsd-26983[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
> 17/06/2015 20:01:01 : epoch 5581b5da : node2 : ganesha.nfsd-26983[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
> 17/06/2015 20:02:00 : epoch 5581b5da : node2 : ganesha.nfsd-26983[reaper] nfs_in_grace :STATE :EVENT :NFS Server Now NOT IN GRACE
> 18/06/2015 16:13:50 : epoch 5581b5da : node2 : ganesha.nfsd-26983[dbus_heartbeat] glusterfs_create_export :FSAL :EVENT :Volume atlas-backup-01 exported at : '/'
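As a side note on the mechanics under test here: gluster's ganesha.enable on/off ultimately drives ganesha's export manager over D-Bus through the /usr/libexec/ganesha/dbus-send.sh helper. A minimal sketch of the two calls, using the export file and volume shown above; the exact EXPORT() expression that gluster passes to AddExport may differ between versions:

    # Dynamically add the export defined in the file (selects the EXPORT block by Path)
    dbus-send --print-reply --system --dest=org.ganesha.nfsd \
        /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport \
        string:/etc/ganesha/exports/export.atlas-backup-01.conf \
        string:"EXPORT(Path=/atlas-backup-01)"

    # Dynamically remove it again, addressed by its Export_Id
    dbus-send --print-reply --system --dest=org.ganesha.nfsd \
        /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport \
        uint16:2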
>
> > 2. gluster vol set VOLNAME ganesha.enable off
> >
> > Output of the same commands as above.
>
> # gluster vol set atlas-backup-01 ganesha.enable off
> volume set: failed: Dynamic export addition/deletion failed. Please see log file for details
>
> # ps aux | grep ganesha
> root 32033  0.0  0.0 112640  960 pts/0  S+  16:20  0:00 grep --color=auto ganesha
>
> [This is due to the fact that the export file was deleted but the config still references it, so ganesha fails to resume.]
>
> # cat /etc/ganesha/ganesha.conf
> # Ganesha options file
> %include "/etc/ganesha/ganesha-opts.conf"
> %include "/etc/ganesha/exports/export.atlas-backup-01.conf"
>
> # cat /etc/ganesha/exports/export.atlas-backup-01.conf
> cat: /etc/ganesha/exports/export.atlas-backup-01.conf: No such file or directory
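That stale %include, whose target file is gone, is what keeps ganesha from coming back up. A quick sanity check that could be run on each node, a sketch assuming the ganesha.conf layout shown above:

    # List %include targets in ganesha.conf that no longer exist on disk
    grep '^%include' /etc/ganesha/ganesha.conf |
        sed 's/^%include *"\(.*\)"/\1/' |
        while read -r f; do
            [ -e "$f" ] || echo "stale include: $f"
        done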
>
> # tail /var/log/ganesha.log
> 18/06/2015 16:19:20 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] glusterfs_create_export :FSAL :EVENT :Volume atlas-backup-01 exported at : '/'
> 18/06/2015 16:19:22 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
> 18/06/2015 16:19:22 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] lower_my_caps :NFS STARTUP :EVENT :currenty set capabilities are: = cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap+ep
> 18/06/2015 16:19:22 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
> 18/06/2015 16:19:22 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
> 18/06/2015 16:19:22 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] nfs4_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 60
> 18/06/2015 16:19:22 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
> 18/06/2015 16:19:22 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (2:2)
> 18/06/2015 16:19:22 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
> 18/06/2015 16:19:22 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] nfs_Start_threads :THREAD :EVENT :9P/TCP dispatcher thread was started successfully
> 18/06/2015 16:19:22 : epoch 5582d368 : node2 : ganesha.nfsd-30835[_9p_disp] _9p_dispatcher_thread :9P DISP :EVENT :9P dispatcher started
> 18/06/2015 16:19:23 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
> 18/06/2015 16:19:23 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
> 18/06/2015 16:19:23 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
> 18/06/2015 16:19:23 : epoch 5582d368 : node2 : ganesha.nfsd-30835[reaper] nfs_in_grace :STATE :EVENT :NFS Server Now IN GRACE
> 18/06/2015 16:19:23 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
> 18/06/2015 16:19:23 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
> 18/06/2015 16:19:23 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
> 18/06/2015 16:19:23 : epoch 5582d368 : node2 : ganesha.nfsd-30835[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
> 18/06/2015 16:20:22 : epoch 5582d3a6 : node2 : ganesha.nfsd-31542[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version /root/rpmbuild/BUILD/nfs-ganesha-2.2.0-3-0.1.1-Source, built at Jun 15 2015 22:13:18 on node2
> 18/06/2015 16:20:23 : epoch 5582d3a7 : node2 : ganesha.nfsd-31550[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version /root/rpmbuild/BUILD/nfs-ganesha-2.2.0-3-0.1.1-Source, built at Jun 15 2015 22:13:18 on node2
> 18/06/2015 16:20:23 : epoch 5582d3a7 : node2 : ganesha.nfsd-31557[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version /root/rpmbuild/BUILD/nfs-ganesha-2.2.0-3-0.1.1-Source, built at Jun 15 2015 22:13:18 on node2
> 18/06/2015 16:20:23 : epoch 5582d3a7 : node2 : ganesha.nfsd-31562[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version /root/rpmbuild/BUILD/nfs-ganesha-2.2.0-3-0.1.1-Source, built at Jun 15 2015 22:13:18 on node2
> 18/06/2015 16:20:23 : epoch 5582d3a7 : node2 : ganesha.nfsd-31567[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version /root/rpmbuild/BUILD/nfs-ganesha-2.2.0-3-0.1.1-Source, built at Jun 15 2015 22:13:18 on node2
> 18/06/2015 16:20:49 : epoch 5582d3c1 : node2 : ganesha.nfsd-31908[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version /root/rpmbuild/BUILD/nfs-ganesha-2.2.0-3-0.1.1-Source, built at Jun 15 2015 22:13:18 on node2
> 18/06/2015 16:20:49 : epoch 5582d3c1 : node2 : ganesha.nfsd-31932[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version /root/rpmbuild/BUILD/nfs-ganesha-2.2.0-3-0.1.1-Source, built at Jun 15 2015 22:13:18 on node2
> 18/06/2015 16:20:49 : epoch 5582d3c1 : node2 : ganesha.nfsd-31949[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version /root/rpmbuild/BUILD/nfs-ganesha-2.2.0-3-0.1.1-Source, built at Jun 15 2015 22:13:18 on node2
> 18/06/2015 16:20:49 : epoch 5582d3c1 : node2 : ganesha.nfsd-31956[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version /root/rpmbuild/BUILD/nfs-ganesha-2.2.0-3-0.1.1-Source, built at Jun 15 2015 22:13:18 on node2
> 18/06/2015 16:20:49 : epoch 5582d3c1 : node2 : ganesha.nfsd-31971[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version /root/rpmbuild/BUILD/nfs-ganesha-2.2.0-3-0.1.1-Source, built at Jun 15 2015 22:13:18 on node2
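The repeated "ganesha.nfsd Starting" entries look like the daemon being restarted and dying again on that stale %include. Until the script logic is fixed, a manual recovery along these lines should bring ganesha back; a sketch that assumes the stock nfs-ganesha systemd unit, and in a ganesha-ha cluster the cluster manager may need to drive the restart instead:

    # Drop the %include pointing at the deleted export file, then restart ganesha
    sed -i '\#exports/export.atlas-backup-01.conf#d' /etc/ganesha/ganesha.conf
    systemctl restart nfs-ganesha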
>
> I have not enabled FULL_DEBUG, but if you need it I can do it.
>
> > Thanks for your perseverance :)
>
> I have to thank you for the help! :-)
> Cheers,
>
> Alessandro
>
> >
> > Meghana
> >
> > ----- Original Message -----
> > From: "Alessandro De Salvo" <Alessandro.DeSalvo@xxxxxxxxxxxxx>
> > To: "Meghana Madhusudhan" <mmadhusu@xxxxxxxxxx>
> > Cc: gluster-users@xxxxxxxxxxx, nfs-ganesha-devel@xxxxxxxxxxxxxxxxxxxxx, "Soumya Koduri" <skoduri@xxxxxxxxxx>
> > Sent: Thursday, June 18, 2015 7:24:55 PM
> > Subject: Re: [Nfs-ganesha-devel] Problems in /usr/libexec/ganesha/dbus-send.sh and ganesha dbus interface when disabling exports from gluster
> >
> > Hi Meghana,
> >
> >> On 18 Jun 2015, at 07:04, Meghana Madhusudhan <mmadhusu@xxxxxxxxxx> wrote:
> >>
> >> On 06/17/2015 10:57 PM, Alessandro De Salvo wrote:
> >>> Hi,
> >>> when disabling exports from gluster 3.7.1 by using gluster vol set <volume> ganesha.enable off, I always get the following error:
> >>>
> >>> Error: Dynamic export addition/deletion failed. Please see log file for details
> >>>
> >>> This message is produced by the failure of /usr/libexec/ganesha/dbus-send.sh, and in fact if I manually perform the command to remove the share I see:
> >>
> >> You got it wrong. '/usr/libexec/ganesha/dbus-send.sh' is used by the Gluster CLI to unexport the volume ("gluster volume set <volname> ganesha.enable off"), which rightly deletes the export file too while un-exporting the volume.
> >>
> >>> # dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport uint16:2
> >>> Error org.freedesktop.DBus.Error.NoReply: Message did not receive a reply (timeout by message bus)
> >>>
> >>> So, there is a timeout and it fails completely.
> >>
> >> Check if nfs-ganesha is still running. There was a bug in unexporting the volume. It has been fixed recently in V2.3-dev, yet to be back-ported to the V2.2-stable branch.
> >> https://review.gerrithub.io/#/c/236129/
> >>
> >> Thanks,
> >> Soumya
> >>
> >>> In this case I think there is a bug in /usr/libexec/ganesha/dbus-send.sh, since it blindly deletes the share config if the RemoveExport fails (function check_cmd_status()), but leaves the %include inside ganesha.conf, as check_cmd_status() bails out at that point and the remaining removal statements are never executed. I believe the logic should be fixed here, otherwise even a restart of the service will fail due to the bad configuration.
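Spelling out the ordering suggested here as a minimal sketch, reusing the $VOL, $CONF and check_cmd_status names quoted from dbus-send.sh; $export_id stands in for the ID the script derives from the export file, and the real script has more around it:

    # Unexport first; only clean up the config once the dbus call has succeeded
    dbus-send --print-reply --system --dest=org.ganesha.nfsd \
        /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport \
        uint16:$export_id
    check_cmd_status $?           # bail out here on failure, before anything is deleted
    rm -f /etc/ganesha/exports/export.$VOL.conf
    sed -i /$VOL.conf/d $CONF     # drop the %include only after a successful unexport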
> >>
> >> Yes. I see that the "sed -i /$VOL.conf/d $CONF" is placed after the check_cmd_status. I shall send a fix upstream in a related bug. But dynamic export removal will fail in three cases:
> >> 1. nfs-ganesha is not running.
> >
> > No, it was running.
> >
> >> 2. The export file that is particular to that volume is somehow deleted before you perform the removal. It does depend on that file to get the export ID.
> >
> > I tried to comment out the rm in check_cmd_status to avoid this race condition, but it did not solve the problem.
> >
> >> 3. The bug that Soumya pointed out.
> >
> > This might well be the real cause!
> >
> >> If it is failing consistently, there could be something that you are missing. If you can send the exact sequence of steps that you have executed, I can help you with it.
> >
> > Yes, it's failing consistently, unless, as I said, I do a DisplayExport before the RemoveExport, in which case it always works.
> >
> >> Ideally, after exporting a particular volume, you'll see an entry in the /etc/ganesha/ganesha.conf file and the export file in the "/etc/ganesha/exports" dir.
> >
> > And this works perfectly, I see them correctly.
> >
> >> If you have this in place and nfs-ganesha running, then dynamic export removal should work just fine.
> >
> > But this is not the case, at least for me.
> > The commands I'm using are just the following:
> >
> > gluster vol set <volume> ganesha.enable on
> > gluster vol set <volume> ganesha.enable off
> >
> > I normally wait a few seconds between the two commands, to give ganesha time to actually export the volume.
> > The unexport is always failing as described, unless I add the DisplayExport in dbus-send.sh before RemoveExport.
> > Many thanks for the help,
> >
> > Alessandro
> >
> >>
> >> Meghana
> >>
> >>>
> >>> What's more worrying is the problem with the dbus. Issuing a DisplayExport before the RemoveExport apparently fixes the problem, so something like this always works:
> >>>
> >>> # dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.DisplayExport uint16:2
> >>> # dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport uint16:2
> >>>
> >>> So, it's like the DisplayExport is somehow forcing a refresh that is needed by the RemoveExport. Any idea why?
> >>> I'm using the latest version of ganesha 2.2.0, i.e. 2.2.0-3.
> >>> Thanks,
> >>>
> >>> Alessandro
> >>>
> >>> PS: sorry for reporting so many things in a few days :-)

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users