[nfs-ganesha RFC PATCH v2 05/13] SAL: add new try_lift_grace recovery operation

From: Jeff Layton <jlayton@xxxxxxxxxx>

When running in a clustered environment, we can't just lift the grace
period once the local machine is ready. We must instead wait until no
other cluster nodes still need it.

Add a new try_lift_grace op, and use it to do extra vetting before
allowing the local grace period to be lifted. If the op returns true
(or the backend doesn't implement one), we can go ahead and lift the
grace period.
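
For illustration, a minimal sketch of what a clustered backend's op
might look like. Only the bool (*try_lift_grace)(void) signature comes
from this patch; the cluster_nodes_needing_grace() helper and its
shared-state details are hypothetical stand-ins:

	#include <stdbool.h>

	/*
	 * Hypothetical stub: a real backend would consult shared
	 * cluster state (e.g. a per-node "still needs grace" flag)
	 * rather than return a constant.
	 */
	static int cluster_nodes_needing_grace(void)
	{
		return 0;
	}

	static bool cluster_try_lift_grace(void)
	{
		/* Allow the lift only once no node still needs grace. */
		return cluster_nodes_needing_grace() == 0;
	}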

Change-Id: Ic8060a083ac9d8581d78357ab7f1351793625264
Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
---
 src/SAL/nfs4_recovery.c     | 16 +++++++++++-----
 src/include/sal_functions.h |  2 ++
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/src/SAL/nfs4_recovery.c b/src/SAL/nfs4_recovery.c
index f88d1d187f45..120819e621c9 100644
--- a/src/SAL/nfs4_recovery.c
+++ b/src/SAL/nfs4_recovery.c
@@ -204,6 +204,7 @@ void nfs_try_lift_grace(void)
 	int32_t rc_count = 0;
 	time_t current = atomic_fetch_time_t(&current_grace);
 
+	/* Already lifted? Just return */
 	if (!current)
 		return;
 
@@ -221,13 +222,18 @@ void nfs_try_lift_grace(void)
 					time(NULL));
 
 	/*
-	 * Can we lift the grace period now? If so, take the grace_mutex and
-	 * try to do it.
+	 * Can we lift the grace period now? Clustered backends may need
+	 * extra checks before they can do so. If that is the case, then take
+	 * the grace_mutex and try to do it. If the backend does not implement
+	 * a try_lift_grace operation, then we assume it's always ok.
 	 */
 	if (!in_grace) {
-		PTHREAD_MUTEX_lock(&grace_mutex);
-		nfs_lift_grace_locked(current);
-		PTHREAD_MUTEX_unlock(&grace_mutex);
+		if (!recovery_backend->try_lift_grace ||
+		     recovery_backend->try_lift_grace()) {
+			PTHREAD_MUTEX_lock(&grace_mutex);
+			nfs_lift_grace_locked(current);
+			PTHREAD_MUTEX_unlock(&grace_mutex);
+		}
 	}
 }
 
diff --git a/src/include/sal_functions.h b/src/include/sal_functions.h
index 7563b021af22..7e30e51eeabf 100644
--- a/src/include/sal_functions.h
+++ b/src/include/sal_functions.h
@@ -975,6 +975,7 @@ void blocked_lock_polling(struct fridgethr_context *ctx);
 
 void nfs_start_grace(nfs_grace_start_t *gsp);
 bool nfs_in_grace(void);
+bool simple_try_lift_grace(void);
 void nfs_try_lift_grace(void);
 void nfs4_add_clid(nfs_client_id_t *);
 void nfs4_rm_clid(nfs_client_id_t *);
@@ -1022,6 +1023,7 @@ struct nfs4_recovery_backend {
 	void (*add_clid)(nfs_client_id_t *);
 	void (*rm_clid)(nfs_client_id_t *);
 	void (*add_revoke_fh)(nfs_client_id_t *, nfs_fh4 *);
+	bool (*try_lift_grace)(void);
 };
 
 void fs_backend_init(struct nfs4_recovery_backend **);
-- 
2.17.0
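
A backend that opts in would publish the op through its ops table; a
sketch under the same hypothetical names as above, assuming the
sal_functions.h struct from this patch is in scope. Backends that leave
the member NULL keep the old behavior, since nfs_try_lift_grace()
treats a missing op as permission to lift:

	static struct nfs4_recovery_backend cluster_backend = {
		/* ...other ops (add_clid, rm_clid, ...) elided... */
		.try_lift_grace = cluster_try_lift_grace,
	};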
