Hi,
In certain scenarios (especially in highly available environments), the
application may have to fail over and connect to a different glusterFS
client while I/O is in progress. In such cases, until the ping timer
expires and the glusterFS server cleans up the locks held by the older
glusterFS client, the application will not be able to reclaim its lost
locks. To avoid this, we need support in Gluster to let clients reclaim
their existing locks, provided the lkowner and the lock range match.
One of the applications which shall benefit from this support is
NFS-Ganesha: NFS clients try to reclaim their locks post server reboot.
I have made the relevant changes (WIP) on the server side to have this
support [1]. The changes include -
* A new CLI option, "features.locks-reclaim-lock", is provided to enable
this support (see the example after this list).
* Assuming the below is done on the client side (gfapi) - TODO:
While retrying the lock request, the application has to notify the
glusterFS client that it is a reclaim request. On receiving such a
request, the client should set a boolean "reclaim-lock" in the xdata
passed with the lock request (a rough sketch follows the list).
* On the server side -
- A new field 'reclaim' is added to 'posix_lock_t' to note whether the
lock is to be reclaimed.
- While processing the LOCK fop, if "reclaim-lock" is set in the xdata
received, the reclaim field is enabled in the new posix lock created.
- While checking for conflicting locks (in 'same_owner()'), if the
reclaim field is set, the comparison is done on the lkowner and the lock
range instead of on both the lkowner and the client UID (a sketch of
this check follows the list).
- It then falls through '__insert_and_merge', and the old lock is
updated with the details of the newly created lock (along with the
client details).
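
For reference, enabling the option would look something like the usual
volume set command below (the option name is from the patch; the exact
name or default may still change as the patch evolves):

    # gluster volume set <volname> features.locks-reclaim-lock on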
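
To illustrate the client-side part mentioned above, here is a minimal
sketch of how the glusterFS client could flag a reclaim request. Only
the "reclaim-lock" xdata key comes from the patch; everything else
(variable names, error labels, where exactly this sits in the code path)
is a placeholder and not the final gfapi API:

    dict_t *xdata = NULL;
    int     ret   = -1;

    /* Mark the lock request as a reclaim before winding the lock fop,
     * so that the locks translator can relax its owner check. */
    xdata = dict_new ();
    if (!xdata)
            goto out;

    ret = dict_set_int32 (xdata, "reclaim-lock", 1);
    if (ret)
            goto out;

    /* ... wind the lock fop with this xdata attached ... */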
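
And a rough sketch of the relaxed conflict check described in the
'same_owner()' sub-point (the actual change is in [1]; the field names
below assume the existing posix_lock_t layout - 'owner', 'client_uid',
'fl_start', 'fl_end' - plus the new 'reclaim' flag):

    /* Inside the same_owner()-style comparison of two posix locks:
     * for a reclaim request, match on lkowner and lock range only;
     * otherwise keep the usual lkowner + client UID comparison. */
    if (l1->reclaim || l2->reclaim)
            return (is_same_lkowner (&l1->owner, &l2->owner) &&
                    (l1->fl_start == l2->fl_start) &&
                    (l1->fl_end == l2->fl_end));

    return (is_same_lkowner (&l1->owner, &l2->owner) &&
            !strcmp (l1->client_uid, l2->client_uid));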
For client-side support, I am thinking we could integrate with the new
lock API being introduced as part of mandatory lock support in gfapi [2].
Kindly take a look and provide your comments/suggestions.
The changes seemed minimal, so I haven't added this as a 3.9 release
feature. But if you feel it is a feature candidate, please let me know
and I shall open up a feature page.
Thanks,
Soumya
[1] http://review.gluster.org/#/c/14986/
[2] http://review.gluster.org/11177