On 12/15/2014 09:01 AM, Krishnan Parthasarathi wrote:
> Here are a few questions that I had after reading the feature
> page.
> - Is there a new connection from glusterfsd (upcall xlator) to
> a client accessing a file? If so, how does the upcall xlator reuse
> connections when the same client accesses multiple files, or does it?
No. We are using the same connection that the client initiates to send
in fops. Thanks for initially pointing me to the 'client_t' structure.
As these connection details are available only in the server xlator, I
am passing them to the upcall xlator by storing them in
'frame->root->client'.
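For context, here is a minimal sketch (not the actual patch) of how a
cbk in the upcall xlator could pick up those details from
frame->root->client; upcall_cache_client() is a hypothetical helper
name invented for this example:

    /* Minimal sketch, assuming the server xlator has already filled
     * in frame->root->client for this call stack. */
    #include "xlator.h"
    #include "client_t.h"

    /* Hypothetical helper: remember (inode, client_uid) so that a
     * later state change on the inode can trigger a notification to
     * this client over the connection it initiated. */
    static void
    upcall_cache_client (xlator_t *this, inode_t *inode,
                         const char *client_uid)
    {
            /* ... insert/update an entry in this xlator's list ... */
    }

    int32_t
    upcall_open_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                     int32_t op_ret, int32_t op_errno, fd_t *fd,
                     dict_t *xdata)
    {
            client_t *client = frame->root->client; /* from server xlator */

            if (op_ret >= 0 && client)
                    upcall_cache_client (this, fd->inode,
                                         client->client_uid);

            STACK_UNWIND_STRICT (open, frame, op_ret, op_errno, fd, xdata);
            return 0;
    }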
> - In the event of a network separation (i.e., a partition) between a client
> and a server, how does the client discover or detect that the server
> has 'freed' up its previously registered upcall notification?
The rpc connection details of each client are stored based on its
client-uid. So in case of a network partition, when the client comes
back online, IMO it re-initiates the connection (along with a new
client-uid). Please correct me if that's not the case. So new entries
will be created/added in this xlator. However, we still need to decide
how to clean up the old timed-out and stale entries:
* either clean up the entries as and when we find an expired or stale
entry (in case a notification fails),
* or spawn a new thread which periodically scans through this list and
cleans up those entries (a rough sketch of this option is below).
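For illustration, a rough sketch of that second option, assuming a
hypothetical upcall_client_t entry type and made-up timeout values;
none of these names are from the actual patches:

    #include <pthread.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>
    #include "list.h"                 /* gluster's list_head macros */

    #define UPCALL_ENTRY_TIMEOUT 120  /* seconds; hypothetical value */
    #define REAPER_INTERVAL      60   /* seconds between scans       */

    /* Hypothetical per-client entry kept by the upcall xlator. */
    typedef struct {
            struct list_head  list;
            char             *client_uid;
            time_t            last_access;
    } upcall_client_t;

    /* List head and lock, assumed to be set up in the xlator's init(). */
    static struct list_head  upcall_clients;
    static pthread_mutex_t   upcall_clients_lock =
            PTHREAD_MUTEX_INITIALIZER;

    /* Option 2: a dedicated thread that periodically walks the client
     * list and frees entries that have timed out or gone stale. */
    static void *
    upcall_reaper (void *arg)
    {
            upcall_client_t *entry = NULL, *tmp = NULL;

            (void) arg;

            for (;;) {
                    time_t now = time (NULL);

                    pthread_mutex_lock (&upcall_clients_lock);
                    list_for_each_entry_safe (entry, tmp,
                                              &upcall_clients, list) {
                            if (now - entry->last_access >
                                UPCALL_ENTRY_TIMEOUT) {
                                    list_del_init (&entry->list);
                                    free (entry->client_uid);
                                    free (entry);
                            }
                    }
                    pthread_mutex_unlock (&upcall_clients_lock);

                    sleep (REAPER_INTERVAL);
            }
            return NULL;
    }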
Thanks,
Soumya
----- Original Message -----
Hi,
This framework has been designed to maintain state in the glusterfsd
process for each of the files being accessed (including info about the
clients accessing those files) and to send notifications to the
respective glusterfs clients in case of any change in that state.
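For illustration only, the per-file state described here could look
roughly like the sketch below; all of these names are made up for this
example and are not from the actual patches:

    #include <pthread.h>
    #include "list.h"                  /* gluster's list_head macros */

    /* Hypothetical sketch of the state kept in glusterfsd for each
     * file being accessed: the file's identity plus the clients that
     * must be notified when that state changes. */
    typedef struct {
            struct list_head  client_list; /* per-client entries (uid,
                                              last access time)        */
            unsigned char     gfid[16];    /* identity of the file     */
            pthread_mutex_t   lock;        /* guards client_list       */
    } upcall_inode_entry_t;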
A few of the (currently identified) use-cases of this infrastructure are:
* Inode Update/Invalidation
- Clients caching inode entries or attributes of a file (e.g.,
md-cache) can make use of this support to update or invalidate those
entries based on the notifications sent by the server when the
attributes of the corresponding files change on the back-end.
* Recall Delegations/Lease-locks
- For lease-locks to be supported, the server should be able to
maintain those lock states and recall them successfully by sending
notifications to the clients in case of any conflicting access
requested by another client.
* Maintain Share Reservations/Locks states
- Along with the lease-lock states mentioned above, we could extend
this framework to support Open Share-reservations/Locks.
In addition to the above, this framework could be extended to add and
maintain any other file states and send callback events when required.
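To make the callback events concrete, here is a purely hypothetical
sketch of what the event types and notification payload could look
like; the actual patches may define these very differently:

    #include <stdint.h>

    /* Hypothetical event set matching the use-cases above; the names
     * are illustrative, not taken from the actual patches. */
    typedef enum {
            UPCALL_EVENT_NULL = 0,
            UPCALL_EVENT_CACHE_INVALIDATION, /* inode update/invalidation */
            UPCALL_EVENT_RECALL_LEASE,       /* recall delegation/lease   */
            UPCALL_EVENT_RECALL_SHARE,       /* share reservation conflict */
    } upcall_event_t;

    /* Payload sent back to a client over the connection it initiated. */
    typedef struct {
            upcall_event_t  event;
            unsigned char   gfid[16]; /* which file the event is about */
            uint32_t        flags;    /* event-specific detail bits    */
    } upcall_notification_t;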
Feature page -
http://www.gluster.org/community/documentation/index.php/Features/Upcall-infrastructure
A PoC of this feature is done. One of the initial consumers of this
support is NFS-Ganesha (a user-mode file server for NFSv3, 4.0, 4.1,
and pNFS, developed by an active open-source community -
https://github.com/nfs-ganesha/nfs-ganesha/wiki).
Note: This support will be turned off by default and enabled only if
required, using a tunable option (currently a Gluster CLI option to
enable NFS-Ganesha, being developed as part of a different feature
that will get its Feature Page announced soon).
Comments and feedback are welcome.
Thanks,
Soumya
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel