Re: Upcalls Infrastructure

> >
> > - Is there a new connection from glusterfsd (upcall xlator) to
> >    a client accessing a file? If so, how does the upcall xlator reuse
> >    connections when the same client accesses multiple files, or does it?
> >
> No. We are using the same connection that the client initiates to
> send in fops. Thanks for initially pointing me to the 'client_t'
> structure. As these connection details are available only in the server
> xlator, I am passing them to the upcall xlator by storing them in
> 'frame->root->client'.
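To make sure I am reading that right, here is a minimal self-contained
model of the idea (the struct and field names below are placeholders of
mine, not the actual Gluster types):

#include <stdio.h>

struct client {                 /* stand-in for client_t               */
        const char *client_uid; /* unique id of the connection         */
        int         conn_fd;    /* transport the client itself opened  */
};

struct call_root {
        struct client *client;  /* set once by the server xlator       */
};

struct call_frame {
        struct call_root *root; /* shared by all wound frames          */
};

/* upcall xlator: note who performed the fop, reusing the connection
 * the client already holds -- no new connection is created */
static void
upcall_record_access (struct call_frame *frame)
{
        struct client *c = frame->root->client;
        printf ("register upcall for client %s on fd %d\n",
                c->client_uid, c->conn_fd);
}

int
main (void)
{
        struct client     c     = { "gfs-client-1", 42 };
        struct call_root  root  = { &c };
        struct call_frame frame = { &root };

        upcall_record_access (&frame);
        return 0;
}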
> 
> > - In the event of a network separation (i.e, a partition) between a client
> >    and a server, how does the client discover or detect that the server
>    has 'freed' up its previously registered upcall notification?
> >
> The rpc connection details of each client are stored based on its
> client-uid. So in case of a network partition, when the client comes back
> online, IMO it re-initiates the connection (along with a new client-uid).
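
For discussion, a rough sketch of how such per-client entries could be
kept, keyed on client-uid (all names are placeholders): a post-partition
reconnect carries a fresh client-uid, so it simply adds a new entry and
the old one is left behind as stale.

#include <stdlib.h>
#include <string.h>
#include <time.h>

struct upcall_client {
        char                 *client_uid;
        time_t                last_seen;  /* refreshed on every fop    */
        struct upcall_client *next;
};

static struct upcall_client *clients;

/* find-or-add the entry for the uid on the current connection */
struct upcall_client *
upcall_client_get (const char *uid)
{
        struct upcall_client *c;

        for (c = clients; c; c = c->next)
                if (strcmp (c->client_uid, uid) == 0)
                        break;

        if (!c) {               /* reconnect => new uid => new entry */
                c = calloc (1, sizeof (*c));
                c->client_uid = strdup (uid);
                c->next = clients;
                clients = c;
        }
        c->last_seen = time (NULL);
        return c;
}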

How would a client discover that the server has purged its upcall entries?
For instance, a client could assume that the server would notify it about
changes as before (while the server has, in fact, purged the client's upcall
entries) and wrongly conclude that it still holds the lease/lock. How would
you avoid that?

> Please correct me if that's not the case. So there will be new entries
> created/added in this xlator. However, we still need to decide on how to
> clean up the timed-out and stale entries:
> 	* either clean up the entries as and when we find an expired or
> stale entry (in case a notification fails),
> 	* or spawn a new thread which periodically scans through this
> list and cleans up those entries.
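
A sketch of the second option, a dedicated reaper thread. It assumes the
placeholder upcall_client struct and 'clients' list from the sketch above
live in the same file (and upcall_client_get() would then need to take
clients_lock too); the interval/timeout values are purely illustrative.

#include <pthread.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define REAPER_INTERVAL 60      /* seconds between scans                  */
#define ENTRY_TIMEOUT   120     /* idle seconds before an entry is reaped */

static pthread_mutex_t clients_lock = PTHREAD_MUTEX_INITIALIZER;

static void *
upcall_reaper (void *arg)
{
        (void) arg;
        for (;;) {
                sleep (REAPER_INTERVAL);

                pthread_mutex_lock (&clients_lock);
                struct upcall_client **pp = &clients;
                while (*pp) {
                        if (time (NULL) - (*pp)->last_seen > ENTRY_TIMEOUT) {
                                struct upcall_client *stale = *pp;
                                *pp = stale->next;      /* unlink stale entry */
                                free (stale->client_uid);
                                free (stale);
                        } else {
                                pp = &(*pp)->next;
                        }
                }
                pthread_mutex_unlock (&clients_lock);
        }
        return NULL;
}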

There are a couple of aspects to resource cleanup in this context.
1) Time of cleanup; e.g., on expiry of a timer.
2) Order of cleanup; this involves clearly establishing the relationships
   among the inode, the upcall entry and the client_t(s). We should document this.
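
On 2), one possible shape for that relationship: the upcall entry holds a
reference on the inode and one per interested client_t, and teardown runs
in reverse order of those references. The types below are illustrative
stand-ins, not the real inode_t/client_t refcounting.

#include <stdlib.h>

struct inode_stub  { int refs; };       /* stand-in for inode_t  */
struct client_stub { int refs; };       /* stand-in for client_t */

struct upcall_client_ref {
        struct client_stub       *client;
        struct upcall_client_ref *next;
};

struct upcall_entry {
        struct inode_stub        *inode;   /* entry holds an inode ref  */
        struct upcall_client_ref *clients; /* and one ref per client_t  */
};

static void client_unref (struct client_stub *c) { c->refs--; }
static void inode_unref  (struct inode_stub  *i) { i->refs--; }

/* tear down in the reverse order of the references the entry holds:
 * per-client refs first, then the inode ref, then the entry itself */
static void
upcall_entry_destroy (struct upcall_entry *e)
{
        while (e->clients) {
                struct upcall_client_ref *next = e->clients->next;
                client_unref (e->clients->client);
                free (e->clients);
                e->clients = next;
        }
        inode_unref (e->inode);
        free (e);
}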