On Mon, Dec 14, 2015 at 12:43:08AM -0500, Poornima Gurusiddaiah wrote:
>
> Just to sum up, the proposed solutions were:
> - Use compound fops
> - Use client_t for storing lease_id and lk_owner instead of thread local storage
> - Use glfd and glfs_object to store the lease_id instead of thread local storage
> - Add new APIs that take 2 other arguments.
>
> Using compound fops seems like a good idea, but the compound fops implementation
> in libgfapi is not yet planned for 3.8, and even with compound fops we would
> anyway need new APIs to set the lease ID and lk_owner. The compound fop
> infra can provide some shared storage that is local to all the compound fops
> of the same set.
>
> As there is no one-to-one mapping between client_t and the lease_id/lk_owner,
> it is as good as storing in some local storage and using a mutex to co-ordinate.
>
> glfd is used to store the lease ID and lk_owner in the current patchset; this works
> very well for Samba, but for NFS-Ganesha the glfs_object is shared across clients,
> hence glfs_object cannot be used to store the lease_id and lk_owner.
>
> Adding new APIs makes it clean, but adds a huge maintenance overhead and possible
> code duplication. Replacing the existing APIs means forcing existing applications
> to rewrite even if they are not using the lease or lock feature.
> Since every API unsets the lease ID and lk_owner, there cannot be a stale
> lease_id/lk_owner. Are there any other issues with using pthread local storage?
> If pthread key resource usage is a problem, we could replace all the keys with one
> gluster key and store a struct in that key.
>
> Hence, my suggestion would be to add new APIs which use thread local storage, and
> once compound fops are implemented in libgfapi, use their infra to store lease_id
> and lk_owner instead of thread local storage.

How about adding a compound-FOP structure and API to gfapi already? The
execution can do multiple non-compound FOPs serially.
It gives us the opportunity to prepare for a compound-FOP interface. I think
that is a pretty clean approach, and it makes it possible to have a non-racy
lk_owner API. Once real compound FOPs are possible, we just replace the
internal gfapi implementation, with no need to adjust applications that are
using the new lk_owner API. This would let us implement the top layer of
gfapi once, without the need to add temporary lk_owner functionality.

Thoughts?
Niels

> Regards,
> Poornima
>
> ----- Original Message -----
> > From: "Jeff Darcy" <jdarcy@xxxxxxxxxx>
> > To: "Niels de Vos" <ndevos@xxxxxxxxxx>, "Ira Cooper" <ira@xxxxxxxxxx>
> > Cc: "Gluster Devel" <gluster-devel@xxxxxxxxxxx>
> > Sent: Saturday, December 5, 2015 12:04:29 AM
> > Subject: Re: libgfapi changes to add lk_owner and lease ID
> >
> > On December 4, 2015 at 8:25:10 AM, Niels de Vos (ndevos@xxxxxxxxxx) wrote:
> > > Okay, so you meant to say that client_t "is a horror show" for this
> > > particular use-case (lease_id). It indeed does not sound suitable to use
> > > client_t here.
> > >
> > > I'm not much of a fan of using Thread Local Storage and would prefer to
> > > see a more close-to-atomic way, like my suggestion for compound
> > > procedures in another email in this thread. Got an opinion about that?
> >
> > I'm not a big fan of the thread-local storage approach either. It could
> > work OK if there was a 1:1 mapping between threads and clients, but
> > AFAIK neither Samba nor Ganesha works that way. For anything that
> > doesn't, we're going to be making these calls *frequently* and they're
> > far from free. Adding extra arguments to each call is a pain[1], but it
> > seems preferable in this case.
> >
> > [1] Yet another reason that a control-block API would have been
> > preferable to a purely argument-based API, but I guess that's water
> > under the bridge.
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel