While I don't use glfs_creat myself - I was using it as a reference to some degree - it looks wrong to me on two fronts:

1. A lookup is generated for what is most likely a non-existent file to begin with.

2. An open/create is then attempted based on that lookup result, which leaves room for more race conditions, e.g. an unlink elsewhere between the lookup and the open.

Instead it should probably look like the following:

1. Lookup only to see if the entry is cached - no forced lookup over the network.

2. If it is not in the local cache, issue a create and let the storage xlator resolve the actual presence/absence of the file.

3. If the create returns EEXIST, the API can then do an open if it really wishes (a rough sketch of that ordering is at the end of this mail).

On Mon, Dec 1, 2014 at 10:43 AM, Shyam <srangana@xxxxxxxxxx> wrote:
> On 12/01/2014 07:25 AM, Rudra Siva wrote:
>>
>> The optimizations look good. I found that DHT is where the
>> information for the specific volume lives, so that is working well
>> for me.
>>
>> I'm testing with a 3-brick distributed volume and it gives the
>> correct subvolume to batch up the writes.
>>
>> What gfapi is doing looks a little iffy to me (e.g. glfs_creat) -
>> based on the resolution it does an open or a create. Consider the
>> case where two clients attempt a glfs_creat for the same file - the
>> resolve may indicate failure to both, and they both attempt creation.
>
>
> ... and one of them creates while the other does not. At that point it
> matters what the _flags_ for the create were: if O_EXCL, the one that
> did not create errors out at glfs_creat and the application's request
> is satisfied (as per the flags it sent); if not, both get an open file
> handle based on the flags.
>
> Does this address the concern, or do you see further issues?
>
>
>>
>> Specifically, the code in glfs_creat based on resolve:
>>
>>         if (ret == 0) {
>>                 ret = syncop_open (subvol, &loc, flags, glfd->fd);
>>                 DECODE_SYNCOP_ERR (ret);
>>         } else {
>>                 ret = syncop_create (subvol, &loc, flags, mode, glfd->fd,
>>                                      xattr_req, &iatt);
>>                 DECODE_SYNCOP_ERR (ret);
>>         }
>>
>
> Shyam

--
-Siva
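
For concreteness, here is a minimal sketch of the create-first / open-on-EEXIST
ordering as seen from the application side, using the public gfapi calls
(glfs_creat, glfs_open). The volume name, server address and path below are
placeholders and error handling is trimmed; the header path assumes the
packaged install location. Inside gfapi itself the analogous change would be
to issue syncop_create first and only fall back to syncop_open when it fails
with EEXIST and O_EXCL was not requested.

    /* build with: gcc demo.c -lgfapi */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdlib.h>
    #include <glusterfs/api/glfs.h>

    int
    main (void)
    {
            glfs_t    *fs = glfs_new ("testvol");     /* placeholder volume */
            glfs_fd_t *fd = NULL;

            if (!fs)
                    return EXIT_FAILURE;

            /* placeholder server address/port */
            glfs_set_volfile_server (fs, "tcp", "server1", 24007);
            if (glfs_init (fs) != 0)
                    return EXIT_FAILURE;

            /* Create first; with O_EXCL exactly one racing client wins. */
            fd = glfs_creat (fs, "/dir/file", O_CREAT | O_EXCL | O_WRONLY, 0644);
            if (!fd && errno == EEXIST)
                    /* It already exists - fall back to a plain open (step 3). */
                    fd = glfs_open (fs, "/dir/file", O_WRONLY);

            if (fd)
                    glfs_close (fd);
            glfs_fini (fs);
            return fd ? EXIT_SUCCESS : EXIT_FAILURE;
    }

The point of the sketch is only the ordering: the create is attempted
unconditionally and EEXIST is treated as the signal to open, rather than
deciding between open and create from a prior lookup.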