On 01/15/2015 10:53 PM, Xavier Hernandez wrote:
Hi,
Currently, eager locking is implemented by checking the open-fd-count
special xattr on each write. If there is more than one open on the
same file, eager locking is disabled to avoid starvation.
This works quite well for file writes, but it makes eager locking
unusable for other request types that do not involve an open fd (in
fact, this method only applies to writes on regular files, not to
reads or directory operations). This can cause a performance problem
for other operations, such as metadata operations.
To be able to use eager locking for other purposes, what do you think
about this proposal:
Instead of implementing open-fd-count in the posix xlator, do
something similar in the locks xlator. The difference would be that
the locks xlator can use its pending-lock information to determine
whether other processes are waiting for a resource. If so, it sets a
flag in the cbk xdata to let higher-level xlators know that they
should not use eager locking (this would only be done when requested
via xdata).
I think this provides a more precise way to avoid starvation while
maximizing performance, and it can be used for any request, even one
that does not depend on an fd.
Another advantage is that if a file has been opened multiple times,
but always from the same glusterfs client, that client could use a
single inodelk to manage all of the accesses, without needing to
release the lock. The current implementation in the posix xlator
cannot differentiate between opens from the same client and opens
from different clients.
What do you think?
I like the idea. So basically we can propagate the list_empty
information of the 'blocking_locks' list. And when sending locks, we
need to use an lk-owner based on the gfid, so that locks from the same
client (i.e. lk-owner + transport) are granted irrespective of
conflicting locks. The respective xlators need to make sure to order
the fops so that they don't step on each other within a single
process. This can be used for entry locks as well.
Pranith
Xavi
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-devel