Re: [features/locks] Fetching lock info in lookup

On Thu, Jun 21, 2018 at 7:14 AM, Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:


On Thu, Jun 21, 2018 at 6:55 AM, Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:


On Wed, Jun 20, 2018 at 9:09 PM, Xavi Hernandez <xhernandez@xxxxxxxxxx> wrote:
On Wed, Jun 20, 2018 at 4:29 PM Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:
Krutika,

This patch doesn't seem to be getting counts per domain, like the number of inodelks or entrylks acquired in a domain "xyz". Am I right? If per-domain stats are not available, passing the interested domains in xdata_req would be needed. Any suggestions on that?

We have GLUSTERFS_INODELK_DOM_COUNT. Its value should be the domain name for which we want to know the number of inodelks (the count is returned under GLUSTERFS_INODELK_COUNT, though).

It only exists for inodelk. If you need it for entrylk, it would need to be implemented.
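
For reference, a minimal, untested sketch of that request side, assuming the libglusterfs dict API (dict_new/dict_set_str/dict_unref) and the macros from glusterfs.h; the header paths and the domain name "my-domain" are illustrative:

/* Untested sketch: requesting the inodelk count for one domain. */
#include <glusterfs/glusterfs.h>
#include <glusterfs/dict.h>

static dict_t *
build_inodelk_count_request(void)
{
        dict_t *xdata = dict_new();

        if (!xdata)
                return NULL;

        /* The value is the domain of interest; features/locks returns
         * the count under GLUSTERFS_INODELK_COUNT in the response
         * xdata. */
        if (dict_set_str(xdata, GLUSTERFS_INODELK_DOM_COUNT,
                         "my-domain")) {
                dict_unref(xdata);
                return NULL;
        }

        return xdata;
}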

Yes, I realised that after going through the patch a bit more deeply. Thanks. I'll implement a domain-based entrylk count.

I think I need a dynamic key for responses; otherwise it's difficult to support requests on multiple domains in the same call. Embedding the domain name in the key keeps per-domain results separate. We also need a way to send multiple domains in requests: if EC/AFR is already using the key, there is a high chance of overwriting a previously set request for a different domain. Currently this is not consumed in the lookup path by EC/AFR/Shard (DHT wants this information in the lookup path), so it is not a pressing problem. But we cannot rely on that.

What do you think is the better interface among the following alternatives?

In request path,

1. Separate keys with the domain name embedded - e.g., glusterfs.inodelk.xyz.count. The value is ignored.
2. A single key like GLUSTERFS_INODELK_DOM_COUNT. The value is a string of interested domains separated by a delimiter (which character should we use as the delimiter?)

In response path,
1. Separate keys with the domain name embedded - e.g., glusterfs.inodelk.xyz.count. The value is the total number of locks (granted + blocked).
2. A single key like GLUSTERFS_INODELK_DOM_COUNT. The value is a string of domains and lock counts separated by a delimiter (which character should we use as the delimiter?)

I personally prefer the approach of embedding the domain name in the key, as it avoids string parsing by consumers. Any other approaches you can think of?

Only the first option gives backward compatibility, so better to go with that. In the oVirt and gluster-block cases, new clients can operate while old servers are still in the cluster.
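
To make option 1 concrete, an illustrative standalone snippet of how a consumer could compose such a key; the key format "glusterfs.inodelk.<domain>.count" is the one proposed above, not an existing macro, and the domain name is made up:

/* Illustrative only: composing an option-1 style key with the domain
 * embedded. The consumer rebuilds the exact key it set in the request,
 * so no string parsing of the response is needed. */
#include <stdio.h>

int
main(void)
{
        const char *domain = "dht.layout.heal"; /* hypothetical domain */
        char key[256];

        snprintf(key, sizeof(key), "glusterfs.inodelk.%s.count", domain);

        /* Request: set this key in xdata_req (value ignored).
         * Response: features/locks sets the same key in xdata_rsp with
         * the granted + blocked count for that domain, keeping results
         * for different domains separate. */
        printf("%s\n", key);

        return 0;
}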
 

As of now, the response returned is the number of (granted + blocked) locks. For consumers using write locks, the number of granted locks is always 1

This may not be true if the granted locks are on specific ranges. So it depends on the ranges over which the xlator sends inodelks, I guess.
 
and hence the number of blocked locks can be inferred. But for read-lock consumers this is not possible, as there can be more than one read-lock holder. For the requirement in DHT, we don't need the exact number; we only need to know whether there are any granted locks, which the existing implementation can tell us. So, I am not changing that.

Try to change the existing macros/functions to achieve this so that all fops get this functionality...
 




Xavi


regards,
Raghavendra

On Wed, Jun 20, 2018 at 12:58 PM, Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:


On Wed, Jun 20, 2018 at 12:06 PM, Krutika Dhananjay <kdhananj@xxxxxxxxxx> wrote:
We already have a way to get the inodelk and entrylk counts from a bunch of fops, introduced in http://review.gluster.org/10880.
Can you check if you can make use of this feature?

Thanks, Krutika. Yes, this feature meets DHT's requirement. We might need a GLUSTERFS_PARENT_INODELK, but that can easily be added along the lines of the other counts. If necessary, I'll send a patch to implement GLUSTERFS_PARENT_INODELK.


-Krutika


On Wed, Jun 20, 2018 at 9:17 AM, Amar Tumballi <atumball@xxxxxxxxxx> wrote:


On Wed, Jun 20, 2018 at 9:06 AM, Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:
All,

We have a requirement in DHT [1] to query the number of locks granted on an inode in the lookup fop. I am planning to use xdata_req in lookup to pass down the relevant arguments for this query. I am proposing the following signature:

In the lookup request path, the following key-value pairs will be passed in xdata_req (a rough sketch of filling them follows the response format below):
* "glusterfs.lock.type"
    - values can be "glusterfs.posix", "glusterfs.inodelk", "glusterfs.entrylk"
* If the value of "glusterfs.lock.type" is "glusterfs.entrylk", then the basename is passed in xdata_req as the value for the key "glusterfs.entrylk.basename"
* The key "glusterfs.lock-on?" will differentiate whether the lock information is for the current inode ("glusterfs.current-inode") or the parent inode ("glusterfs.parent-inode"). For a nameless lookup, "glusterfs.parent-inode" is invalid.
* "glusterfs.blocked-locks" - Information should be limited to blocked locks.
* "glusterfs.granted-locks" - Information should be limited to granted locks.
* If necessary, other information about granted and blocked locks can be added. Since there is no requirement for now, I am not adding these keys.

The response dictionary will have information in the following format:
* "glusterfs.entrylk.<gfid>.<basename>.granted-locks" - number of granted entrylks on inode "gfid" with "basename" (usually this value will be either 0 or 1 unless we introduce read/write lock semantics).
* "glusterfs.inodelk.<gfid>.granted-locks" - number of granted inodelks on "basename"

Thoughts?


I personally feel it is good to get as much information as possible in lookup, as it helps all translators take high-level decisions better. So, the very broad answer would be: go for it. The main reason xdata is provided in all fops is to do this kind of extra information fetching/overloading anyway.

As you have clearly documented the need, that makes it even easier to review and to document it with the commit. So, all for it.

Regards,
Amar
 










--
Pranith
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-devel
