Re: [RFC] inode table locking contention reduction experiment

On Wed, Oct 30, 2019 at 4:32 PM Xavi Hernandez <jahernan@xxxxxxxxxx> wrote:
Hi Changwei,

On Tue, Oct 29, 2019 at 7:56 AM Changwei Ge <chge@xxxxxxxxxxxxxxxxx> wrote:
Hi,

I have recently been working on reducing inode_[un]ref() locking contention by
getting rid of the inode table lock and using only the inode lock to protect
the inode REF count. I have already discussed this over a couple of rounds with
several Glusterfs developers via email and Gerrit, and we have a basic shared
understanding of the main logic involved.

Currently, an inode's REF can drop to ZERO and the inode can later be reused by
raising it back to ONE. This is, IMO, why so much inode table work is needed on
every REF/UNREF, and it makes inode_[un]ref(), inode table lookups and dentry
(alias) searches hard to run concurrently.
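
To illustrate the contention, here is a simplified model (the types, field
names and locking below are made up for illustration; they are not the real
GlusterFS code): because REF can cross the ZERO boundary in both directions,
every ref/unref has to take the table lock to move the inode between the
table's lists, which serializes all callers.

#include <pthread.h>

struct toy_table {
    pthread_mutex_t lock;       /* protects the active/lru/purge lists */
    /* ... list heads ... */
};

struct toy_inode {
    struct toy_table *table;
    pthread_mutex_t   lock;     /* protects ref and nlookup */
    unsigned long     ref;      /* references held by gluster code */
    unsigned long     nlookup;  /* lookups remembered by the kernel */
};

static void toy_inode_ref(struct toy_inode *in)
{
    pthread_mutex_lock(&in->table->lock);   /* table lock on every ref */
    pthread_mutex_lock(&in->lock);
    if (in->ref++ == 0) {
        /* 0 -> 1: resurrect, move from the lru list back to active */
    }
    pthread_mutex_unlock(&in->lock);
    pthread_mutex_unlock(&in->table->lock);
}

static void toy_inode_unref(struct toy_inode *in)
{
    pthread_mutex_lock(&in->table->lock);   /* table lock on every unref */
    pthread_mutex_lock(&in->lock);
    if (--in->ref == 0) {
        /* 1 -> 0: park on the lru/purge list, may be destroyed later */
    }
    pthread_mutex_unlock(&in->lock);
    pthread_mutex_unlock(&in->table->lock);
}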

So my question is: in which cases, and how, can we find an inode whose REF is ZERO?

Since Glusterfs stores its inode memory address in the kernel/FUSE layer, can
we conclude that only fuse_ino_to_inode() can bring back a REF=0 inode?

Xavi's answer below provides some insights. At the same time, assuming (for now) that only fuse_ino_to_inode() can bring an inode back from the ref=0 state is a good start.

Yes, when an inode reaches refs = 0, it means that gluster code is not using it anywhere, so it cannot be referenced again unless the kernel sends new requests on the same inode. Once refs=0 and nlookup=0, the inode can be destroyed.
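
In other words (a hedged sketch; the helper name below is hypothetical, not an
existing GlusterFS function): ref == 0 only means no gluster code path
currently holds the inode, while a non-zero nlookup means the kernel can still
come back with the same nodeid, so destruction is safe only when both counters
are zero.

/* hypothetical helper, not part of the GlusterFS API */
static int toy_inode_can_destroy(unsigned long ref, unsigned long nlookup)
{
    return ref == 0 && nlookup == 0;  /* gluster idle AND kernel has forgotten it */
}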

The inode code is quite complex right now and I haven't had time to investigate this further, but I think we could simplify inode management significantly (especially unref) if we add a reference when nlookup becomes > 0 and remove that reference when nlookup becomes 0 again. Maybe with this approach we could avoid the inode table lock in many cases. However, we need to make sure we correctly handle the invalidation logic to keep the inode table size under control.
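
A minimal sketch of that idea, reusing the toy types from the earlier model
(again, the helper names are assumptions, not the real GlusterFS API): let a
non-zero nlookup pin one reference. While the kernel still remembers the
inode, ref can then never fall to ZERO on the normal unref path, so that path
needs no table lock at all; the table is only touched when the kernel forgets
the inode.

static void toy_on_lookup(struct toy_inode *in)   /* kernel learned the inode */
{
    pthread_mutex_lock(&in->lock);
    if (in->nlookup++ == 0)
        in->ref++;                     /* nlookup 0 -> 1 pins one reference */
    pthread_mutex_unlock(&in->lock);
}

static void toy_on_forget(struct toy_inode *in, unsigned long n)  /* FORGET from kernel */
{
    int drop = 0;

    pthread_mutex_lock(&in->lock);
    in->nlookup -= n;
    if (in->nlookup == 0)
        drop = 1;                      /* kernel has forgotten: release the pinned ref */
    pthread_mutex_unlock(&in->lock);

    if (drop)
        toy_inode_unref(in);           /* slow path: may take the table lock and retire */
}

Invalidation would still have to shrink the table by forcing forgets,
otherwise the pinned references keep every kernel-known inode alive
indefinitely.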


My suggestion is: don't wait for a complete solution before posting the patch. Give us a chance to look at work-in-progress patches so we can discuss the code itself. It would help us reach better solutions sooner.

Regards,

Xavi



Thanks,
Changwei
_______________________________________________

Community Meeting Calendar:

APAC Schedule -
Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/118564314

NA/EMEA Schedule -
Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/118564314

Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-devel

