On 3/21/22 9:40 PM, Matthew Wilcox wrote:
> On Wed, Mar 16, 2022 at 09:17:04PM +0800, Jeffle Xu wrote:
>> +#ifdef CONFIG_CACHEFILES_ONDEMAND
>> +	struct xarray	reqs;		/* xarray of pending on-demand requests */
>> +	rwlock_t	reqs_lock;	/* Lock for reqs xarray */
>
> Why do you have a separate rwlock when the xarray already has its own
> spinlock?  This is usually a really bad idea.

Hi,

Thanks for reviewing.

reqs_lock is also used to protect the check of cache->flags. Please
refer to patch 4 [1] of this patchset.

```
+	/*
+	 * Enqueue the pending request.
+	 *
+	 * Stop enqueuing the request when daemon is dying. So we need to
+	 * 1) check cache state, and 2) enqueue request if cache is alive.
+	 *
+	 * The above two ops need to be atomic as a whole. @reqs_lock is used
+	 * here to ensure that. Otherwise, request may be enqueued after xarray
+	 * has been flushed, in which case the orphan request will never be
+	 * completed and thus netfs will hang there forever.
+	 */
+	read_lock(&cache->reqs_lock);
+
+	/* recheck dead state under lock */
+	if (test_bit(CACHEFILES_DEAD, &cache->flags)) {
+		read_unlock(&cache->reqs_lock);
+		ret = -EIO;
+		goto out;
+	}
+
+	xa_lock(xa);
+	ret = __xa_alloc(xa, &id, req, xa_limit_32b, GFP_KERNEL);
+	if (!ret)
+		__xa_set_mark(xa, id, CACHEFILES_REQ_NEW);
+	xa_unlock(xa);
+
+	read_unlock(&cache->reqs_lock);
```

It's mainly used to protect against the xarray flush. Besides, IMHO a
read-write lock should be more performance-friendly, since most cases
are the read side.

[1] https://lkml.org/lkml/2022/3/16/351

--
Thanks,
Jeffle