Re: [PATCH 6/6] fuse: Do not take fuse_conn::lock on fuse_request_send_background()

Hi, Miklos,

should I resend the series with the patch you changed,
or have you already taken it, since it carries your SoB?

Thanks,
Kirill

On 26.09.2018 18:18, Kirill Tkhai wrote:
> On 26.09.2018 15:25, Miklos Szeredi wrote:
>> On Mon, Aug 27, 2018 at 06:29:56PM +0300, Kirill Tkhai wrote:
>>> Currently, we take fc->lock there only to check fc->connected.
>>> But this flag is changed only on connection abort, which is a very
>>> rare operation. So it looks like a good idea to make
>>> fuse_request_send_background() faster, even if fuse_abort_conn()
>>> becomes slower.
>>>
>>> So, we make fuse_request_send_background() lockless and mark the
>>> (fc->connected == 1) region as RCU-protected. The abort function
>>> just uses synchronize_sched() to wait until all pending background
>>> requests are queued, and then performs an ordinary abort.
>>>
>>> Note that synchronize_sched() is used instead of synchronize_rcu(),
>>> since we want to check fc->connected without rcu_dereference()
>>> in fuse_request_send_background() (i.e., to avoid adding memory
>>> barriers to this hot path).
>>
>> Apart from the inaccuracies in the above (the _sched variant is for
>> scheduling and NMI-taking code; the _sched variant requires rcu_dereference()
>> as well; rcu_dereference() does not add barriers; rcu_dereference() is only
>> for pointers, so we can't use it for an integer),
> 
> Writing this I was inspired by expand_fdtable(). Yes, the description is
> confusing, and we don't need rcu_dereference(), since we do not touch memory
> pointed to by an __rcu pointer; we have no pointer at all. synchronize_sched()
> guarantees:
> 
>   On systems with more than one CPU, when synchronize_sched() returns,
>   each CPU is guaranteed to have executed a full memory barrier since the
>   end of its last RCU-sched read-side critical section whose beginning
>   preceded the call to synchronize_sched().
> 
> (and rcu_dereference() unfolds into smp_read_barrier_depends(), which is
>  what I meant by added barriers)
> 
> But it does not matter much. I'm OK with the patch you updated.
> 
>> wouldn't it be simpler to just use bg_lock
>> for checking ->connected, and lock bg_lock (as well as fc->lock) when setting
>> ->connected?
>>
>> Updated patch below (untested).
> 
> Tested it. Works for me.
> 
> Thanks,
> Kirill
> 
>>
>> ---
>> Subject: fuse: do not take fc->lock in fuse_request_send_background()
>> From: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
>> Date: Mon, 27 Aug 2018 18:29:56 +0300
>>
>> Currently, we take fc->lock there only to check fc->connected.
>> But this flag is changed only on connection abort, which is a very
>> rare operation.
>>
>> Signed-off-by: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
>> Signed-off-by: Miklos Szeredi <mszeredi@xxxxxxxxxx>
>> ---
>>  fs/fuse/dev.c    |   46 +++++++++++++++++++++++-----------------------
>>  fs/fuse/file.c   |    4 +++-
>>  fs/fuse/fuse_i.h |    4 +---
>>  3 files changed, 27 insertions(+), 27 deletions(-)
>>
>> --- a/fs/fuse/dev.c
>> +++ b/fs/fuse/dev.c
>> @@ -574,42 +574,38 @@ ssize_t fuse_simple_request(struct fuse_
>>  	return ret;
>>  }
>>  
>> -/*
>> - * Called under fc->lock
>> - *
>> - * fc->connected must have been checked previously
>> - */
>> -void fuse_request_send_background_nocheck(struct fuse_conn *fc,
>> -					  struct fuse_req *req)
>> +bool fuse_request_queue_background(struct fuse_conn *fc, struct fuse_req *req)
>>  {
>> -	BUG_ON(!test_bit(FR_BACKGROUND, &req->flags));
>> +	bool queued = false;
>> +
>> +	WARN_ON(!test_bit(FR_BACKGROUND, &req->flags));
>>  	if (!test_bit(FR_WAITING, &req->flags)) {
>>  		__set_bit(FR_WAITING, &req->flags);
>>  		atomic_inc(&fc->num_waiting);
>>  	}
>>  	__set_bit(FR_ISREPLY, &req->flags);
>>  	spin_lock(&fc->bg_lock);
>> -	fc->num_background++;
>> -	if (fc->num_background == fc->max_background)
>> -		fc->blocked = 1;
>> -	if (fc->num_background == fc->congestion_threshold && fc->sb) {
>> -		set_bdi_congested(fc->sb->s_bdi, BLK_RW_SYNC);
>> -		set_bdi_congested(fc->sb->s_bdi, BLK_RW_ASYNC);
>> +	if (likely(fc->connected)) {
>> +		fc->num_background++;
>> +		if (fc->num_background == fc->max_background)
>> +			fc->blocked = 1;
>> +		if (fc->num_background == fc->congestion_threshold && fc->sb) {
>> +			set_bdi_congested(fc->sb->s_bdi, BLK_RW_SYNC);
>> +			set_bdi_congested(fc->sb->s_bdi, BLK_RW_ASYNC);
>> +		}
>> +		list_add_tail(&req->list, &fc->bg_queue);
>> +		flush_bg_queue(fc);
>> +		queued = true;
>>  	}
>> -	list_add_tail(&req->list, &fc->bg_queue);
>> -	flush_bg_queue(fc);
>>  	spin_unlock(&fc->bg_lock);
>> +
>> +	return queued;
>>  }
>>  
>>  void fuse_request_send_background(struct fuse_conn *fc, struct fuse_req *req)
>>  {
>> -	BUG_ON(!req->end);
>> -	spin_lock(&fc->lock);
>> -	if (fc->connected) {
>> -		fuse_request_send_background_nocheck(fc, req);
>> -		spin_unlock(&fc->lock);
>> -	} else {
>> -		spin_unlock(&fc->lock);
>> +	WARN_ON(!req->end);
>> +	if (!fuse_request_queue_background(fc, req)) {
>>  		req->out.h.error = -ENOTCONN;
>>  		req->end(fc, req);
>>  		fuse_put_request(fc, req);
>> @@ -2112,7 +2108,11 @@ void fuse_abort_conn(struct fuse_conn *f
>>  		struct fuse_req *req, *next;
>>  		LIST_HEAD(to_end);
>>  
>> +		/* Background queuing checks fc->connected under bg_lock */
>> +		spin_lock(&fc->bg_lock);
>>  		fc->connected = 0;
>> +		spin_unlock(&fc->bg_lock);
>> +
>>  		fc->aborted = is_abort;
>>  		fuse_set_initialized(fc);
>>  		list_for_each_entry(fud, &fc->devices, entry) {
>> --- a/fs/fuse/fuse_i.h
>> +++ b/fs/fuse/fuse_i.h
>> @@ -863,9 +863,7 @@ ssize_t fuse_simple_request(struct fuse_
>>   * Send a request in the background
>>   */
>>  void fuse_request_send_background(struct fuse_conn *fc, struct fuse_req *req);
>> -
>> -void fuse_request_send_background_nocheck(struct fuse_conn *fc,
>> -					  struct fuse_req *req);
>> +bool fuse_request_queue_background(struct fuse_conn *fc, struct fuse_req *req);
>>  
>>  /* Abort all requests */
>>  void fuse_abort_conn(struct fuse_conn *fc, bool is_abort);
>> --- a/fs/fuse/file.c
>> +++ b/fs/fuse/file.c
>> @@ -1487,6 +1487,7 @@ __acquires(fc->lock)
>>  	struct fuse_inode *fi = get_fuse_inode(req->inode);
>>  	struct fuse_write_in *inarg = &req->misc.write.in;
>>  	__u64 data_size = req->num_pages * PAGE_SIZE;
>> +	bool queued;
>>  
>>  	if (!fc->connected)
>>  		goto out_free;
>> @@ -1502,7 +1503,8 @@ __acquires(fc->lock)
>>  
>>  	req->in.args[1].size = inarg->size;
>>  	fi->writectr++;
>> -	fuse_request_send_background_nocheck(fc, req);
>> +	queued = fuse_request_queue_background(fc, req);
>> +	WARN_ON(!queued);
>>  	return;
>>  
>>   out_free:
>>


