Re: [PATCH] null_blk: Fix scheduling in atomic with zoned mode

On 2020/11/06 22:55, Christoph Hellwig wrote:
> On Fri, Nov 06, 2020 at 08:01:41PM +0900, Damien Le Moal wrote:
>> Commit aa1c09cb65e2 ("null_blk: Fix locking in zoned mode") changed
>> zone locking to use the potentially sleeping wait_on_bit_io()
>> function. This is acceptable when memory backing is enabled, as the
>> device queue is in that case marked as blocking, but it triggers
>> scheduling while in atomic context when memory backing is disabled.
>>
>> Fix this by relying solely on the device zone spinlock to protect
>> zone information, without temporarily releasing this lock around the
>> execution of null_process_cmd() in null_zone_write(). This is safe
>> because, when memory backing is disabled, command processing does not
>> block and the memory backing lock nullb->lock is unused. This also
>> avoids the overhead of having to mark a zoned null_blk device queue
>> as blocking when memory backing is unused.
>>
>> This patch also adds comments to the zone locking code to explain the
>> unusual locking scheme.
>>
>> Reported-by: kernel test robot <lkp@xxxxxxxxx>
>> Fixes: aa1c09cb65e2 ("null_blk: Fix locking in zoned mode")
>> Cc: stable@xxxxxxxxxxxxxxx
>> Signed-off-by: Damien Le Moal <damien.lemoal@xxxxxxx>
>> ---
>>  drivers/block/null_blk.h       |  2 +-
>>  drivers/block/null_blk_zoned.c | 47 ++++++++++++++++++++++------------
>>  2 files changed, 32 insertions(+), 17 deletions(-)
>>
>> diff --git a/drivers/block/null_blk.h b/drivers/block/null_blk.h
>> index cfd00ad40355..c24d9b5ad81a 100644
>> --- a/drivers/block/null_blk.h
>> +++ b/drivers/block/null_blk.h
>> @@ -47,7 +47,7 @@ struct nullb_device {
>>  	unsigned int nr_zones_closed;
>>  	struct blk_zone *zones;
>>  	sector_t zone_size_sects;
>> -	spinlock_t zone_dev_lock;
>> +	spinlock_t zone_lock;
>>  	unsigned long *zone_locks;
>>  
>>  	unsigned long size; /* device size in MB */
>> diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
>> index 8775acbb4f8f..beb34b4f76b0 100644
>> --- a/drivers/block/null_blk_zoned.c
>> +++ b/drivers/block/null_blk_zoned.c
>> @@ -46,11 +46,20 @@ int null_init_zoned_dev(struct nullb_device *dev, struct request_queue *q)
>>  	if (!dev->zones)
>>  		return -ENOMEM;
>>  
>> -	spin_lock_init(&dev->zone_dev_lock);
>> -	dev->zone_locks = bitmap_zalloc(dev->nr_zones, GFP_KERNEL);
>> -	if (!dev->zone_locks) {
>> -		kvfree(dev->zones);
>> -		return -ENOMEM;
>> +	/*
>> +	 * With memory backing, the zone_lock spinlock needs to be temporarily
>> +	 * released to avoid scheduling in atomic context. To guarantee zone
>> +	 * information protection, use a bitmap to lock zones with
>> +	 * wait_on_bit_lock_io(). Sleeping on the lock is OK as memory backing
>> +	 * implies that the queue is marked with BLK_MQ_F_BLOCKING.
>> +	 */
>> +	spin_lock_init(&dev->zone_lock);
>> +	if (dev->memory_backed) {
>> +		dev->zone_locks = bitmap_zalloc(dev->nr_zones, GFP_KERNEL);
>> +		if (!dev->zone_locks) {
>> +			kvfree(dev->zones);
>> +			return -ENOMEM;
>> +		}
>>  	}
>>  
>>  	if (dev->zone_nr_conv >= dev->nr_zones) {
>> @@ -137,12 +146,17 @@ void null_free_zoned_dev(struct nullb_device *dev)
>>  
>>  static inline void null_lock_zone(struct nullb_device *dev, unsigned int zno)
>>  {
>> -	wait_on_bit_lock_io(dev->zone_locks, zno, TASK_UNINTERRUPTIBLE);
>> +	if (dev->memory_backed)
>> +		wait_on_bit_lock_io(dev->zone_locks, zno, TASK_UNINTERRUPTIBLE);
>> +	spin_lock_irq(&dev->zone_lock);
>>  }
>>  
>>  static inline void null_unlock_zone(struct nullb_device *dev, unsigned int zno)
>>  {
>> -	clear_and_wake_up_bit(zno, dev->zone_locks);
>> +	spin_unlock_irq(&dev->zone_lock);
>> +
>> +	if (dev->memory_backed)
>> +		clear_and_wake_up_bit(zno, dev->zone_locks);
>>  }
>>  
>>  int null_report_zones(struct gendisk *disk, sector_t sector,
>> @@ -322,7 +336,6 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
>>  		return null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
>>  
>>  	null_lock_zone(dev, zno);
>> -	spin_lock(&dev->zone_dev_lock);
>>  
>>  	switch (zone->cond) {
>>  	case BLK_ZONE_COND_FULL:
>> @@ -375,9 +388,17 @@ static blk_status_t null_zone_write(struct nullb_cmd *cmd, sector_t sector,
>>  	if (zone->cond != BLK_ZONE_COND_EXP_OPEN)
>>  		zone->cond = BLK_ZONE_COND_IMP_OPEN;
>>  
>> -	spin_unlock(&dev->zone_dev_lock);
>> +	/*
>> +	 * Memory backing allocation may sleep: release the zone_lock spinlock
>> +	 * to avoid scheduling in atomic context. Zone operation atomicity is
>> +	 * still guaranteed through the zone_locks bitmap.
>> +	 */
>> +	if (dev->memory_backed)
>> +		spin_unlock_irq(&dev->zone_lock);
>>  	ret = null_process_cmd(cmd, REQ_OP_WRITE, sector, nr_sectors);
>> -	spin_lock(&dev->zone_dev_lock);
>> +	if (dev->memory_backed)
>> +		spin_lock_irq(&dev->zone_lock);
> 
> Do we actually need to take zone_lock at all for the memory_backed
> case?  Should the !memory_backed case just use a per-zone spinlock?

For your first question, yes we do: the per-zone wait_on_bit_io() lock
serializes operations on a single zone, but we still need a device-level
zone lock to protect the counters (implicit open, explicit open and
closed) used for zone resource management.
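
To make the two levels concrete, here is a minimal sketch of the scheme
this patch implements. It simply mirrors the null_lock_zone() and
null_unlock_zone() helpers above; example_zone_op() is a made-up caller,
not a function in the driver:

        static void example_zone_op(struct nullb_device *dev,
                                    unsigned int zno)
        {
                /*
                 * Level 1: serialize operations on this one zone. This
                 * may sleep, which is fine only because memory backing
                 * implies a BLK_MQ_F_BLOCKING queue.
                 */
                if (dev->memory_backed)
                        wait_on_bit_lock_io(dev->zone_locks, zno,
                                            TASK_UNINTERRUPTIBLE);

                /*
                 * Level 2: protect the device-wide zone resource
                 * counters (nr_zones_imp_open, nr_zones_exp_open,
                 * nr_zones_closed) shared by all zones.
                 */
                spin_lock_irq(&dev->zone_lock);
                /* ... change zone state, update shared counters ... */
                spin_unlock_irq(&dev->zone_lock);

                if (dev->memory_backed)
                        clear_and_wake_up_bit(zno, dev->zone_locks);
        }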

For the !memory_backed case, I wondered the same, but decided to simply
reuse the device zone spinlock as the single lock for all zones. I am
not sure that a per-zone spinlock would improve performance by much,
given that the lock hold time becomes really tiny without the memory
copy. I will try it and see how performance changes.
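
For reference, such a variant could look roughly like the sketch below.
The zone_spinlocks array is hypothetical (nothing like it exists in the
driver today), and the shared open/closed counters would still need the
device-level lock or atomics, which is part of why I doubt the gain
would be large:

        /*
         * Hypothetical: one spinlock per zone for the !memory_backed
         * case, e.g. a kcalloc(dev->nr_zones, sizeof(spinlock_t), ...)
         * array initialized at device setup.
         */
        static inline void null_lock_zone(struct nullb_device *dev,
                                          unsigned int zno)
        {
                if (dev->memory_backed)
                        wait_on_bit_lock_io(dev->zone_locks, zno,
                                            TASK_UNINTERRUPTIBLE);
                else
                        spin_lock_irq(&dev->zone_spinlocks[zno]);
        }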

That said, I would prefer to add such a change as a separate patch for
the next cycle and keep this patch as a pure bug fix.


-- 
Damien Le Moal
Western Digital Research



