Re: [PATCH] block: Disable write plugging for zoned block devices

On 7/10/19 12:10 PM, Ming Lei wrote:
> On Tue, Jul 09, 2019 at 02:47:12PM +0000, Damien Le Moal wrote:
>> Hi Ming,
>>
>> On 2019/07/09 23:29, Ming Lei wrote:
>>> On Tue, Jul 09, 2019 at 06:02:19PM +0900, Damien Le Moal wrote:
>>>> Simultaneously writing to a sequential zone of a zoned block device
>>>> from multiple contexts requires mutual exclusion for BIO issuing to
>>>> ensure that writes happen sequentially. However, even for a well
>>>> behaved user correctly implementing such synchronization, BIO plugging
>>>> may interfere and result in BIOs from the different contexts being
>>>> reordered if plugging is done outside of the mutual exclusion section,
>>>> e.g. the plug was started by a function higher in the call chain than
>>>> the function issuing BIOs.
>>>>
>>>>       Context A                           Context B
>>>>
>>>>    | blk_start_plug()
>>>>    | ...
>>>>    | seq_write_zone()
>>>>      | mutex_lock(zone)
>>>>      | submit_bio(bio-0)
>>>>      | submit_bio(bio-1)
>>>>      | mutex_unlock(zone)
>>>>      | return
>>>>    | ------------------------------> | seq_write_zone()
>>>>                                      | mutex_lock(zone)
>>>>                                      | submit_bio(bio-2)
>>>>                                      | mutex_unlock(zone)
>>>>    | <------------------------------ |
>>>>    | blk_finish_plug()
>>>>
>>>> In the above example, despite the mutex synchronization resulting in the
>>>> correct BIO issuing order 0, 1, 2, context A BIOs 0 and 1 end up being
>>>> issued after BIO 2 when the plug is released with blk_finish_plug().
>>>
>>> I am wondering how you guarantee that context B is always run after
>>> context A.
>>
>> My example was a little oversimplified. Think of a file system allocating
>> blocks sequentially and issuing page I/Os for the allocated blocks sequentially.
>> The typical sequence is:
>>
>> mutex_lock(zone)
>> alloc_block_extent(zone)
>> for all blocks in the extent
>> 	submit_bio()
>> mutex_unlock(zone)
>>
>> This way, it does not matter which context gets the lock first, all write BIOs
>> for the zone remain sequential. The problem with plugs as explained above is
> 
> But wrt. the example in the commit log, it does matter which context gets the lock
> first, and it implies that context A has to run seq_write_zone() first,
> because you mentioned bio-2 has to be issued after bio-0 and bio-1.
> 
> If there is 3rd context which is holding the lock, then either context A or
> context B can win in getting the lock first. So looks the zone lock itself
> isn't enough for maintaining the IO order. But that may not be related
> with this patch.

For a raw block device driver, the zone lock is enough to maintain the
sequential write order. This is not visible in my example because it is
too simplistic. My apologies for the confusion.

The reason is that the target sector of any zone write BIO must always
be set to the end sector of the last issued write BIO for the zone. A
more detailed and correct typical sequence for writing to a zone for a
raw block device driver (e.g. a dm target) is:

seq_write_zone() {

	mutex_lock(zone)

	/* bio-0 */
	bio = bio_alloc()
	bio->bi_iter.bi_sector = zone->wp
	zone->wp += bio_sectors(bio)
	submit_bio(bio)

	/* bio-1 */
	bio = bio_alloc()
	bio->bi_iter.bi_sector = zone->wp
	zone->wp += bio_sectors(bio)
	submit_bio(bio)

	...

	mutex_unlock(zone)
}

Doing so, multiple contexts serialized with the zone mutex can keep
writing sequentially, no matter how many BIOs they issue and no matter
the order in which they grab the zone lock. Note that here, the zone
write pointer is a "soft" write pointer, not the actual device-managed
write pointer: the device WP is updated only on completion of the write
commands, so it becomes visible to the host only when the write BIOs
complete. The "soft" write pointer is thus always equal to or ahead of
the device hard WP, and must be re-synced to the hard WP in case of
failed writes.
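
To make the error case a little more concrete, the resync step could
look like this (a rough sketch only, in the same pseudo-code style as
above; zone_report_hard_wp() is a hypothetical helper standing in for a
zone report issued to the device, not an existing API):

seq_write_zone_error(zone) {

	mutex_lock(zone)

	/*
	 * A write failed: the device hard WP indicates how much data
	 * actually landed. Re-sync the soft WP to it so that the next
	 * write BIO starts at the correct sector.
	 */
	zone->wp = zone_report_hard_wp(zone) /* hypothetical helper */

	mutex_unlock(zone)
}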

For a file system, the zone hard WP is used as a starting point for
block allocation. BIO issuing can then simply use the allocated extent
sector directly instead of the zone soft write pointer. The block
allocation code will manage the zone soft WP and do the resync with the
device hard WP in case of write error.
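
As an illustration only (the names here are made up for the example,
not taken from any particular file system), the allocation side could
be as simple as:

alloc_block_extent(zone, nr_sectors) {

	mutex_lock(zone)

	/* The extent starts at the current soft WP... */
	start = zone->wp

	/* ...and the soft WP advances past it */
	zone->wp += nr_sectors

	mutex_unlock(zone)

	/* BIO issuing can use 'start' directly as the target sector */
	return start
}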

> Also seems there is issue with REQ_NOWAIT for zone support, for example,
> context A may see out-of-request and return earlier, however context B
> may get request and move on.

Yes, but context B will move on from the last successfully written
sector, so sequential writes can still go on. It is the responsibility
of the user code to deal with failed writes and to decide how to
recover from them.

If REQ_NOWAIT is used for a BIO and causes submit_bio() to fail
(BLK_QC_T_NONE returned) in one context, that context may either retry
until it succeeds and then increment the soft WP, or bail out without
incrementing the zone soft WP. In both cases, other contexts can simply
resume writing from the still valid soft WP. Any number of methods
exist for dealing with this. It is all the responsibility of the user
(fs or dm), because sequential write issuing must be guaranteed by the
user in the first place. In this regard, the generic block layer is
fine.
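
Sticking to the pseudo-code of my earlier example, the REQ_NOWAIT case
inside the locked section could be handled along these lines (a sketch
only; how the failure is actually detected depends on the caller, e.g.
a BLK_STS_AGAIN completion):

	/* bio target is the current soft WP */
	bio->bi_iter.bi_sector = zone->wp
	bio->bi_opf |= REQ_NOWAIT
	nr_sectors = bio_sectors(bio)

	if (submit_bio(bio) == BLK_QC_T_NONE) {
		/*
		 * No request was available: the BIO was not issued, so
		 * do not advance the soft WP. The caller can retry or
		 * bail out (after unlocking the zone); other contexts
		 * can keep writing from the unchanged soft WP.
		 */
	} else {
		/* The BIO was issued: advance the soft WP past it */
		zone->wp += nr_sectors
	}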

>> that if the plug start/finish is not within the zone lock, reordering can happen
>> for the 2 sequences of BIOs issued by the 2 contexts.
>>
>> We hit this problem with btrfs writepages (page writeback) where plugging is
>> done before the above sequence execution, in the caller function of the page
>> writeback processing, resulting in unaligned write errors.
>>
>>>> To fix this problem, introduce the internal helper function
>>>> blk_mq_plug() to access the current context plug: return the current
>>>> plug only if the target device is not a zoned block device or if the
>>>> BIO to be plugged is not a write operation. Otherwise, ignore the plug
>>>> and return NULL, resulting in writes to zoned block devices never
>>>> being plugged.
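
For reference, the helper described above essentially boils down to the
following (a sketch reconstructed from the description; the exact code
in the patch may differ slightly):

static inline struct blk_plug *blk_mq_plug(struct request_queue *q,
					   struct bio *bio)
{
	/*
	 * For regular block devices or read operations, use the context
	 * plug, which may be NULL if blk_start_plug() was not executed.
	 */
	if (!blk_queue_is_zoned(q) || !op_is_write(bio_op(bio)))
		return current->plug;

	/* Zoned block device write operations: do not plug the BIO */
	return NULL;
}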
>>>
>>> Another candidate approach is to run the following code before
>>> releasing 'zone' lock:
>>>
>>> 	if (current->plug)
>>> 		blk_finish_plug(context->plug)
>>>
>>> Then we can fix zone specific issue in zone code only, and avoid generic
>>> blk-core change for zone issue.
>>
>> Yes indeed, that would work too. But this patch is precisely meant to avoid
>> having to add such code and to simplify implementing support for zoned block
>> devices in existing code. Furthermore, plugging for writes to sequential
>> zones has no real
>> value because mq-deadline will dispatch at most one write per zone. So writes
>> for a single zone tend to accumulate in the scheduler queue, and that creates
>> plenty of opportunities for merging small sequential writes (e.g. file system
>> page BIOs).
>>
>> If you think this patch is really not appropriate, we can still address the
>> problem case by case in the support we add for zoned devices. But again, a
>> generic solution makes things simpler I think.
> 
> OK, then I am fine with this simple generic approach.

Thanks.

Best regards.

-- 
Damien Le Moal
Western Digital Research



