Re: [LSF/MM ATTEND] Online Logical Head Depop and SMR disks chunked writepages

>On 02/22/16 18:56, Damien Le Moal wrote:
>> 2) Write back of dirty pages to SMR block devices:
>>
>> Dirty pages of a block device inode are currently processed using the
>> generic_writepages function, which can be executed simultaneously
>> by multiple contexts (e.g sync, fsync, msync, sync_file_range, etc).
>> Mutual exclusion of the dirty page processing is achieved only at the
>> page level (page lock & page writeback flag), so multiple processes
>> executing a "sync" of overlapping block ranges over the same zone of
>> an SMR disk can cause an out-of-LBA-order sequence of write requests
>> to be sent to the underlying device. On a host-managed SMR disk, where
>> sequential write to disk zones is mandatory, this results in errors,
>> and an application using raw sequential disk write accesses cannot be
>> guaranteed successful completion of its write or fsync requests.
>>
>> Using the zone information attached to the SMR block device queue
>> (introduced by Hannes), calls to the generic_writepages function can
>> be made mutually exclusive on a per zone basis by locking the zones.
>> This guarantees sequential request generation for each zone and avoids
>> write errors without any modification to the generic code implementing
>> generic_writepages.
>>
>> This is but one possible solution for supporting SMR host-managed
>> devices without any major rewrite of page cache management and
>> write-back processing. The opinion of the audience regarding this
>> solution and discussing other potential solutions would be greatly
>> appreciated.
>
>Hello Damien,
>
>Is it sufficient to support filesystems like BTRFS on top of SMR drives, 
>or would you also like to see filesystems like ext4 being able to use 
>SMR drives? In the latter case: the behavior of SMR drives differs so 
>significantly from that of other block devices that I'm not sure we 
>should try to support them directly from infrastructure like the page 
>cache. If we look e.g. at NAND SSDs, we see that the characteristics 
>of NAND do not match what filesystems expect (e.g. large erase blocks). 
>That is why every SSD vendor provides an FTL (Flash Translation Layer), 
>either inside the SSD or as a separate software driver. An FTL 
>implements a so-called LFS (log-structured filesystem). From what I know 
>about SMR, this technology also looks suitable for an LFS 
>implementation. Has it already been considered to implement an LFS 
>driver for SMR drives? That would make it possible for any filesystem 
>to access an SMR drive like any other block device. I'm not sure of 
>this, but maybe it will be possible to share some infrastructure with 
>the LightNVM driver (directory drivers/lightnvm in the Linux kernel 
>tree), since that driver in fact implements an FTL.

Hello Bart,


Thank you for your comments.

I totally agree with you that trying to support SMR disks by only modifying
the page cache, so that unmodified standard file systems like BTRFS or ext4
remain operational, is not realistic at best, and more likely simply impossible.
For that kind of use case, as you said, an FTL or a device mapper driver is
much more suitable.

The case I am considering for this discussion is raw block device access
by an application (writes from user space to /dev/sdxx). This is a very likely
use case for high capacity SMR disks with applications such as distributed
object stores / key-value stores.

In this case, write-back of dirty pages in the block device file inode mapping
is handled in fs/block_dev.c using the generic helper function generic_writepages.
This does not guarantee the generation of the sequential write pattern per zone
required by host-managed disks. As I explained, aligning calls of this function
to zone boundaries while locking the zones under write-back solves the problem
simply (implemented and tested). This is of course only one possible solution.
Pushing modifications deeper into the code or providing a
"generic_sequential_writepages" helper function are other potential solutions
that in my opinion are worth discussing, as other types of devices may also
benefit in terms of performance (regular disk drives and SSDs both prefer
sequential writes) and/or see a lighter overhead on an underlying FTL or
device mapper driver.

For a file system, an SMR compliant implementation of a file inode mapping
writepages method should be provided by the file system itself, since the
sequentiality of the write pattern further depends on the file system's
block allocation mechanism.

Note that the goal here is not to hide the sequential write constraint of
SMR disks from applications. The page cache itself (the mapping of the block
device inode) remains unchanged. But the proposed modification guarantees that
a well behaved application writing sequentially to zones through the page cache
will see its sync operations complete successfully.

Best regards.

------------------------
Damien Le Moal, Ph.D.
Sr. Manager, System Software Group, HGST Research,
HGST, a Western Digital company
Damien.LeMoal@xxxxxxxx
(+81) 0466-98-3593 (ext. 513593)
1 kirihara-cho, Fujisawa, 
Kanagawa, 252-0888 Japan
www.hgst.com