Re: [PATCH v2 01/16] FS: Added demand paging markers to filesystem

On Mon, May 7, 2012 at 5:01 AM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> On Thu, May 03, 2012 at 07:53:00PM +0530, Venkatraman S wrote:
>> From: Ilan Smith <ilan.smith@xxxxxxxxxxx>
>>
>> Add attribute to identify demand paging requests.
>> Mark readpages with demand paging attribute.
>>
>> Signed-off-by: Ilan Smith <ilan.smith@xxxxxxxxxxx>
>> Signed-off-by: Alex Lemberg <alex.lemberg@xxxxxxxxxxx>
>> Signed-off-by: Venkatraman S <svenkatr@xxxxxx>
>> ---
>>  fs/mpage.c                |    2 ++
>>  include/linux/bio.h       |    7 +++++++
>>  include/linux/blk_types.h |    2 ++
>>  3 files changed, 11 insertions(+)
>>
>> diff --git a/fs/mpage.c b/fs/mpage.c
>> index 0face1c..8b144f5 100644
>> --- a/fs/mpage.c
>> +++ b/fs/mpage.c
>> @@ -386,6 +386,8 @@ mpage_readpages(struct address_space *mapping, struct list_head *pages,
>>                                       &last_block_in_bio, &map_bh,
>>                                       &first_logical_block,
>>                                       get_block);
>> +                     if (bio)
>> +                             bio->bi_rw |= REQ_RW_DMPG;
>
> Have you thought about the potential for DOSing a machine
> with this? That is, user data reads can now preempt writes of any
> kind, effectively stalling writeback and memory reclaim which will
> lead to OOM situations. Or, alternatively, journal flushing will get
> stalled and no new modifications can take place until the read
> stream stops.

This feature doesn't change the I/O scheduler's ability to balance
read vs. write requests, or its handling of requests from the various
per-process queues (CFQ).

Also, for block devices which don't implement the ability to preempt
(including older MMC devices which lack this feature), the behaviour
falls back to the status quo: the read waits for the outstanding write
requests to complete before it is issued.
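
To make the fallback concrete, here is a rough driver-side sketch of
that decision. All identifiers below (dmpg_issue_read, ongoing_write,
MMC_CAP_WRITE_PREEMPT, mmc_abort_write, mmc_wait_for_write,
mmc_start_req) are invented for illustration and are not the names
used in this series:

        /* Illustrative sketch only -- not the actual patch code. */
        static void dmpg_issue_read(struct mmc_host *host,
                                    struct mmc_request *read_mrq)
        {
                if (host->ongoing_write &&
                    (host->caps & MMC_CAP_WRITE_PREEMPT)) {
                        /* Device can preempt an in-flight write
                         * (e.g. eMMC HPI): abort it and requeue
                         * the remainder for later. */
                        mmc_abort_write(host);
                } else if (host->ongoing_write) {
                        /* No preemption support: same behaviour as
                         * today, wait for the write to complete. */
                        mmc_wait_for_write(host);
                }
                mmc_start_req(host, read_mrq);
        }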

On low-end flash devices, some requests can take much longer than
normal because background device maintenance (e.g. flash erase /
reclaim procedures) kicks in in the context of an ongoing write,
stalling it by several orders of magnitude.

This implementation (see 14/16) has several checks and timers to
ensure that the abort is not triggered very often. In my tests, where
I usually configure a generous preemption time window, the abort
happens in < 0.1% of cases.
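
As a rough illustration of the kind of check involved (the names and
the window constant here are mine, not the patch's; see 14/16 for the
real logic):

        /* Sketch: only abort a write once it has overrun a generous
         * time window, and never abort the same write twice. */
        static bool dmpg_may_abort_write(struct mmc_host *host)
        {
                unsigned long deadline = host->write_issue_time +
                        msecs_to_jiffies(DMPG_PREEMPT_WINDOW_MS);

                if (time_before(jiffies, deadline))
                        return false;   /* still inside the window */
                if (host->write_was_aborted)
                        return false;   /* already preempted once */
                return true;
        }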


>
> This really seems like functionality that belongs in an IO
> scheduler so that write starvation can be avoided, not in high-level
> data read paths where we have no clue about anything else going on
> in the IO subsystem....

Indeed, the feature is implemented mostly in the low-level device
driver, with minor changes to the elevator. The changes above the
block layer only set attributes and are transparent to its operation.
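
In other words, the upper layers do nothing more than tag the bio; the
flag then travels into the request, where the driver can test it
(assuming the new flag is propagated like the other REQ_* flags).
Roughly, with an illustrative helper that is not part of the series:

        /* Sketch: how a driver might recognise a demand-paging read. */
        static inline bool rq_is_dmpg_read(struct request *rq)
        {
                return (rq->cmd_flags & REQ_RW_DMPG) &&
                       rq_data_dir(rq) == READ;
        }

        /* In the issue path, such a read may then preempt an
         * in-flight write, subject to the checks described above. */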

>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@xxxxxxxxxxxxx

