Re: [PATCH 4/5] block: Add support for bouncing pinned pages

On Mon 13-02-23 01:59:28, Christoph Hellwig wrote:
> Eww.  The block bounce code really needs to go away, so a new user
> makes me very unhappy.
> 
> But independent of that I don't think this is enough anyway.  Just
> copying the data out into a new page in the block layer doesn't solve
> the problem that this page needs to be tracked as dirtied for fs
> accounting.  e.g. every time we write this copy it needs space allocated
> for COW file systems.

Right, I forgot about this in my RFC. My original plan was to not clear the
dirty bit in clear_page_dirty_for_io() even for WB_SYNC_ALL writeback when
we do write back the page, and to indicate this in the return value of
clear_page_dirty_for_io() so that a COW filesystem can keep tracking the
page as dirty.
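To make the idea concrete, here is a minimal userspace model of that
tri-state return value -- this is illustrative only, not kernel code: the
struct, the enum names and the helper are all hypothetical, and in the
kernel the state would live in folio/page flags instead:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of a page's writeback-relevant state. */
struct page_model {
	bool dirty;
	bool pinned;	/* long-term pinned, e.g. via FOLL_LONGTERM */
};

enum cpdfio_ret {
	CPDFIO_CLEAN,		/* was not dirty, nothing to write */
	CPDFIO_CLEARED,		/* dirty bit cleared, write it back */
	CPDFIO_KEPT_DIRTY,	/* pinned: write back, but keep tracking as dirty */
};

static enum cpdfio_ret clear_page_dirty_for_io_model(struct page_model *p)
{
	if (!p->dirty)
		return CPDFIO_CLEAN;
	if (p->pinned)
		return CPDFIO_KEPT_DIRTY;	/* COW fs keeps space reserved */
	p->dirty = false;
	return CPDFIO_CLEARED;
}
```

The point of the third return value is that a COW filesystem seeing
CPDFIO_KEPT_DIRTY knows the page can be redirtied behind its back and must
keep space allocated for future writeback of the same page.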

> Which brings me back to if and when we do writeback for pinned page.
> I don't think doing any I/O for short term pins like direct I/O
> make sense.  These pins are defined to be unpinned after I/O
> completes, so we might as well just wait for the unpin instead of doing
> anything complicated.

Agreed. For short term pins we could just wait, which should be quite
simple (although this behavior has some DoS potential if somebody runs
multiple processes that keep pinning some page with short term pins).
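A userspace sketch of that wait, with a bounded retry to show where the
DoS worry comes from -- the pin counter and helper name are hypothetical;
in the kernel, pins are inferred from the folio refcount (see
folio_maybe_dma_pinned()) and the wait would sleep rather than spin:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical model: a page with an explicit short-term pin count. */
struct pinned_page {
	atomic_int pins;
};

/*
 * Wait for all short-term pins to drop.  Bounded here only so the model
 * terminates; an unbounded wait is exactly what a re-pinning loop in a
 * malicious process could starve forever.
 */
static bool wait_for_unpin(struct pinned_page *p, int max_tries)
{
	for (int i = 0; i < max_tries; i++) {
		if (atomic_load(&p->pins) == 0)
			return true;	/* safe to start writeback */
		/* in the kernel: sleep until a pin is released */
	}
	return false;	/* still pinned after max_tries checks */
}
```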

> Long term pins are more troublesome, but I really wonder what the
> defined semantics for data integrity writeback like fsync on them
> is to start with as the content is very much undefined.  Should
> an fsync on a (partially) long term pinned file simply fail?  It's
> not like we can win in that scenario.

Well, we also have cases like sync(2), so one would have to be careful with
error propagation, and I'm afraid there are enough programs out there that
treat any error return from fsync(2) as catastrophic that I suspect this
could lead to some surprises. The case I'm most worried about is an
application that sets up RDMA to an mmapped file, runs the transfer, waits
for it to complete, doesn't bother to unpin the pages (it keeps them for
future transfers), and then calls fsync(2) to make the data stable on local
storage. That seems like quite a sensible use and so far it works just
fine. Not writing pages during fsync(2) would break such uses.
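The semantics argued for here can be modeled as: fsync writes back the
current (possibly still-changing) content of pinned pages instead of
failing, and leaves pinned pages tracked as dirty. Again a hypothetical
userspace sketch, not the kernel's actual writeback path:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical page state for the fsync model below. */
struct fpage {
	bool dirty;
	bool pinned;
};

/*
 * Model fsync over a file's pages: submit writeback for every dirty page,
 * pinned or not, and never fail just because a page is pinned.  Pinned
 * pages stay dirty since their content can change under us.
 */
static int fsync_model(struct fpage *pages, int n, int *written)
{
	*written = 0;
	for (int i = 0; i < n; i++) {
		if (!pages[i].dirty)
			continue;
		(*written)++;			/* submit writeback */
		if (!pages[i].pinned)
			pages[i].dirty = false;	/* pinned pages remain dirty */
	}
	return 0;
}
```

This is what the RDMA-to-mmaped-file case above relies on: the pinned
pages still reach stable storage, even though their content at the moment
of writeback is whatever the hardware has written so far.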

								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR


