On Fri, Sep 09, 2016 at 07:38:35AM +1000, Dave Chinner wrote:
> It's not an XFS specific problem: any filesystem that supports hole
> punch and its fallocate() friends needs this high level splice IO
> exclusion as well.

How is hole punch different from truncate()?  My reading of the
situation is that we don't need exclusion between that and insertion
into the pipe; only for the "gather uptodate page references" part.
If some page gets evicted afterwards... how is that different from
having that happen right after we'd finished with ->splice_read()?
Am I missing something subtle in there?

I'm still looking at the O_DIRECT paths in that stuff; we'll probably
need iov_iter_get_pages() for these suckers to allocate pages and
stick them into slots.  The tricky part is to get the semantics of
iov_iter_advance() right for them, but it does look feasible.

Again, what I propose is a new iov_iter flavour.  Backed by a
pipe_buffer array, used only for reads (i.e. copy to, not copy from).
Three states for an element: pagecache one, copied data, empty.

Semantics:

* copy_page_to_iter(): grab a reference to the page and stick it into
  the next element (making it a pagecache one), with offset and len
  coming directly from the arguments.

* copy_to_iter(): if the last element is a 'copied data' one with empty
  space remaining, copy to the end of it.  Otherwise allocate a new
  page and stick it into the next element (making it 'copied data'),
  then copy into it.  If still not all the data has been copied, do the
  same for the next element, etc.  Of course, if there are no elements
  left, we are done copying.

* zero_iter(): ditto, with s/copy/fill with zeroes/.

* iov_iter_get_pages(): allocate pages, stick them into the next slots
  (making those 'copied data').  That might need some changes, though -
  I'm still looking through the users.  The tricky part is deciding
  when to update the lengths.

* iov_iter_get_pages_alloc(): not sure, hadn't really looked yet.

* iov_iter_alignment(): probably just returns 0.
* iov_iter_advance(): probably like bvec variant.

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
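FWIW, the filling semantics above can be modelled in miniature.  The
sketch below is a user-space toy, not kernel code: all names
(pipe_elem, pipe_iter, model_*) and the fixed page size are made up
for illustration, and page references/allocation are reduced to state
flags - it only demonstrates how slots fill and when copy_to_iter()
coalesces into the last 'copied data' element versus taking a fresh
slot.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the proposed pipe-backed iov_iter; hypothetical names. */

#define MODEL_PAGE_SIZE 4096
#define NR_SLOTS 16

enum elem_state { ELEM_EMPTY, ELEM_PAGECACHE, ELEM_COPIED };

struct pipe_elem {
	enum elem_state state;
	size_t offset;		/* start of valid data within the page */
	size_t len;		/* bytes of valid data */
};

struct pipe_iter {
	struct pipe_elem slots[NR_SLOTS];
	int used;		/* number of occupied slots */
};

/* copy_page_to_iter(): stick a pagecache page into the next element;
 * offset and len come directly from the caller's arguments. */
static size_t model_copy_page_to_iter(struct pipe_iter *it,
				      size_t offset, size_t len)
{
	if (it->used == NR_SLOTS)
		return 0;			/* no elements left */
	struct pipe_elem *e = &it->slots[it->used++];
	e->state = ELEM_PAGECACHE;
	e->offset = offset;
	e->len = len;
	return len;
}

/* copy_to_iter(): append to the last 'copied data' element while it
 * has room, then take fresh slots, one page's worth per element. */
static size_t model_copy_to_iter(struct pipe_iter *it, size_t len)
{
	size_t done = 0;

	if (it->used) {
		struct pipe_elem *last = &it->slots[it->used - 1];
		if (last->state == ELEM_COPIED &&
		    last->offset + last->len < MODEL_PAGE_SIZE) {
			size_t room = MODEL_PAGE_SIZE -
				      (last->offset + last->len);
			size_t n = len < room ? len : room;
			last->len += n;
			done += n;
		}
	}
	while (done < len && it->used < NR_SLOTS) {
		struct pipe_elem *e = &it->slots[it->used++];
		size_t n = len - done;
		if (n > MODEL_PAGE_SIZE)
			n = MODEL_PAGE_SIZE;
		e->state = ELEM_COPIED;
		e->offset = 0;
		e->len = n;
		done += n;
	}
	return done;		/* short copy once the slots run out */
}
```

Two consecutive small copy_to_iter() calls land in one element; a
pagecache element in between forces the next copy onto a fresh page,
since we never write into a page we merely hold a reference to.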