On Thu, Sep 8, 2016 at 6:53 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>
> So if we race with a truncate, the pages in spd.pages[] that are
> beyond the new EOF may or may not have been removed from the page
> cache.

So I'm not sure why we'd need to care?

The thing is, if the splicer and the hole puncher aren't synchronized,
then there is by definition no "before/after" point. The splice data may
be "stale" in the sense that we look at the page after the hole punch
has happened and the page no longer has a ->mapping associated with it,
but it is equally valid to treat that as just a case of "the read
happened before the hole punch".

Put another way: it's not wrong to use the ostensibly "stale" data; it
just means that the splice acts as if the IO had happened before the
data became stale.

The whole point of "splice" is for the pipe to act as an in-kernel
buffer. So a splice does not *synchronize* the two end-points, quite the
reverse: it is meant to act as a "read + write" with the pipe itself
being the buffer in between (and because it's an in-kernel buffer rather
than a user-space buffer like a real read()+write() pair would use, we
then *can* do things like zero-copy, though realistically it really aims
for "one-copy" rather than "two-copy").

So if the splice buffer contains stale values, then that's exactly like
a user-space application having done a "read()" of old data, then the
file being truncated (or hole-punched), and then the application doing a
"write()" of that data. The target clearly sees *different* data than is
on the filesystem at that point, but since "complete synchronization"
has never been a guarantee of splice() in the first place, that's just
not a downside.

If an application expects "splice()" to give some kind of
data-consistency guarantee wrt people writing to the file (or with
truncate or hole punching), then the application would have to implement
that serialization itself.
Splice in itself does not do serialization, it does data copying.

               Linus

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs