Hi,
On 08/07/2020 17:54, Christoph Hellwig wrote:
> On Wed, Jul 08, 2020 at 02:54:37PM +0100, Matthew Wilcox wrote:
>> Direct I/O isn't deterministic though. If the file isn't shared, then
>> it works great, but as soon as you get mixed buffered and direct I/O,
>> everything is already terrible. Direct I/Os perform pagecache lookups
>> already, but instead of using the data that we found in the cache, we
>> (if it's dirty) write it back, wait for the write to complete, remove
>> the page from the pagecache and then perform another I/O to get the data
>> that we just wrote out! And then the app that's using buffered I/O has
>> to read it back in again.
> Mostly agreed. That being said, I suspect invalidating clean cache
> might still be a good idea. The original idea was mostly about how
> to deal with invalidation failures of any kind, but falling back for
> any kind of dirty cache also makes at least some sense.
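
For reference, the write-back-then-invalidate step described above boils
down to something like the sketch below. filemap_write_and_wait_range()
and invalidate_inode_pages2_range() are the real mm/filemap.c helpers;
the wrapper and its name are illustrative only, not code from any tree.

/*
 * Simplified sketch (not the exact kernel code) of the pattern above:
 * before a direct I/O is issued, dirty pagecache over the range is
 * written back and the pages are then dropped, so the I/O goes to the
 * device rather than the cache.
 */
#include <linux/fs.h>
#include <linux/pagemap.h>

static int dio_flush_and_invalidate(struct address_space *mapping,
				    loff_t pos, size_t count)
{
	int ret;

	/* Write back and wait on any dirty pagecache over the range. */
	ret = filemap_write_and_wait_range(mapping, pos, pos + count - 1);
	if (ret)
		return ret;

	/*
	 * Drop the now-clean pages; a buffered reader has to read them
	 * back in from disk afterwards.
	 */
	return invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT,
					     (pos + count - 1) >> PAGE_SHIFT);
}
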
>> I have had an objection raised off-list. In a scenario with a block
>> device shared between two systems and an application which does direct
>> I/O, everything is normally fine. If one of the systems uses tar to
>> back up the contents of the block device then the application on that
>> system will no longer see the writes from the other system because
>> there's nothing to invalidate the pagecache on the first system.
> Err, WTF? If someone accesses shared block devices with random
> applications, all bets are off anyway.
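
To make that scenario concrete, here is a purely illustrative userspace
sketch (the device path and sizes are placeholders, not from the report):

/*
 * One host runs a backup with buffered reads (as tar does), which
 * populates the local pagecache for the shared block device.  The
 * application then reads the same blocks with O_DIRECT.  Today that
 * read goes to the device; if direct I/O were instead satisfied from
 * cached pages, it could return data the other host has since
 * overwritten, because nothing on this host invalidates those pages.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

#define BLK 4096	/* O_DIRECT needs aligned buffer, offset and length */

int main(void)
{
	void *buf;
	int fd;

	if (posix_memalign(&buf, BLK, BLK))
		return 1;

	/* Backup pass: buffered read, fills the local pagecache. */
	fd = open("/dev/placeholder", O_RDONLY);
	if (fd < 0 || pread(fd, buf, BLK, 0) < 0)
		return 1;
	close(fd);

	/* Application pass: direct read of the same blocks. */
	fd = open("/dev/placeholder", O_RDONLY | O_DIRECT);
	if (fd < 0 || pread(fd, buf, BLK, 0) < 0)
		return 1;
	close(fd);

	free(buf);
	return 0;
}
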
On GFS2 the locking should take care of that. Not 100% sure about OCFS2
without looking, but I'm fairly sure it has a similar arrangement. So
this shouldn't be a problem unless they are using some other cluster fs
in this case that I'm not aware of. It would be good to confirm which
fs they are using,
Steve.