Re: [Lsf-pc] [LSF/MM TOPIC] I/O error handling and fsync()

On Wed 11-01-17 00:03:56, Ted Tso wrote:
> A couple of thoughts.
> 
> First of all, one of the reasons why this probably hasn't been
> addressed for so long is because programs who really care about issues
> like this tend to use Direct I/O, and don't use the page cache at all.
> And perhaps this is an option open to qemu as well?
> 
> Secondly, one of the reasons why we mark the page clean is because we
> didn't want a failing disk to leave memory trapped with no way of
> releasing the pages.  For example, if a user plugs in a USB
> thumbstick, writes to it, and then rudely yanks it out before all of
> the pages have been written back, it would be unfortunate if the dirty
> pages could only be released by rebooting the system.
> 
> So an approach that might work is that fsync() will keep the pages dirty
> --- but only while the file descriptor is open.  This could either be
> the default behavior, or something that has to be specifically
> requested via fcntl(2).  That way, as soon as the process exits (at
> which point it will be too late for it to do anything to save the
> contents of the file) we also release the memory.  And if the process
> gets OOM killed, again, the right thing happens.  But if the process
> wants to take emergency measures to write the file somewhere else, it
> knows that the pages won't get lost until the file gets closed.
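
(For concreteness, a minimal sketch of the Direct I/O route mentioned in the
first point above, assuming Linux/glibc; the file name and the hard-coded
4096-byte alignment are illustrative only, real code should query the
device's logical block size:)

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	size_t len = 4096;                      /* multiple of the block size */
	void *buf;

	if (posix_memalign(&buf, 4096, len))    /* O_DIRECT wants aligned memory */
		return 1;
	memset(buf, 'x', len);

	int fd = open("data.img", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* The write bypasses the page cache, so an I/O error is reported
	 * directly to the caller instead of being found later by writeback. */
	if (pwrite(fd, buf, len, 0) != (ssize_t)len)
		perror("pwrite");
	if (fsync(fd))                          /* still needed for the device cache */
		perror("fsync");
	close(fd);
	free(buf);
	return 0;
}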

Well, as Neil pointed out, the problem is that once the data hits the page
cache, we lose the association with a file descriptor. So, for example,
background writeback or sync(2) can find the dirty data, try to write it,
and get EIO, and then you have to do something about it because you don't
know whether an fsync(2) is coming or not.
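
(To make that concrete from the application side: with today's behaviour,
once fsync(2) has returned EIO the failed pages have already been marked
clean, so calling fsync(2) again will not resubmit them; the only reliable
recovery is from the application's own copy of the data. A minimal sketch,
where the backup path and the write_and_sync()/save_copy_elsewhere() helpers
are hypothetical:)

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical rescue path: rewrite the data from the in-memory copy to a
 * different filesystem, since the page-cache copy may already be gone. */
static int save_copy_elsewhere(const void *buf, size_t len)
{
	int fd = open("/mnt/backup/rescue.img",
		      O_WRONLY | O_CREAT | O_TRUNC, 0600);
	if (fd < 0)
		return -1;
	int rc = (write(fd, buf, len) == (ssize_t)len && fsync(fd) == 0) ? 0 : -1;
	close(fd);
	return rc;
}

int write_and_sync(int fd, const void *buf, size_t len)
{
	if (pwrite(fd, buf, len, 0) != (ssize_t)len)
		return -1;
	if (fsync(fd) == 0)
		return 0;
	if (errno == EIO) {
		/* Retrying fsync() is pointless: the dirty data is no longer
		 * tracked as dirty.  Fall back to the application's copy. */
		fprintf(stderr, "fsync: EIO, rescuing from in-memory copy\n");
		return save_copy_elsewhere(buf, len);
	}
	return -1;
}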

That being said, if we just kept the pages whose writeout failed dirty, the
system would eventually block all writers in balance_dirty_pages() and at
that point it is IMO a policy decision (probably per device or per fs)
whether you should just keep things blocked waiting for better times or
whether you want to start discarding dirty data on a failed write.
Now, discarding data that failed to write only when we are close to the
dirty limit (or after some timeout or whatever) has the disadvantage that
it is not easy to predict from the user's POV, so I'm not sure we want to
go down that path. But I can see two options making sense:

1) Just hold onto the data and wait indefinitely. Possibly provide a way for
   userspace to forcibly unmount a filesystem in such a state.

2) Do what we do now.
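
(Option 1's forced-unmount escape hatch could plausibly be wired up through
the existing umount2(2) interface; whether MNT_FORCE, or a new flag, would
be what finally discards the stuck dirty pages is exactly the policy
question above. A sketch of the userspace side only:)

#include <stdio.h>
#include <sys/mount.h>

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <mountpoint>\n", argv[0]);
		return 1;
	}
	/* MNT_FORCE exists today; the kernel-side semantics for a filesystem
	 * stuck on failed writeback are what the discussion above is about. */
	if (umount2(argv[1], MNT_FORCE)) {
		perror("umount2");
		return 1;
	}
	return 0;
}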
 
								Honza
-- 
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
