> > But the question remains, what is so special about shmem that
> > your use case requires fsnotify events to handle ENOSPC?
> >
> > Many systems are deployed on thin provisioned storage these days
> > and monitoring the state of the storage to alert administrator before
> > storage gets full (be it filesystem inodes or blocks or thinp space)
> > is crucial to many systems.
> >
> > Since the ENOSPC event that you are proposing is asynchronous
> > anyway, what is the problem with polling statfs() and meminfo?
>
> Amir,
>
> I spoke a bit with Khazhy (in CC) about the problems with polling the
> existing APIs, like statfs. He has been using a previous version of
> this code in production to monitor machines for a while now. Khazhy,
> feel free to pitch in with more details.
>
> Firstly, I don't want to treat shmem as a special case. The original
> patch implemented support only for tmpfs, because it was a fs specific
> solution, but I think this would be useful for any other (non-pseudo)
> file system in the kernel.
>
> The use case is similar to the use case I brought up for FAN_FS_ERROR.
> A sysadmin monitoring a fleet of machines wants to be notified when a
> service failed because of lack of space, without having to trust the
> failed application to properly report the error.
>
> Polling statfs is prone to missing the ENOSPC occurrence if the error is
> ephemeral from a monitoring tool point of view. Say the application is
> writing a large file, hits ENOSPC and, as a recovery mechanism, removes
> the partial file. If that happens, a daemon might miss the chance to
> observe the lack of space in statfs. Doing it through fsnotify, on the
> other hand, always catches the condition and allows a monitoring
> tool/sysadmin to take corrective action.
>
> > I guess one difference is that it is harder to predict page allocation failure
> > that causes ENOSPC in shmem, but IIUC, your patch does not report
> > an fsevent in that case only in inode/block accounting error.
> > Or maybe I did not understand it correctly?
>
> Correct. But we cannot predict the enospc, unless we know the
> application. I'm looking for a way for a sysadmin to not have to rely
> on the application caring about the file system size.
>

In the real world, ENOSPC can often be anticipated way ahead of time,
and sysadmins are practically required to take action when storage
space is low. Getting near a 90% full filesystem is not healthy on many
traditional disk filesystems: it causes suboptimal performance and, in
many cases (especially on cow filesystems), may lead to filesystem
corruption.

All that said, yes, *sometimes* ENOSPC cannot be anticipated, but EIO
can never be anticipated, so why are we talking about ENOSPC? Focusing
on ENOSPC seems too specific for the purpose of adding fsnotify
monitoring for ephemeral filesystem errors.

The problem with fsnotify events for ephemeral filesystem errors is
that there can be a *lot* of them, unlike filesystem corruption errors,
which would usually put the filesystem in an "emergency" state and stop
the events from flooding. For that reason I still think that a polling
API for fs ephemeral errors is a better idea.

W.r.t. ephemeral errors on writeback, we already have syncfs() as a
means to provide a publish/subscribe API for monitoring applications,
to check whether there was any error since the last check, but we do
not have an API that provides this information without the added cost
of performing syncfs().
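
For context, here is a simplified sketch (in the spirit of the existing
code, not a verbatim copy) of how that publish/subscribe channel
already works for writeback errors, built on the errseq_t helpers:

#include <linux/errseq.h>
#include <linux/fs.h>

/* publisher side: a writeback failure is recorded on the superblock */
static void publish_wb_error(struct super_block *sb, int err)
{
        errseq_set(&sb->s_wb_err, err);
}

/* subscriber side: roughly what syncfs() does for the calling fd */
static int consume_wb_error(struct file *file)
{
        struct super_block *sb = file_inode(file)->i_sb;

        /* report the error once per fd cursor, then advance it */
        return errseq_check_and_advance(&sb->s_wb_err, &file->f_sb_err);
}

The only way for userspace to consume that cursor today is the syncfs()
syscall itself, which also forces writeback.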
IMO, a proper solution would look something like this:

        /* per-sb errseq_t for reporting writeback errors via syncfs */
        errseq_t s_wb_err;
+       /* per-sb errseq_t for reporting vfs errors via fstatfs */
+       errseq_t s_vfs_err;

fstatfs() is just an example of an API that may be a good fit for fs
monitoring applications - we could export the error state in the
f_spare space - but we could also create a dedicated API for polling
for errors. The same API could be used to poll for wb errors without
issuing syncfs().
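
To make the idea a bit more concrete, a rough sketch of how the
proposal could reuse the existing errseq_t primitives - the names
s_vfs_err, vfs_err_set() and vfs_err_check() are illustrative only and
do not exist in the kernel today:

#include <linux/errseq.h>
#include <linux/fs.h>

/* filesystem side: record an ephemeral error (e.g. ENOSPC) on the sb */
static inline void vfs_err_set(struct super_block *sb, int err)
{
        errseq_set(&sb->s_vfs_err, err);
}

/*
 * monitoring side: a cheap polling check, e.g. behind fstatfs() or a
 * dedicated API; 'since' is the caller's cursor and could live in
 * struct file, like f_sb_err does for syncfs().
 */
static inline int vfs_err_check(struct super_block *sb, errseq_t *since)
{
        /* return each new error once per cursor, then advance it */
        return errseq_check_and_advance(&sb->s_vfs_err, since);
}

A monitoring daemon could then poll this about as cheaply as a statfs()
call, without forcing writeback the way syncfs() does.

Thanks,
Amir.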