Re: [RFC] Filesystem error notifications proposal

Jan Kara <jack@xxxxxxx> writes:
> On Wed 20-01-21 17:13:15, Gabriel Krisman Bertazi wrote:
>>   fanotify is based on fsnotify, and the infrastructure for the
>>   visibility semantics is mostly implemented by fsnotify itself.  It
>>   would be possible to make error notifications a new mechanism on top
>>   of fsnotify, without modifying fanotify, but that would require
>>   exposing a very similar interface to userspace, new system calls, and
>>   that doesn't seem justifiable since we can extend fanotify syscalls
>>   without ABI breakage.
>
> I agree fanotify could be used for propagating the error information from
> kernel to userspace. But before we go to design how exactly the events
> should look like, I'd like to hear a usecase (or usecases) for this
> feature. Because the intended use very much determines which information we
> need to propagate to userspace and what flexibility we need there.

Hi Jan,

Thanks for the review.

My user is a cloud provider who wants to monitor several machines,
potentially with several volumes each, for metadata and data corruption
that might require attention from a sysadmin.  They would run a watcher
for each filesystem on each machine and consolidate the information.
Right now, they have an in-house solution for monitoring ext4, which
basically exports to userspace any condition that would trigger one of
the ext4_error_* functions.

We need to monitor a bit more than just writeback errors, but that is a
big part of the functionality we want.  We would be interested in
finding out about inode corruption, checksum failures, and read errors
as well.

Khazhy, in Cc, is more familiar with the specific requirements, and
might be able to provide more information on their usage of the existing
solution.

>> 3 Proposal
>> ==========
>> 
>>   The error notification framework is based on fanotify instead of
>>   watch_queue.  It is exposed through a new set of marks FAN_ERROR_*,
>>   exposed through the fanotify_mark(2) API.
>> 
>>   fanotify (fsnotify-based) has the infrastructure in-place to link
>>   notifications happening at filesystem objects to mountpoints and to
>>   filesystems, and already exposes an interface with well defined
>>   semantics of how those are exposed to watchers in different
>>   mountpoints or different subtrees.
>> 
>>   A new message format is exposed, if the user passed
>>   FAN_REPORT_DETAILED_ERROR fanotify_init(2) flag.  FAN_ERROR messages
>>   don't have FID/DFID records.
>
> So this depends on the use case. I can imagine that we could introduce new
> event type FAN_FS_ERROR (like we currently have FAN_MODIFY, FAN_ACCESS,
> FAN_CREATE, ...). This event type would be generated when error happens on
> fs object - we can generate it associated with struct file, struct dentry,
> or only superblock, depending on what information is available at the place
> where fs spots the error. Similarly to how we generate other fsnotify
> events. But here's the usecase question - is there really need for fine
> granularity of error reporting (i.e., someone watching errors only on a
> particular file, or directory)?

My use-case only requires watching an entire filesystem, not specific
files.  I can imagine a use-case for watching a specific mount point,
though, for instance an application running inside a container that is
notified only of errors in files/directories within that container.
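
To make that concrete, a watcher for my use-case could look roughly
like the sketch below.  FAN_FS_ERROR and FAN_REPORT_DETAILED_ERROR are
only the names floated in this thread, so they are defined locally with
placeholder values; the fanotify_init(2)/fanotify_mark(2) calls and the
FAN_MARK_FILESYSTEM/FAN_MARK_MOUNT scoping are the existing API.

  /* Illustrative only: the FAN_FS_ERROR / FAN_REPORT_DETAILED_ERROR
   * names and values below are placeholders for what is being proposed
   * in this thread, not part of any released kernel headers. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/fanotify.h>

  #ifndef FAN_REPORT_DETAILED_ERROR
  #define FAN_REPORT_DETAILED_ERROR 0x00001000  /* placeholder value */
  #endif
  #ifndef FAN_FS_ERROR
  #define FAN_FS_ERROR 0x00008000               /* placeholder value */
  #endif

  int main(int argc, char *argv[])
  {
          int fd;

          if (argc != 2) {
                  fprintf(stderr, "usage: %s <path on filesystem>\n", argv[0]);
                  return EXIT_FAILURE;
          }

          fd = fanotify_init(FAN_CLASS_NOTIF | FAN_REPORT_DETAILED_ERROR,
                             O_RDONLY);
          if (fd < 0) {
                  perror("fanotify_init");
                  return EXIT_FAILURE;
          }

          /*
           * FAN_MARK_FILESYSTEM watches the whole filesystem backing the
           * given path; the container case above would use FAN_MARK_MOUNT
           * instead, so only errors reachable through that mount point
           * are reported.
           */
          if (fanotify_mark(fd, FAN_MARK_ADD | FAN_MARK_FILESYSTEM,
                            FAN_FS_ERROR, AT_FDCWD, argv[1]) < 0) {
                  perror("fanotify_mark");
                  return EXIT_FAILURE;
          }

          /* read(fd, ...) would then return the detailed error records. */
          return EXIT_SUCCESS;
  }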

>
> Then we can have FAN_REPORT_DETAILED_ERROR which would add to event more
> information about the error like you write below - although identification
> of the inode / fsid is IMO redundant - you can get all that information
> (and more) from FID / DFID event info entries if you want it.

When FID provides a handle, I'd need to get information about that
handle without opening the file, as it might be corrupted.  Would we
need an interface to retrieve the inode number from a handle without
opening the file, something like a stat_by_handle?
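
For reference, the closest thing today is to re-open the object from
the handle and stat the result.  The sketch below (just an
illustration, assuming the watcher already holds an fd on the mount and
has CAP_DAC_READ_SEARCH) uses open_by_handle_at() with O_PATH plus
fstat(), which avoids touching file data but is still an extra open of
a possibly bad inode; that gap is what a stat_by_handle-style call
would close.

  #define _GNU_SOURCE     /* for open_by_handle_at(), struct file_handle */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /*
   * Illustrative helper: map a file handle (e.g. from a FID record) to
   * inode metadata without opening the file for reading.  mount_fd is
   * any fd on the filesystem the handle belongs to.
   */
  static int stat_from_handle(int mount_fd, struct file_handle *fh,
                              struct stat *st)
  {
          int fd, ret;

          fd = open_by_handle_at(mount_fd, fh, O_PATH);
          if (fd < 0) {
                  perror("open_by_handle_at");
                  return -1;
          }

          ret = fstat(fd, st);    /* metadata only, no data access */
          if (ret < 0)
                  perror("fstat");

          close(fd);
          return ret;
  }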

Thanks,

-- 
Gabriel Krisman Bertazi


