Davide Libenzi <davidel@xxxxxxxxxxxxxxx> writes:

> On Mon, 1 Jun 2009, Eric W. Biederman wrote:
>
>> From: Eric W. Biederman <ebiederm@xxxxxxxxxxxxxxxxxxxxxxxxxx>
>>
>> Signed-off-by: Eric W. Biederman <ebiederm@xxxxxxxxxxxxxxxxxx>
>> ---
>>  fs/eventpoll.c |   39 ++++++++++++++++++++++++++++++++-------
>>  1 files changed, 32 insertions(+), 7 deletions(-)
>
> This patchset gives me the willies for the amount of changes and possible
> impact on many subsystems.

It both is and is not that bad.  It is the cost of adding a lock.  For
the VFS, except for nfsd, I have touched everything that needs to be
touched.  Other subsystems that open, read/write, and close files should
be able to use the existing VFS helpers, so they don't need to know about
the new locking explicitly.

Actually taking advantage of this infrastructure in a subsystem is
comparatively easy.  It took me about an hour to get uio using it.  That
part is not deep by any means and is opt-in.

> Without having looked at the details, are you aware that epoll does not
> act like poll/select, and fds are not automatically removed (as in,
> dequeued from the poll wait queue) in any foreseeable amount of time after
> a POLLERR is received?

Yes, I am aware of how epoll acts differently.

> As far as the userspace API goes, they have the right to remain there.

I absolutely agree.

Currently I have the code acting like close() with respect to epoll, and
just having the file descriptor vanish at the end of the revoke.  While
the revoke is in progress you get an EIO.  The file descriptor is not
freed by a revoke operation, so you can happily hang onto it as long as
you want.

I thought of doing something more uniform toward user space.  But when I
observed that the existing epoll code punts on the case of a file
descriptor being closed, and that the locking needed to go from a file to
the other epoll data structures is pretty horrid, I said forget it and
used the existing close behaviour.

Eric
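
To make the userspace-visible behaviour concrete, below is a rough sketch
of what a consumer loop would look like.  It assumes some already-open
descriptor whose reads start failing with EIO once the underlying file has
been revoked; the revoke mechanism itself is the patchset under
discussion, not an existing syscall, and the helper name watch_fd() and
its epfd/fd parameters are purely illustrative.  The epoll calls are the
standard ones; the point is that epoll keeps reporting the fd until
userspace removes or closes it explicitly, which is the poll/select
difference Davide mentions.

#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>

void watch_fd(int epfd, int fd)
{
	struct epoll_event ev;
	char buf[256];

	ev.events = EPOLLIN;	/* EPOLLERR/EPOLLHUP are always reported */
	ev.data.fd = fd;
	if (epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev) < 0) {
		perror("epoll_ctl(ADD)");
		return;
	}

	for (;;) {
		struct epoll_event out;
		int n = epoll_wait(epfd, &out, 1, -1);

		if (n <= 0)
			break;

		if (out.events & (EPOLLERR | EPOLLHUP)) {
			/*
			 * Unlike poll/select, epoll keeps this fd queued
			 * and will report it again; the consumer has to
			 * drop it explicitly (or close the descriptor).
			 */
			epoll_ctl(epfd, EPOLL_CTL_DEL, out.data.fd, NULL);
			break;
		}

		if (read(out.data.fd, buf, sizeof(buf)) < 0 && errno == EIO) {
			/*
			 * Revoke in progress: reads fail with EIO, but the
			 * descriptor itself stays valid until it is closed.
			 */
			fprintf(stderr, "fd revoked\n");
			epoll_ctl(epfd, EPOLL_CTL_DEL, out.data.fd, NULL);
			break;
		}
	}
}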