On Mon 19-06-17 04:46:00, Tahsin Erdogan wrote:
> >> I tried that approach by adding a "int get_inode_usage(struct inode
> >> *inode, qsize_t *usage)" callback to dquot_operations. Unfortunately,
> >> the ext4 code that calculates the number of internal inodes
> >> (ext4_xattr_inode_count()) is subject to failures, so the callback has
> >> to be able to report errors. And that itself is problematic, because
> >> we can't afford to have errors in dquot_free_inode(). If you have
> >> thoughts about how to address this, please let me know.
>
> > Well, you can just make dquot_free_inode() return an error. Now most
> > callers won't be able to do much with an error from dquot_free_inode(),
> > but that's the case also for other things during inode deletion - just
> > handle it like other fatal failures during inode freeing.
>
> I just checked dquot_free_inode() to see whether it calls anything
> that could fail. It calls mark_all_dquot_dirty() and ignores the
> return code from it. I would like to do the same for
> get_inode_usage(), as the only user of get_inode_usage() (ext4)
> should not fail at inode free time.
>
> Basically, I want to avoid changing the return type from void to int,
> because it would create a new responsibility for filesystem
> implementations that do not know how to deal with it.

Heh, this "pushing of responsibility" looks like a silly game. If an
error can happen in a function, it is better to report it as far up as
easily possible (unless we can cleanly handle it, which we cannot here).
I'm guilty of making dquot_free_inode() ignore errors from
mark_all_dquot_dirty(), and in retrospect it would have been better if
those had been propagated to the caller as well. Eventually we can fix
that if we decide we care enough.

I'm completely fine with just returning an error from dquot_free_inode()
and ignoring it in all the callers except for ext4. Then filesystems
that care enough can try to handle the error. That way we at least
don't increase the design debt from the past.

								Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
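
[For concreteness, the change Jan says he is fine with might look roughly
like the sketch below. This is illustrative only, not the actual patch:
only the get_inode_usage() callback signature and the void-to-int change
to dquot_free_inode() come from the thread itself. The hypothetical
helper __dquot_free_inode_count(), the fall-back-to-one-inode behavior on
error, and the two example callers are assumptions made purely for the
sketch.]

/*
 * Sketch only - not the actual patch.  Locking, dquot dirtying and the
 * real accounting are omitted; __dquot_free_inode_count() is a
 * hypothetical stand-in for the existing accounting code.
 */
#include <linux/fs.h>
#include <linux/quota.h>
#include <linux/quotaops.h>

static void __dquot_free_inode_count(struct inode *inode, qsize_t usage);

/*
 * The callback quoted at the top of the thread, added to
 * struct dquot_operations in include/linux/quota.h:
 *
 *	int (*get_inode_usage)(struct inode *inode, qsize_t *usage);
 */

/*
 * dquot_free_inode() changes from void to int (the prototype in
 * include/linux/quotaops.h would change accordingly) so that a failure
 * of the filesystem's get_inode_usage() callback can be reported.
 */
int dquot_free_inode(struct inode *inode)
{
	qsize_t usage = 1;	/* default: this inode counts as one */
	int err = 0;

	if (inode->i_sb->dq_op->get_inode_usage)
		err = inode->i_sb->dq_op->get_inode_usage(inode, &usage);

	/*
	 * Even on error, still uncharge one inode so the accounting is
	 * not skipped entirely (an assumption of this sketch, not
	 * something settled in the thread).
	 */
	__dquot_free_inode_count(inode, err ? 1 : usage);
	return err;
}

/* Most callers keep ignoring the result, exactly as before: */
static void generic_free_example(struct inode *inode)
{
	dquot_free_inode(inode);	/* nothing sensible to do on error */
}

/*
 * ext4, whose ext4_xattr_inode_count() can actually fail, treats the
 * error like other fatal failures during inode freeing.  ext4_std_error()
 * is the existing macro from fs/ext4/ext4.h:
 */
static void ext4_free_example(struct inode *inode)
{
	int err = dquot_free_inode(inode);

	if (err)
		ext4_std_error(inode->i_sb, err);
}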