On Tue, 2019-07-23 at 11:11 -0700, Matthew Wilcox wrote:
> On Tue, Jul 23, 2019 at 02:05:11PM -0400, Jeff Layton wrote:
> > On Tue, 2019-07-23 at 10:55 -0700, Matthew Wilcox wrote:
> > > > HCH points out that xfs uses kvfree as a generic "free this no matter
> > > > what it is" sort of wrapper and expects the callers to work out whether
> > > > they might be freeing a vmalloc'ed address. If that sort of usage turns
> > > > out to be prevalent, then we may need another approach to clean this up.
> > >
> > > I think it's a bit of a landmine, to be honest. How about we have kvfree()
> > > call vfree_atomic() instead?
> >
> > Not a bad idea, though it means more overhead for the vfree case.
> >
> > Since we're spitballing here...could we have kvfree figure out whether
> > it's running in a context where it would need to queue it instead and
> > only do it in that case?
> >
> > We currently have to figure that out for the might_sleep_if anyway. We
> > could just have it DTRT instead of printk'ing and dumping the stack in
> > that case.
>
> I don't think we have a generic way to determine if we're currently
> holding a spinlock. ie this can fail:
>
> 	spin_lock(&my_lock);
> 	kvfree(p);
> 	spin_unlock(&my_lock);
>
> If we're preemptible, we can check the preempt count, but !CONFIG_PREEMPT
> doesn't record the number of spinlocks currently taken.

Ahh right...that makes sense.

Al also suggested on IRC that we could add a kvfree_atomic if that were
useful. That might be good for new callers, but we'd probably need a
patch like this one to suss out which of the existing kvfree callers
would need to switch to using it.

I think you're quite right that this is a landmine. That said, this
seems like something we ought to try to clean up.

-- 
Jeff Layton <jlayton@xxxxxxxxxx>
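
[Editor's note: a minimal sketch of what the kvfree_atomic() helper floated
above might look like. It is hypothetical (no such helper exists in the
kernel at the time of this thread); it simply mirrors kvfree()'s dispatch on
is_vmalloc_addr(), but uses the existing vfree_atomic(), which defers the
vmalloc free to a workqueue and is therefore safe to call from atomic
context:]

	#include <linux/mm.h>		/* is_vmalloc_addr() */
	#include <linux/slab.h>		/* kfree() */
	#include <linux/vmalloc.h>	/* vfree_atomic() */

	/*
	 * Hypothetical helper along the lines suggested above; not an
	 * existing kernel API. Unlike kvfree(), this is safe to call
	 * with spinlocks held, at the cost of deferring vmalloc'ed
	 * frees to a workqueue.
	 */
	void kvfree_atomic(const void *addr)
	{
		if (is_vmalloc_addr(addr))
			vfree_atomic(addr);
		else
			kfree(addr);
	}

[Callers that can sleep would presumably stay with plain kvfree() to avoid
the deferred-free overhead mentioned above; only the callers this patch
flags would need to switch.]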