On Tue, 2019-07-23 at 10:55 -0700, Matthew Wilcox wrote:
> On Tue, Jul 23, 2019 at 01:52:36PM -0400, Jeff Layton wrote:
> > On Tue, 2019-07-23 at 09:12 -0400, Jeff Layton wrote:
> > > A lot of callers of kvfree only go down the vfree path under very rare
> > > circumstances, and so may never end up hitting the might_sleep_if in it.
> > > Ensure that when kvfree is called, that it is operating in a context
> > > where it is allowed to sleep.
> > >
> > > Cc: Alexander Viro <viro@xxxxxxxxxxxxxxxxxx>
> > > Cc: Luis Henriques <lhenriques@xxxxxxxx>
> > > Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
> > > ---
> > >  mm/util.c | 2 ++
> > >  1 file changed, 2 insertions(+)
> > >
> >
> > FWIW, I started looking at this after Luis sent me some ceph patches
> > that fixed a few of these problems. I have not done extensive testing
> > with this patch, so maybe consider this an RFC for now.
> >
> > HCH points out that xfs uses kvfree as a generic "free this no matter
> > what it is" sort of wrapper and expects the callers to work out whether
> > they might be freeing a vmalloc'ed address. If that sort of usage turns
> > out to be prevalent, then we may need another approach to clean this up.
>
> I think it's a bit of a landmine, to be honest. How about we have kvfree()
> call vfree_atomic() instead?

Not a bad idea, though it means more overhead for the vfree case.

Since we're spitballing here: could we have kvfree() figure out whether
it's running in a context where the free would need to be queued, and
only defer it in that case? We already have to make that determination
for the might_sleep_if anyway. We could just have it DTRT instead of
printk'ing and dumping the stack in that case.

-- 
Jeff Layton <jlayton@xxxxxxxxxx>
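For illustration only, the dispatch being floated could look roughly like the sketch below. This is a userspace stand-in, not the actual mm/util.c code: the names kvfree_choose_path, PATH_*, and the two boolean parameters are made up here, standing in for what is_vmalloc_addr() and the might_sleep_if() condition (in_interrupt(), IRQs disabled, etc.) would report in the kernel.

```c
#include <stdbool.h>

/*
 * Hypothetical sketch of the idea above: kvfree() itself decides
 * whether a buffer can be freed inline or must be deferred (the way
 * vfree_atomic() defers by queueing the free to a workqueue).
 */
enum free_path {
	PATH_KFREE,          /* kmalloc'ed memory: kfree() is safe anywhere  */
	PATH_VFREE,          /* vmalloc'ed, sleepable context: free inline   */
	PATH_VFREE_DEFERRED, /* vmalloc'ed, atomic context: queue the free   */
};

/*
 * is_vmalloc: what is_vmalloc_addr(addr) would report.
 * atomic_ctx: the same condition might_sleep_if() already has to
 * evaluate before it warns.
 */
static enum free_path kvfree_choose_path(bool is_vmalloc, bool atomic_ctx)
{
	if (!is_vmalloc)
		return PATH_KFREE;
	return atomic_ctx ? PATH_VFREE_DEFERRED : PATH_VFREE;
}
```

The point being that the atomic-context check is already paid for by the might_sleep_if; routing to a deferred free would reuse that check instead of just warning, and the vfree fast path would keep its current behavior when sleeping is allowed.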