On Thu, 23 Jul 2015, Jörn Engel wrote:

> > This is wrong, you'd want to do any cond_resched() before the page
> > allocation to avoid racing with an update to h->nr_huge_pages or
> > h->surplus_huge_pages while hugetlb_lock was dropped that would result
> > in the page having been uselessly allocated.
>
> There are three options. Either
> 	/* some allocation */
> 	cond_resched();
> or
> 	cond_resched();
> 	/* some allocation */
> or
> 	if (cond_resched()) {
> 		spin_lock(&hugetlb_lock);
> 		continue;
> 	}
> 	/* some allocation */
>
> I think you want the second option instead of the first. That way we
> have a little less memory allocated for the time we are scheduled out.
> Sure, we can do that. It probably doesn't make a big difference either
> way, but why not.
>

The loop is dropping the lock simply to do the allocation, and it needs to
compare against the user-written number of hugepages to decide whether to
allocate. What we don't want is to allocate, reschedule, and then check
whether we really needed to allocate. That's what your patch does, because
it races with persistent_huge_pages(), and it's probably the worst place
to do it. Rather, what you want to do is check whether you need to
allocate, reschedule if needed (and if so, recheck), and only then
allocate.

> If you are asking for the third option, I would rather avoid that. It
> makes the code more complex and doesn't change the fact that we have a
> race and had better be able to handle the race. The code size growth
> will likely cost us more performance than we would ever gain.
> nr_huge_pages tends to get updated once per system boot.
>

Your third option is nonsensical: it doesn't save the state of whether you
took the lock, so you can't reliably unlock it, and you cannot hold a
spinlock while allocating in this context.