On Mon, 31 Mar 2014 19:43:32 +0900 "Mizuma, Masayoshi" <m.mizuma@xxxxxxxxxxxxxx> wrote:

> Hi,
>
> When I decrease the value of nr_hugepages in procfs by a large amount, a
> softlockup occurs, because there is no opportunity for a context switch
> during the freeing process.
>
> On the other hand, when I allocate a large number of hugepages, there
> are opportunities for context switches, so no softlockup occurs during
> allocation.  We should therefore add a scheduling point to the freeing
> path, as the allocation path already has one, to avoid the softlockup.
>
> ...
>
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1535,6 +1535,7 @@ static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
> 	while (min_count < persistent_huge_pages(h)) {
> 		if (!free_pool_huge_page(h, nodes_allowed, 0))
> 			break;
> +		cond_resched_lock(&hugetlb_lock);
> 	}
> 	while (count < persistent_huge_pages(h)) {
> 		if (!adjust_pool_surplus(h, nodes_allowed, 1))

Are you sure we don't need a cond_resched_lock() in this second loop as
well?

Let's bear in mind the objective here: it is to avoid long scheduling
stalls, not merely to prevent softlockup-detector warnings.  A piece of
code which doesn't trip the lockup detector can still be a problem.
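If the second loop does need it, the change would presumably mirror the
first hunk.  An untested sketch (note that cond_resched_lock() drops and
reacquires hugetlb_lock, so any state cached across iterations would
need to be revalidated after it returns):

--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ ... @@ static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
 	while (count < persistent_huge_pages(h)) {
 		if (!adjust_pool_surplus(h, nodes_allowed, 1))
 			break;
+		cond_resched_lock(&hugetlb_lock);
 	}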