Hi Dave
On 7/26/16 11:58 PM, Dave Hansen wrote:
On 07/26/2016 08:44 AM, Jia He wrote:
This patch fixes such a soft lockup. I thought it is safe to call
cond_resched() because alloc_fresh_gigantic_page and alloc_fresh_huge_page
are outside the spin_lock/unlock section.
Yikes. So the call site for both the things you patch is this:
        while (count > persistent_huge_pages(h)) {
                ...
                spin_unlock(&hugetlb_lock);
                if (hstate_is_gigantic(h))
                        ret = alloc_fresh_gigantic_page(h, nodes_allowed);
                else
                        ret = alloc_fresh_huge_page(h, nodes_allowed);
                spin_lock(&hugetlb_lock);
and you choose to patch both of the alloc_*() functions. Why not just
fix it at the common call site? Seems like that
spin_lock(&hugetlb_lock) could be a cond_resched_lock() which would fix
both cases.
I agree with moving the cond_resched() to a common call site in set_max_huge_pages().
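To make sure we are talking about the same change, I think it would look
roughly like this (just an untested sketch of my reading; the "..." stands
for the existing code I left out):

        while (count > persistent_huge_pages(h)) {
                ...
                spin_unlock(&hugetlb_lock);

                /* yield the CPU once per iteration so a long loop cannot soft lockup */
                cond_resched();

                if (hstate_is_gigantic(h))
                        ret = alloc_fresh_gigantic_page(h, nodes_allowed);
                else
                        ret = alloc_fresh_huge_page(h, nodes_allowed);
                spin_lock(&hugetlb_lock);
        }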
But do you mean the spin_lock() at the bottom of this while loop can simply
be replaced by a cond_resched_lock()?
IIUC, cond_resched_lock() is roughly spin_unlock() + cond_resched() + spin_lock();
I have sketched my reading of it below.
Could you please explain it in more detail? Thanks.
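For reference, this is roughly how I read __cond_resched_lock() in
kernel/sched/core.c (paraphrased from memory, so the details may be off):

        int __cond_resched_lock(spinlock_t *lock)
        {
                int resched = should_resched(PREEMPT_LOCK_OFFSET);
                int ret = 0;

                lockdep_assert_held(lock);      /* the lock must already be held here */

                if (spin_needbreak(lock) || resched) {
                        spin_unlock(lock);
                        if (resched)
                                preempt_schedule_common();
                        else
                                cpu_relax();
                        ret = 1;
                        spin_lock(lock);
                }
                return ret;
        }

If I read that right, it expects hugetlb_lock to already be held at the call,
which is part of why I am asking how it would replace the spin_lock() at the
bottom of the loop.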
B.R.
Justin
Also, putting that cond_resched() inside the for_each_node*() loop is an
odd choice. It seems to indicate that the loops can take a long time,
which really isn't the case. The _loop_ isn't long, right?