On Tue, Apr 07, 2020 at 05:40:05PM +0200, Michal Hocko wrote:
> On Tue 07-04-20 08:25:44, Roman Gushchin wrote:
> > On Tue, Apr 07, 2020 at 09:03:31AM +0200, Michal Hocko wrote:
> > > On Mon 06-04-20 18:04:31, Roman Gushchin wrote:
> > > [...]
> > > My ack still applies, but I have noticed two minor things now.
> >
> > Hello, Michal!
> >
> > > [...]
> > > > @@ -1281,8 +1308,14 @@ static void update_and_free_page(struct hstate *h, struct page *page)
> > > >  	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > > >  	set_page_refcounted(page);
> > > >  	if (hstate_is_gigantic(h)) {
> > > > +		/*
> > > > +		 * Temporarily drop the hugetlb_lock, because
> > > > +		 * we might block in free_gigantic_page().
> > > > +		 */
> > > > +		spin_unlock(&hugetlb_lock);
> > > >  		destroy_compound_gigantic_page(page, huge_page_order(h));
> > > >  		free_gigantic_page(page, huge_page_order(h));
> > > > +		spin_lock(&hugetlb_lock);
> > >
> > > This is OK with the current code because existing paths do not have
> > > to revalidate the state AFAICS, but it is a bit subtle. I have
> > > checked the cma_free path and it can only sleep on the cma->lock,
> > > unless I am missing something. That lock is only used for cma bitmap
> > > manipulation, so the mutex sounds like overkill there and could be
> > > replaced by a spinlock.
> > >
> > > Sounds like follow-up patch material to me.
> >
> > I had the same idea and even posted a patch:
> > https://lore.kernel.org/linux-mm/20200403174559.GC220160@xxxxxxxxxx/T/#m87be98bdacda02cea3dd6759b48a28bd23f29ff0
> >
> > However, Joonsoo pointed out that in some cases the bitmap operation
> > might be too long for a spinlock.
>
> I was not aware of this email thread. I will have a look. Thanks!
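For reference, the core of that patch boils down to converting the
bitmap protection in mm/cma.c from a mutex to a spinlock. A simplified
sketch of the direction (not the exact posted diff; the struct change
and the cma_alloc() side are only hinted at in the comments):

/*
 * In struct cma (mm/cma.h): replace "struct mutex lock" with
 * "spinlock_t lock", initialized with spin_lock_init() in
 * cma_activate_area() instead of mutex_init().
 */

static void cma_clear_bitmap(struct cma *cma, unsigned long pfn,
			     unsigned int count)
{
	unsigned long bitmap_no, bitmap_count;

	bitmap_no = (pfn - cma->base_pfn) >> cma->order_per_bit;
	bitmap_count = cma_bitmap_pages_to_bits(cma, count);

	spin_lock(&cma->lock);		/* was: mutex_lock(&cma->lock) */
	bitmap_clear(cma->bitmap, bitmap_no, bitmap_count);
	spin_unlock(&cma->lock);	/* was: mutex_unlock(&cma->lock) */
}

Joonsoo's concern is about how long such a bitmap operation can take
under a spinlock: freeing a 1GB page with order_per_bit == 0 means
clearing on the order of 256K bits in one go, and the bitmap scan in
cma_alloc() holds the same lock.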
> > Alternatively, we can implement an asynchronous delayed release on
> > the cma side; I just don't know whether it's worth the added
> > code/complexity.
> >
> > > [...]
> > > > +	for_each_node_state(nid, N_ONLINE) {
> > > > +		int res;
> > > > +
> > > > +		size = min(per_node, hugetlb_cma_size - reserved);
> > > > +		size = round_up(size, PAGE_SIZE << order);
> > > > +
> > > > +		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
> > > > +						 0, false, "hugetlb",
> > > > +						 &hugetlb_cma[nid], nid);
> > > > +		if (res) {
> > > > +			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> > > > +				res, nid);
> > > > +			break;
> > >
> > > Do we really have to break out after a single node failure? There
> > > might be other nodes that can satisfy the allocation. You are not
> > > cleaning up the previous allocations, so there is a partial state
> > > anyway, and then it would make more sense to me to simply
> > > s@break@continue@ here.
> >
> > But then we should iterate over all nodes in alloc_gigantic_page()?
>
> OK, I've managed to miss the early break on hugetlb_cma[node] == NULL
> there as well. I do not think this makes much sense. Just consider a
> setup with one node much smaller than the others (not unseen in LPAR
> configurations): then you are potentially using CMA areas on some
> nodes without a good reason.
>
> > Currently, if hugetlb_cma[0] is NULL, it will immediately switch
> > back to the fallback approach.
> >
> > Actually, I don't know how realistic use cases with complex node
> > configurations are, where hugetlb_cma areas can be allocated only on
> > some of the nodes. I'd leave it up to the moment when we have a
> > real-world example. Then we will probably want something more
> > sophisticated anyway...
>
> I do not follow. Isn't the s@break@continue@ in this and the
> alloc_gigantic_page() path enough to make it work?

Well, of course it will. But for a highly asymmetrical configuration
there is probably not much sense in trying to allocate CMA areas of a
similar size on each node and relying on allocation failures on some
of them.

But, again, if you strictly prefer s/break/continue, I can send a v5.
Just let me know.

Thanks!
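P.S. Just so we are looking at the same thing: with s@break@continue@
on the reservation side, the lookup side would need the matching
change too. A sketch of what I have in mind for alloc_gigantic_page()
(against this series, untested, and ignoring the question of whether
to try the preferred nid first):

static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
		int nid, nodemask_t *nodemask)
{
	unsigned long nr_pages = 1UL << huge_page_order(h);

#ifdef CONFIG_CMA
	{
		struct page *page;
		int node;

		for_each_node_mask(node, *nodemask) {
			/*
			 * The reservation may have failed on this node:
			 * skip it instead of bailing out entirely.
			 */
			if (!hugetlb_cma[node])
				continue;

			page = cma_alloc(hugetlb_cma[node], nr_pages,
					 huge_page_order(h), true);
			if (page)
				return page;
		}
	}
#endif

	/* fall back to the generic contiguous page allocator */
	return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
}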