Re: [RFC PATCH 2/3] mm: hugetlb: make the hugetlb migration strategy consistent

On 2024/2/23 06:15, Oscar Salvador wrote:
On Wed, Feb 21, 2024 at 05:27:54PM +0800, Baolin Wang wrote:
Based on the analysis of the various scenarios above, determine whether fallback is
permitted according to the migration reason in alloc_hugetlb_folio_nodemask().

Hi Baolin,

The high-level reasoning makes sense to me; taking a step back and thinking through all the cases and possible outcomes, the approach looks sound.

I plan to look closer, but something caught my eye:

Thanks for reviewing.

  	}
  	spin_unlock_irq(&hugetlb_lock);
+	if (gfp_mask & __GFP_THISNODE)
+		goto alloc_new;
+
+	/*
+	 * Note: memory offline, memory failure and the migration syscalls can
+	 * break the per-node hugetlb pool. Other cases cannot allocate a new
+	 * hugetlb folio on other nodes.
+	 */
+	switch (reason) {
+	case MR_MEMORY_HOTPLUG:
+	case MR_MEMORY_FAILURE:
+	case MR_SYSCALL:
+	case MR_MEMPOLICY_MBIND:
+		allowed_fallback = true;
+		break;
+	default:
+		break;
+	}
+
+	if (!allowed_fallback)
+		gfp_mask |= __GFP_THISNODE;

I think it would be better if, instead of fiddling with the gfp here,
htlb_alloc_mask() took a second argument with the MR_reason,
did the switch there and enabled __GFP_THISNODE.
Then alloc_hugetlb_folio_nodemask() would already get the right mask.

I think that might be clearer, as it gets encapsulated in the
function that directly gives us the gfp.
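
Something along these lines, perhaps (an untested sketch of the idea, using the migration-specific htlb_modify_alloc_mask() helper; the extra 'reason' parameter and the body are purely illustrative, not what the helper looks like today):

static inline gfp_t htlb_modify_alloc_mask(struct hstate *h, gfp_t gfp_mask,
					   int reason)
{
	gfp_t modified_mask = htlb_alloc_mask(h);

	/* Keep flags the caller explicitly asked for. */
	modified_mask |= (gfp_mask & __GFP_NOWARN);

	switch (reason) {
	case MR_MEMORY_HOTPLUG:
	case MR_MEMORY_FAILURE:
	case MR_SYSCALL:
	case MR_MEMPOLICY_MBIND:
		/* These callers may break the per-node hugetlb pool. */
		break;
	default:
		/* Everyone else must stay on the source node. */
		modified_mask |= __GFP_THISNODE;
		break;
	}

	return modified_mask;
}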

Does that make sense?

I previously considered passing the MR_reason argument to htlb_modify_alloc_mask(), which is only used by hugetlb migration. But in alloc_hugetlb_folio_nodemask(), if there are available hugetlb folios on other nodes, we should still allow the migration to use them, since that does not break the per-node hugetlb pool.

That's why I only change the gfp_mask when allocating a new hugetlb folio for the migration, which is what can break the pool.

struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
		nodemask_t *nmask, gfp_t gfp_mask)
{
	spin_lock_irq(&hugetlb_lock);
	if (available_huge_pages(h)) {
		struct folio *folio;

		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
						preferred_nid, nmask);
		if (folio) {
			spin_unlock_irq(&hugetlb_lock);
			return folio;
		}
	}
......
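
For completeness, the remainder of the function with this change would then look roughly like the following (condensed from the hunk quoted above, not the exact patch; 'reason' is the new argument the patch introduces):

	spin_unlock_irq(&hugetlb_lock);

	/*
	 * The dequeue above may use any allowed node: taking an already free
	 * folio does not grow any per-node pool. Only the fallback allocation
	 * of a fresh folio below is pinned to the source node, unless the
	 * migration reason justifies breaking the pool.
	 */
	if (!(gfp_mask & __GFP_THISNODE)) {
		switch (reason) {
		case MR_MEMORY_HOTPLUG:
		case MR_MEMORY_FAILURE:
		case MR_SYSCALL:
		case MR_MEMPOLICY_MBIND:
			break;
		default:
			gfp_mask |= __GFP_THISNODE;
			break;
		}
	}

	/* Fall back to allocating a fresh folio with the final mask. */
	return alloc_migrate_hugetlb_folio(h, gfp_mask, preferred_nid, nmask);
}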



