On 2025/1/8 10:29, Andrew Morton wrote:
> On Wed, 8 Jan 2025 10:16:49 +0800 Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx> wrote:
>
>> +static struct folio *shmem_swap_alloc_folio(struct inode *inode,
>> +		struct vm_area_struct *vma, pgoff_t index,
>> +		swp_entry_t entry, int order, gfp_t gfp)
>> +{
>> +	struct shmem_inode_info *info = SHMEM_I(inode);
>> +	struct folio *new;
>> +	void *shadow;
>> +	int nr_pages;
>> +
>> +	/*
>> +	 * We have arrived here because our zones are constrained, so don't
>> +	 * limit chance of success by further cpuset and node constraints.
>> +	 */
>> +	gfp &= ~GFP_CONSTRAINT_MASK;
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> +	if (order > 0) {
>> +		gfp_t huge_gfp = vma_thp_gfp_mask(vma);
>> +
>> +		gfp = limit_gfp_mask(huge_gfp, gfp);
>> +	}
>> +#endif
>> +
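
(A note on the gfp manipulation quoted above: GFP_CONSTRAINT_MASK is __GFP_HARDWALL | __GFP_THISNODE, so clearing it drops the cpuset and NUMA-node placement restrictions before the allocation is attempted, and limit_gfp_mask() then keeps the THP gfp flags from allowing more reclaim/IO than the caller's original mask did. Below is a minimal userspace sketch of that narrowing; the bit values and the limit_gfp_mask() body are simplified stand-ins for illustration, not the kernel's definitions.)

/* Simplified userspace sketch -- bit values and limit_gfp_mask()
 * are illustrative stand-ins, not the kernel's definitions. */
#include <stdio.h>

typedef unsigned int gfp_t;

#define __GFP_HARDWALL		(1u << 0)	/* stand-in bit values */
#define __GFP_THISNODE		(1u << 1)
#define __GFP_IO		(1u << 2)
#define __GFP_FS		(1u << 3)
#define __GFP_DIRECT_RECLAIM	(1u << 4)

#define GFP_CONSTRAINT_MASK	(__GFP_HARDWALL | __GFP_THISNODE)

/* Keep the THP mask, but never permit more reclaim/IO than the
 * caller's mask allowed (the gist of limit_gfp_mask(), simplified). */
static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
{
	gfp_t allow = __GFP_IO | __GFP_FS | __GFP_DIRECT_RECLAIM;

	return (huge_gfp & ~allow) | (huge_gfp & limit_gfp & allow);
}

int main(void)
{
	gfp_t gfp = __GFP_HARDWALL | __GFP_IO;	/* caller's constrained mask */
	gfp_t huge_gfp = __GFP_IO | __GFP_FS | __GFP_DIRECT_RECLAIM;

	gfp &= ~GFP_CONSTRAINT_MASK;		/* drop cpuset/node restrictions */
	gfp = limit_gfp_mask(huge_gfp, gfp);

	printf("final gfp: %#x\n", gfp);	/* 0x4: only __GFP_IO survives */
	return 0;
}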
> Can we do this?
>
> --- a/mm/shmem.c~mm-shmem-skip-swapcache-for-swapin-of-synchronous-swap-device-fix
> +++ a/mm/shmem.c
> @@ -1978,16 +1978,14 @@ static struct folio *shmem_swap_alloc_fo
>
>  	/*
>  	 * We have arrived here because our zones are constrained, so don't
> -	 * limit chance of success by further cpuset and node constraints.
> +	 * limit chance of success with further cpuset and node constraints.
>  	 */
>  	gfp &= ~GFP_CONSTRAINT_MASK;
> -#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -	if (order > 0) {
> +	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && order > 0) {
>  		gfp_t huge_gfp = vma_thp_gfp_mask(vma);
>
>  		gfp = limit_gfp_mask(huge_gfp, gfp);
>  	}
> -#endif
>  	new = shmem_alloc_folio(gfp, order, info, index);
>  	if (!new)
> _
Yes, looks good to me. Thanks.
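
(For anyone unfamiliar with the idiom Andrew suggests: IS_ENABLED(), from include/linux/kconfig.h, expands to a constant 1 or 0, so the THP branch is still parsed and type-checked when CONFIG_TRANSPARENT_HUGEPAGE is off and is then discarded as dead code, whereas #ifdef hides the code from those builds entirely. A minimal userspace sketch of the mechanism follows; it is a simplified version of the kernel's macro chain and ignores the =m case.)

/* Simplified stand-in for the IS_ENABLED() machinery in
 * include/linux/kconfig.h (the real one also handles =m). */
#include <stdio.h>

#define CONFIG_TRANSPARENT_HUGEPAGE 1	/* comment out to simulate THP=n */

#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define __is_defined(x)			___is_defined(x)
#define ___is_defined(val)		____is_defined(__ARG_PLACEHOLDER_##val)
#define ____is_defined(arg1_or_junk)	__take_second_arg(arg1_or_junk 1, 0)
#define IS_ENABLED(option)		__is_defined(option)

int main(void)
{
	/* Both arms are compiled and type-checked; the dead one is
	 * eliminated as an if (0) / if (1) branch at compile time. */
	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
		printf("THP branch kept\n");
	else
		printf("THP branch eliminated\n");
	return 0;
}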