Hey Ryan,
On 2024/7/4 21:58, Ryan Roberts wrote:
Then for tmpfs, which doesn't support non-PMD-sizes yet, we just always use the
PMD-size control for decisions.
I'm also really struggling with the concept of shmem_is_huge() existing alongside
shmem_allowable_huge_orders(). Surely this all needs to be refactored into
shmem_allowable_huge_orders()?
I understand. But for now they serve different purposes: shmem_is_huge() is used
for the top-level (PMD-sized) huge check, for *tmpfs* and anon shmem; whereas
shmem_allowable_huge_orders() is only used to check the per-size huge orders for
anon shmem (tmpfs is excluded for now). However, as I plan to add mTHP support
for tmpfs, I think we can do some cleanups then.
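
To make the split concrete, below is a toy model (plain userspace C; all names
such as toy_allowable_orders and the *_mask parameters are made up here, they
are not the kernel symbols) of how the per-size masks combine for anon shmem:
the top-level answer (what shmem_is_huge() returns today) only feeds the orders
whose per-size control is set to "inherit". The within_size handling is left
out to keep it short.

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model only: always_mask/madvise_mask/inherit_mask stand in for the
 * huge_shmem_orders_* bitmaps, global_huge for today's shmem_is_huge()
 * result. The within_size check is omitted for brevity.
 */
static unsigned long toy_allowable_orders(unsigned long always_mask,
					  unsigned long madvise_mask,
					  unsigned long inherit_mask,
					  bool vm_hugepage, bool global_huge)
{
	unsigned long mask = always_mask;

	if (vm_hugepage)
		mask |= madvise_mask;
	if (global_huge)
		mask |= inherit_mask;

	return mask;
}

int main(void)
{
	/* e.g. order 4 set to "always", order 9 (PMD with 4K pages) set to "inherit" */
	unsigned long allowed = toy_allowable_orders(1UL << 4, 0, 1UL << 9,
						     false, true);

	printf("allowed order bitmap: 0x%lx\n", allowed);	/* prints 0x210 */
	return 0;
}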
+	/* Allow mTHP that will be fully within i_size. */
+	order = highest_order(within_size_orders);
+	while (within_size_orders) {
+		index = round_up(index + 1, order);
+		i_size = round_up(i_size_read(inode), PAGE_SIZE);
+		if (i_size >> PAGE_SHIFT >= index) {
+			mask |= within_size_orders;
+			break;
+		}
+
+		order = next_order(&within_size_orders, order);
+	}
+
+	if (vm_flags & VM_HUGEPAGE)
+		mask |= READ_ONCE(huge_shmem_orders_madvise);
+
+	if (global_huge)
Perhaps I've misunderstood global_huge, but I think it's just the return value
from shmem_is_huge()?

Yes.

But you're also using shmem_huge directly in this function. I find it all
rather confusing.
I think I have explained above why these logics are needed. mTHP support for
shmem has only just started (tmpfs is still in progress), so I will make this
clearer in the following patches.
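
Just to sketch the direction of that cleanup, here is another toy sketch (not
code from this series; the helper name, the is_tmpfs flag and TOY_PMD_ORDER are
all illustrative): once everything goes through a single helper, tmpfs would
simply be the case that only ever sees the PMD-size decision until it grows
per-size controls, while anon shmem is filtered by the per-size order bitmap.

#include <stdbool.h>

#define TOY_PMD_ORDER	9	/* illustrative value, not HPAGE_PMD_ORDER */

/*
 * Hypothetical shape of a unified helper (toy code, names made up here):
 * tmpfs keeps following the top-level/PMD-size decision for now,
 * anon shmem is restricted by the per-size order bitmap.
 */
unsigned long toy_suitable_orders(bool is_tmpfs, bool global_huge,
				  unsigned long per_size_mask,
				  unsigned long requested_orders)
{
	unsigned long mask;

	if (is_tmpfs)
		mask = global_huge ? (1UL << TOY_PMD_ORDER) : 0;
	else
		mask = per_size_mask;

	return requested_orders & mask;
}

For example, toy_suitable_orders(true, true, 0, ~0UL) would hand back only the
PMD order, which is the "tmpfs always uses the PMD-size control" behaviour
discussed above.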
OK as long as you have a plan for the clean up, that's good enough for me.
Can I continue to push the following patch [1]? When the other types of shmem
mTHP are supported, we can do the cleanups uniformly.
[1]
https://lore.kernel.org/linux-mm/20240702023401.41553-1-libang.li@xxxxxxxxxxxx/
Thanks,
Bang