On 2024/9/26 22:58, Matthew Wilcox wrote:
On Thu, Sep 26, 2024 at 10:20:54PM +0800, Kefeng Wang wrote:
On 2024/9/26 21:52, Matthew Wilcox wrote:
On Thu, Sep 26, 2024 at 10:38:34AM +0200, Pankaj Raghav (Samsung) wrote:
So this is why I don't use mapping_set_folio_order_range() here, but
correct me if I am wrong.
Yeah, the inode is active here, as the max folio size is decided based on
the write size, so mapping_set_folio_order_range() will probably not be
a safe option.
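For reference, mapping_set_large_folios() is just the full-range case of
mapping_set_folio_order_range(), and both rewrite the folio-order bits in
mapping->flags in place, which is why calling either of them on a mapping
that is already in use is dubious. Roughly, paraphrasing the helpers in
include/linux/pagemap.h (the exact bounds checking may differ between
kernel versions):

/* Sketch of the helpers discussed above, paraphrased from pagemap.h. */
static inline void mapping_set_folio_order_range(struct address_space *mapping,
						 unsigned int min,
						 unsigned int max)
{
	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
		return;

	if (min > MAX_PAGECACHE_ORDER)
		min = MAX_PAGECACHE_ORDER;
	if (max > MAX_PAGECACHE_ORDER)
		max = MAX_PAGECACHE_ORDER;
	if (max < min)
		max = min;

	/* Rewrites the min/max folio-order bits in mapping->flags. */
	mapping->flags = (mapping->flags & ~AS_FOLIO_ORDER_MASK) |
			 (min << AS_FOLIO_ORDER_MIN) |
			 (max << AS_FOLIO_ORDER_MAX);
}

/* Allow any folio order from 0 up to the page cache maximum. */
static inline void mapping_set_large_folios(struct address_space *mapping)
{
	mapping_set_folio_order_range(mapping, 0, MAX_PAGECACHE_ORDER);
}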
You really are all making too much of this. Here's the patch I think we
need:
-	mapping_set_large_folios(inode->i_mapping);
+	if (sbinfo->huge)
+		mapping_set_large_folios(inode->i_mapping);
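For context, this presumably targets the unconditional
mapping_set_large_folios() call made while shmem sets up a new inode's
mapping (__shmem_get_inode() in mm/shmem.c in current trees), so the
result would look roughly like the sketch below; the surrounding code is
approximate:

	/* In __shmem_get_inode(), roughly: */
	struct shmem_sb_info *sbinfo = SHMEM_SB(sb);
	...
	/*
	 * Only mark the mapping as large-folio capable when the mount
	 * allows huge pages at all (sbinfo->huge != SHMEM_HUGE_NEVER).
	 */
	if (sbinfo->huge)
		mapping_set_large_folios(inode->i_mapping);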
But it can't solve all issues, e.g. a mount with huge=SHMEM_HUGE_WITHIN_SIZE, or
The page cache will not create folios which overhang the end of the file
by more than the minimum folio size for that mapping. So this is wrong.
Sorry for the late reply; I'm not very familiar with this part, and will
test it after I'm back in the office in the next few days.
mount with SHMEM_HUGE_ALWAYS + runtime SHMEM_HUGE_DENY
That's a tweak to this patch, not a fundamental problem with it.
and the above change will break a mount with SHMEM_HUGE_NEVER + runtime SHMEM_HUGE_FORCE
Likewise.
But SHMEM_HUGE_DENY/FORCE can be changed at runtime, and I haven't found
a better way to fix this (see the sketch below for why an
inode-creation-time check is still not enough); any further suggestions
would be appreciated, thanks.
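To make the problem concrete, even a hypothetical helper that also
consults the runtime setting at inode-creation time, along the lines of
the sketch below (shmem_huge is the variable behind
/sys/kernel/mm/transparent_hugepage/shmem_enabled; the helper itself is
not existing code), would only capture the value at that moment:

/*
 * Hypothetical sketch, not existing code: decide at inode creation
 * whether the mapping may ever use large folios, combining the
 * per-mount option with the current runtime override.
 */
static bool shmem_allow_large_folios(struct shmem_sb_info *sbinfo)
{
	if (shmem_huge == SHMEM_HUGE_DENY)	/* runtime "deny" */
		return false;
	if (shmem_huge == SHMEM_HUGE_FORCE)	/* runtime "force" */
		return true;
	return sbinfo->huge != SHMEM_HUGE_NEVER;	/* mount option */
}

A later write of "deny" or "force" to shmem_enabled would still not be
reflected in mappings created before the change, which is exactly the
open problem above.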