On Mon, Sep 23, 2024 at 09:39:07AM +0800, Kefeng Wang wrote:
> On 2024/9/22 8:35, Matthew Wilcox wrote:
> > On Fri, Sep 20, 2024 at 10:36:54PM +0800, Kefeng Wang wrote:
> > > tmpfs supports large folios, but there are several configurable
> > > options to enable/disable large folio allocation, and for
> > > huge=within_size a large folio is only allowed if it fits fully
> > > within i_size. This causes a performance issue when performing
> > > writes without large folios; the issue is similar to the one fixed
> > > by commit 4e527d5841e2 ("iomap: fault in smaller chunks for
> > > non-large folio mappings").
> >
> > No.  What's wrong with my earlier suggestion?
>
> tmpfs has mount options (never/always/within_size/madvise) for large
> folios, and also the sysfs file
> /sys/kernel/mm/transparent_hugepage/shmem_enabled to deny/force large
> folios at runtime. As I replied in v1, I think using it would break
> the rules of mapping_set_folio_order_range():
>
> "Do not tune it based on, eg, i_size."
> --- for tmpfs, whether to use a large folio is chosen based on i_size
>
> "Context: This should not be called while the inode is active as it is
> non-atomic."
> --- during a write, the inode is active
>
> So this is why I don't use mapping_set_folio_order_range() here, but
> correct me if I am wrong.

Yeah, the inode is active here, and the max folio size is decided based
on the write size, so mapping_set_folio_order_range() will probably not
be a safe option.
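
For reference, a minimal sketch of the alternative being discussed: size
the fault-in chunk to what the mapping can actually allocate, as commit
4e527d5841e2 did for iomap, instead of retuning the mapping's folio
order limits at write time. mapping_max_folio_size() is the existing
helper in <linux/pagemap.h>; large_folio_allowed() below is a
hypothetical stand-in for whatever tmpfs-specific check (mount option,
shmem_enabled sysfs knob, i_size for huge=within_size) would gate large
folio allocation, since the exact predicate isn't shown in this thread.

#include <linux/pagemap.h>
#include <linux/uio.h>

/* Hypothetical predicate: would the shmem allocation path actually
 * give us a large folio for this write? Not an existing helper. */
static bool large_folio_allowed(struct address_space *mapping,
				loff_t pos, size_t count);

static size_t shmem_write_chunk(struct address_space *mapping,
				loff_t pos, size_t count)
{
	/* Large folios disabled (or not allowed for this i_size):
	 * fault in one page at a time, as in 4e527d5841e2. */
	if (!large_folio_allowed(mapping, pos, count))
		return PAGE_SIZE;
	/* Otherwise fault in up to the largest supported folio. */
	return mapping_max_folio_size(mapping);
}

The write loop would then bound each iteration with something like
bytes = min(chunk - (pos & (chunk - 1)), iov_iter_count(i)), so a
mapping that can only take order-0 folios never faults in more than
one page per copy. This avoids touching the folio order range on an
active inode entirely.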