On Tue 20-02-24 10:26:48, Daniel Gomez wrote:
> On Mon, Feb 19, 2024 at 02:15:47AM -0800, Hugh Dickins wrote:
> I'm uncertain when we may want to be more elastic. In the case of XFS
> with iomap and support for large folios, for instance, we are 'less'
> elastic than here. So, what exactly is the rationale behind wanting
> shmem to be 'more elastic'?

Well, if you allocated space in larger chunks - as is the case with ext4
and the bigalloc feature - you would be similarly 'elastic' as tmpfs with
large folio support... So what matters here is simply the granularity
with which the underlying space is allocated. And for tmpfs the
underlying space happens to be the page cache.

> If we ever move shmem to large folios [1], and we use them in an
> opportunistic way, then we are going to be more elastic in the default
> path.
>
> [1] https://lore.kernel.org/all/20230919135536.2165715-1-da.gomez@xxxxxxxxxxx
>
> In addition, I think that having this block granularity can benefit
> quota support and the reclaim path. For example, in the generic/100
> fstest, around 26M of data is reported as 1G of used disk space when
> using tmpfs with huge pages.

And I'd argue this is a desirable thing. If 1G worth of pages is attached
to the inode, then quota should account 1G of usage even though you've
written just 26MB of data to the file. Quota is about constraining used
resources, not about "how much did I write to the file".
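To make the distinction concrete, a minimal user-space sketch comparing
the two notions of usage for a single file - bytes written versus space
actually pinned, which grows in units of the allocation granularity (a
whole huge page on tmpfs mounted with huge=always). The mount point and
file name below are just illustrative:

#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	struct stat st;

	/* Hypothetical file on a tmpfs mount, e.g. with huge=always. */
	if (stat("/mnt/tmpfs/testfile", &st) != 0) {
		perror("stat");
		return 1;
	}
	/* Bytes written to the file. */
	printf("st_size:         %lld bytes\n", (long long)st.st_size);
	/* Space actually consumed; st_blocks counts 512-byte units.
	 * This is the quantity quota constrains. */
	printf("st_blocks * 512: %lld bytes\n",
	       (long long)st.st_blocks * 512);
	return 0;
}

Writing a few KB to such a file can report a full 2MB in st_blocks - that
is the page cache the inode is actually holding, and hence what quota
should charge.

								Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR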