Re: [PATCH v5 0/6] add mTHP support for anonymous shmem

On 16.07.24 15:11, Daniel Gomez wrote:
On Tue, Jul 09, 2024 at 09:28:48AM GMT, Ryan Roberts wrote:
On 07/07/2024 17:39, Daniel Gomez wrote:
On Fri, Jul 05, 2024 at 10:59:02AM GMT, David Hildenbrand wrote:
On 05.07.24 10:45, Ryan Roberts wrote:
On 05/07/2024 06:47, Baolin Wang wrote:


On 2024/7/5 03:49, Matthew Wilcox wrote:
On Thu, Jul 04, 2024 at 09:19:10PM +0200, David Hildenbrand wrote:
On 04.07.24 21:03, David Hildenbrand wrote:
shmem has two uses:

      - MAP_ANONYMOUS | MAP_SHARED (this patch set)
      - tmpfs

For the second use case we don't want controls *at all*, we want the
same heuristics used for all other filesystems to apply to tmpfs.
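
A minimal userspace sketch of those two uses (illustrative only, not from the
patch set; /dev/shm is assumed here to be a tmpfs mount, and error handling is
omitted):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2 * 1024 * 1024;

	/* Use 1: MAP_ANONYMOUS | MAP_SHARED -- anonymous shmem, no file involved. */
	char *anon = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_ANONYMOUS | MAP_SHARED, -1, 0);
	memset(anon, 0xaa, len);

	/* Use 2: a regular file on tmpfs, written and/or mmap()ed. */
	int fd = open("/dev/shm/example", O_RDWR | O_CREAT | O_TRUNC, 0600);
	ftruncate(fd, len);
	char *file = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	memset(file, 0xbb, len);

	munmap(anon, len);
	munmap(file, len);
	close(fd);
	unlink("/dev/shm/example");
	return 0;
}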

As discussed in the MM meeting, Hugh had a different opinion on that.

FWIW, I just recalled that I wrote a quick summary:

https://lkml.kernel.org/r/f1783ff0-65bd-4b2b-8952-52b6822a0835@xxxxxxxxxx

I believe the meetings are recorded as well, but I never looked at the recordings.

That's not what I understood Hugh to mean.  To me, it seemed that Hugh
was expressing an opinion on using shmem as shmem, not on using it as
tmpfs.

If I misunderstood Hugh, well, I still disagree.  We should not have
separate controls for this.  tmpfs is just not that special.

I wasn't at the meeting that's being referred to, but I thought we previously
agreed that tmpfs *is* special because in some configurations it's not backed by
swap and so is locked in RAM?

There are multiple things to that, like:

* Machines only having limited/no swap configured
* tmpfs can be configured to never go to swap
* memfd/tmpfs files getting used purely for mmap(): there is no real
   difference to MAP_ANON|MAP_SHARED besides the processes we share that
   memory with.

Especially when it comes to memory waste concerns and access behavior in
some cases, tmpfs behaves much more like anonymous memory. But there are for
sure other use cases where tmpfs is not that special.
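
To make the third item in that list concrete, here is a small illustrative
sketch (not David's code): a memfd used purely via mmap() behaves like
MAP_ANON|MAP_SHARED memory, except that the fd determines which processes
share it. The "mmap-only" name below is arbitrary.

#define _GNU_SOURCE		/* for memfd_create() (glibc >= 2.27) */
#include <assert.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	size_t len = 2 * 1024 * 1024;
	int fd = memfd_create("mmap-only", 0);

	ftruncate(fd, len);
	int *mem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	if (fork() == 0) {
		mem[0] = 42;		/* child writes through the mapping... */
		_exit(0);
	}
	wait(NULL);
	assert(mem[0] == 42);		/* ...parent observes it, just like
					   anon shared memory */
	munmap(mem, len);
	close(fd);
	return 0;
}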

Having controls to select the allowable folio order allocations for
tmpfs does not address any of these issues. The suggested filesystem
approach [1] involves allocating in larger chunks (higher orders), but always
the same total amount you would allocate when using order-0 folios.

Well you can't know that you will never allocate more. If you allocate a 2M

In the fs large folio approach implementation [1], the allocation of a 2M folio (or
any non-order-0 folio) occurs when the size of the write/fallocate is 2M (and the
index is aligned).
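
Roughly, that rule can be sketched as follows (a reconstruction from the
description above, not the actual code in [1]): use the largest order that the
write/fallocate range fully covers and whose start index is naturally aligned,
so the footprint never exceeds what order-0 folios would have used.

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PMD_ORDER	9			/* 2M folios with 4K base pages */

/* Largest order fully covered by the write and naturally aligned at index. */
static unsigned int folio_order_for_write(unsigned long index, size_t len)
{
	unsigned int order = PMD_ORDER;

	while (order &&
	       ((index & ((1UL << order) - 1)) ||	/* index not aligned   */
		len < (PAGE_SIZE << order)))		/* write doesn't cover */
		order--;

	return order;
}

int main(void)
{
	/* Aligned 2M write -> order 9 (2M folio); 4K write -> order 0. */
	printf("%u %u\n",
	       folio_order_for_write(0, 2 * 1024 * 1024),
	       folio_order_for_write(3, 4096));
	return 0;
}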

I don't have time right now to follow the discussion in detail here (I thought we had a meeting to discuss that and received guidance from Hugh?), but I'll point out two things:

(1) We need a reasonable model for handling/allocating large folios
    during page faults. shmem/tmpfs can be used just like anon-shmem if
    you simply only mmap that thing (hello VMs!).

(2) Hugh gave (IMHO) clear feedback during the meeting on how he thinks we
    should approach large folios in shmem.

Maybe I got (2) all wrong and people can point out all the issues in my summary from the meeting.

Otherwise, if people don't want to accept the result from that meeting, we need further guidance from Hugh.

--
Cheers,

David / dhildenb




