Re: [PATCH v2] mm: anonymous shared memory naming

On Wed, Nov 9, 2022 at 5:11 AM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> >>
> >>>     anon_shmem = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
> >>>                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
> >>>     /* Name the segment: "MY-NAME" */
> >>>     rv = prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME,
> >>>                anon_shmem, SIZE, "MY-NAME");
> >>>
> >>> cat /proc/<pid>/maps (and smaps):
> >>> 7fc8e2b4c000-7fc8f2b4c000 rw-s 00000000 00:01 1024 [anon_shmem:MY-NAME]
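
(For anyone who wants to try this: below is the snippet above fleshed
out into a standalone program. The headers, the fallback defines, the
error handling and the 256M size are mine; the naming call itself is
what this patch enables for MAP_SHARED | MAP_ANONYMOUS mappings.)

#include <stdio.h>
#include <sys/mman.h>
#include <sys/prctl.h>
#include <unistd.h>

#ifndef PR_SET_VMA
#define PR_SET_VMA		0x53564d41
#define PR_SET_VMA_ANON_NAME	0
#endif

#define SIZE	(256UL << 20)	/* arbitrary 256M region for the demo */

int main(void)
{
	void *anon_shmem = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
				MAP_SHARED | MAP_ANONYMOUS, -1, 0);

	if (anon_shmem == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Name the segment: "MY-NAME" (needs a kernel with this patch) */
	if (prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME,
		  (unsigned long)anon_shmem, SIZE, "MY-NAME"))
		perror("prctl(PR_SET_VMA_ANON_NAME)");

	printf("check /proc/%d/maps\n", getpid());
	pause();	/* keep the mapping alive while you look */
	return 0;
}
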
> >>
> >> What would it have looked like before? Just no additional information?
> >
> > Before:
> >
> > 7fc8e2b4c000-7fc8f2b4c000 rw-s 00000000 00:01 1024 /dev/zero (deleted)
>
> Can we add that to the patch description?
>
> >>
> >>>
> >>> Signed-off-by: Pasha Tatashin <pasha.tatashin@xxxxxxxxxx>
> >>> ---
> >>
> >>
> >> [...]
> >>
> >>> diff --git a/include/linux/mm.h b/include/linux/mm.h
> >>> index 8bbcccbc5565..06b6fb3277ab 100644
> >>> --- a/include/linux/mm.h
> >>> +++ b/include/linux/mm.h
> >>> @@ -699,8 +699,10 @@ static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)
> >>>     * paths in userfault.
> >>>     */
> >>>    bool vma_is_shmem(struct vm_area_struct *vma);
> >>> +bool vma_is_anon_shmem(struct vm_area_struct *vma);
> >>>    #else
> >>>    static inline bool vma_is_shmem(struct vm_area_struct *vma) { return false; }
> >>> +static inline bool vma_is_anon_shmem(struct vm_area_struct *vma) { return false; }
> >>>    #endif
> >>>
> >>>    int vma_is_stack_for_current(struct vm_area_struct *vma);
> >>> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> >>> index 500e536796ca..08d8b973fb60 100644
> >>> --- a/include/linux/mm_types.h
> >>> +++ b/include/linux/mm_types.h
> >>> @@ -461,21 +461,11 @@ struct vm_area_struct {
> >>>         * For areas with an address space and backing store,
> >>>         * linkage into the address_space->i_mmap interval tree.
> >>>         *
> >>> -      * For private anonymous mappings, a pointer to a null terminated string
> >>> -      * containing the name given to the vma, or NULL if unnamed.
> >>>         */
> >>> -
> >>> -     union {
> >>> -             struct {
> >>> -                     struct rb_node rb;
> >>> -                     unsigned long rb_subtree_last;
> >>> -             } shared;
> >>> -             /*
> >>> -              * Serialized by mmap_sem. Never use directly because it is
> >>> -              * valid only when vm_file is NULL. Use anon_vma_name instead.
> >>> -              */
> >>> -             struct anon_vma_name *anon_name;
> >>> -     };
> >>> +     struct {
> >>> +             struct rb_node rb;
> >>> +             unsigned long rb_subtree_last;
> >>> +     } shared;
> >>>
> >>
> >> So that effectively grows the size of vm_area_struct. Hm. I'd really
> >> prefer to keep this specific to actual anonymous memory, not extending
> >> it to anonymous files.
> >
> > It grows only when CONFIG_ANON_VMA_NAME=y, otherwise it stays the same
> > as before. Are you suggesting adding another config specifically for
> > shared memory? I wonder if we could add a union for some other part of
> > vm_area_struct where anon and file cannot be used together.
>
> In practice, all distributions will enable CONFIG_ANON_VMA_NAME in the
> long term I guess. So if we could avoid increasing the VMA size, that
> would be great.
>
> >
> >> Do we have any *actual* users where we don't have an alternative? I
> >> doubt that this is really required.
> >>
> >> The simplest approach seems to be to use memfd instead of MAP_SHARED |
> >> MAP_ANONYMOUS. __NR_memfd_create can be passed a name and you get what
> >> you propose here effectively already. Or does anything speak against it?
> >
> > For our use case the above does not work. We are working on highly
> > paravirtualized virtual machines. The VMM maps VM memory as anonymous
> > shared memory (not private because VMM is sandboxed and drivers are
> > running in their own processes). However, the VM tells back to the VMM
> > how parts of the memory are actually used by the guest, how each of
> > the segments should be backed (i.e. 4K pages, 2M pages), and some
> > other information about the segments. The naming allows us to monitor
> > the effective memory footprint for each of these segments from the
> > host without looking inside the guest.
>
> That's a reasonable use case, although naive me would worry about #VMA
> limits etc.
>
> Can you add some condensed use-case explanation to the patch
> description? (IOW, memfd cannot be used because parts of the memfd are
> required to receive distinct names)
>
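
Right -- to make that concrete, here is a rough illustration of what the
VMM side does (the names, sizes and split points below are made up for
the example): one anonymous shared mapping, with distinct names applied
to the ranges the guest reports as being used differently. A memfd would
carry a single name for the whole file, not per-range names.

#include <stddef.h>
#include <sys/mman.h>
#include <sys/prctl.h>

#ifndef PR_SET_VMA
#define PR_SET_VMA		0x53564d41
#define PR_SET_VMA_ANON_NAME	0
#endif

static void name_range(void *base, size_t off, size_t len, const char *name)
{
	/* Each named range shows up as its own entry in /proc/<pid>/maps */
	prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME,
	      (unsigned long)base + off, len, name);
}

int main(void)
{
	size_t guest_size = 4UL << 30;	/* hypothetical 4G of guest memory */
	void *guest_mem = mmap(NULL, guest_size, PROT_READ | PROT_WRITE,
			       MAP_SHARED | MAP_ANONYMOUS, -1, 0);

	if (guest_mem == MAP_FAILED)
		return 1;

	name_range(guest_mem, 0,	 1UL << 30, "guest:dma");
	name_range(guest_mem, 1UL << 30, 3UL << 30, "guest:ram-2M-backed");
	return 0;
}
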
> I'd appreciate if we could avoid increasing the VMA size; but in any case

I've explored ways not to increase the VMA size, but there is no
obvious solution here. Let's keep it as is for now; in the future, if
we add fields that are used only by anonymous memory, we can explore
placing this field in a union with them.
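
To make the trade-off concrete, here is a rough standalone sketch (stub
types, not the real kernel structures) of why moving anon_name out of
the union with "shared" costs one pointer per VMA on a 64-bit build
when CONFIG_ANON_VMA_NAME=y:

#include <stdio.h>

struct rb_node_stub { unsigned long parent_color; void *left, *right; };

struct layout_before {
	union {
		struct {
			struct rb_node_stub rb;
			unsigned long rb_subtree_last;
		} shared;
		void *anon_name;	/* overlapped "shared": only valid
					 * when vm_file == NULL */
	};
};

struct layout_after {
	struct {
		struct rb_node_stub rb;
		unsigned long rb_subtree_last;
	} shared;
	void *anon_name;		/* separate field: anon shmem VMAs
					 * have a vm_file, so it can no
					 * longer share space with "shared" */
};

int main(void)
{
	printf("before: %zu bytes, after: %zu bytes\n",
	       sizeof(struct layout_before), sizeof(struct layout_after));
	return 0;
}

On x86_64 this prints 32 vs 40 bytes for the fragment above, i.e. one
extra pointer per VMA.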

>
> Acked-by: David Hildenbrand <david@xxxxxxxxxx>

Thank you. I will soon send a new version with support for naming memfd
memory as well.

Pasha


