Re: [PATCH 2/5] fs/procfs: implement efficient VMA querying API for /proc/<pid>/maps

On Mon, May 6, 2024 at 1:35 PM Arnaldo Carvalho de Melo <acme@xxxxxxxxxx> wrote:
>
> On Mon, May 06, 2024 at 11:41:43AM -0700, Andrii Nakryiko wrote:
> > On Mon, May 6, 2024 at 6:58 AM Arnaldo Carvalho de Melo <acme@xxxxxxxxxx> wrote:
> > >
> > > On Sat, May 04, 2024 at 02:50:31PM -0700, Andrii Nakryiko wrote:
> > > > On Sat, May 4, 2024 at 8:28 AM Greg KH <gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
> > > > > On Fri, May 03, 2024 at 05:30:03PM -0700, Andrii Nakryiko wrote:
> > > > > > Note also that fetching the VMA name (e.g., backing file path, or
> > > > > > special hard-coded or user-provided names) is optional, just like the
> > > > > > build ID. If the user sets vma_name_size to zero, the kernel code won't
> > > > > > attempt to retrieve it, saving resources.
> > >
> > > > > > Signed-off-by: Andrii Nakryiko <andrii@xxxxxxxxxx>
> > >
> > > > > Where is the userspace code that uses this new api you have created?
> > >
> > > > So I added to patch #5 a faithful comparison of the existing
> > > > /proc/<pid>/maps interface vs. the new ioctl() API for solving a
> > > > common problem (as described above). The plan is to put it in the
> > > > mentioned blazesym library at the very least.
> > > >
> > > > I'm sure perf would benefit from this as well (cc'ed Arnaldo and
> > > > linux-perf-user), as they also need to do stack symbolization.
> > >
> > > At some point, when BPF iterators became a thing, we thought about
> > > using BPF to synthesize PERF_RECORD_MMAP2 records for pre-existing
> > > maps; IIRC Jiri did some experimentation, but I lost track of it. The
> > > layout is as in uapi/linux/perf_event.h:
> > >
> > >         /*
> > >          * The MMAP2 records are an augmented version of MMAP, they add
> > >          * maj, min, ino numbers to be used to uniquely identify each mapping
> > >          *
> > >          * struct {
> > >          *      struct perf_event_header        header;
> > >          *
> > >          *      u32                             pid, tid;
> > >          *      u64                             addr;
> > >          *      u64                             len;
> > >          *      u64                             pgoff;
> > >          *      union {
> > >          *              struct {
> > >          *                      u32             maj;
> > >          *                      u32             min;
> > >          *                      u64             ino;
> > >          *                      u64             ino_generation;
> > >          *              };
> > >          *              struct {
> > >          *                      u8              build_id_size;
> > >          *                      u8              __reserved_1;
> > >          *                      u16             __reserved_2;
> > >          *                      u8              build_id[20];
> > >          *              };
> > >          *      };
> > >          *      u32                             prot, flags;
> > >          *      char                            filename[];
> > >          *      struct sample_id                sample_id;
> > >          * };
> > >          */
> > >         PERF_RECORD_MMAP2                       = 10,
> > >
> > >  *   PERF_RECORD_MISC_MMAP_BUILD_ID      - PERF_RECORD_MMAP2 event
> > >
> > > As perf.data files can be used for many purposes, we want them all, so we
> >
> > ok, so because you want them all and you don't know ahead of time which
> > VMAs will be useful, it's a different problem. BPF iterators will be
> > faster purely due to avoiding the binary -> text -> binary conversion
> > path, but other than that you'll still retrieve all VMAs.
>
> But not using tons of syscalls to parse text data from /proc.

In terms of syscall *count*, you win with the text interface: each 4KB
read() batches many lines of /proc/<pid>/maps, so fewer syscalls are
needed overall. But the cost of each syscall plus the amount of
user-space text processing is a different matter. My benchmark in perf
(see the patch #5 discussion) suggests that even with more ioctl()
syscalls, perf would win here.
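
To make the batching point concrete, here is a minimal sketch (my
illustration, not perf's code) that only counts how many read()
syscalls it takes to consume /proc/self/maps:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	ssize_t n;
	long reads = 0;
	int fd = open("/proc/self/maps", O_RDONLY);

	if (fd < 0)
		return 1;
	/* each read() returns up to 4KB of text, i.e. many VMA lines
	 * per syscall, which is what keeps the syscall count low */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		reads++;
	close(fd);
	fprintf(stderr, "%ld read() syscalls\n", reads);
	return 0;
}

The syscall count stays low, but each 4KB chunk still has to be
formatted as text by the kernel and re-parsed in user space, which is
where the cost shows up.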

But I also realized that what you really need (I think, correct me if
I'm wrong) is only file-backed VMAs, because all the others aren't that
useful for symbolization. So I'm adding a minimal change to my code that
lets the user pass another query flag to return only file-backed VMAs.
I'm going to try it with the perf code and see how much that helps. I'll
post the results in the patch #5 thread once I have them.
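
Here is a sketch of what that query loop could look like. It is only an
illustration: the struct/ioctl/flag names below (struct procmap_query,
PROCMAP_QUERY, PROCMAP_QUERY_COVERING_OR_NEXT_VMA,
PROCMAP_QUERY_FILE_BACKED_VMA) are modeled on this series and may not
match the final uapi. It also shows the optional-name behavior from the
commit message: vma_name_size == 0 means the kernel won't fetch the
name at all.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>		/* assumed home of the new uapi bits */

int main(void)
{
	struct procmap_query q;
	__u64 next = 0;
	int fd = open("/proc/self/maps", O_RDONLY);

	if (fd < 0)
		return 1;
	for (;;) {
		memset(&q, 0, sizeof(q));
		q.size = sizeof(q);
		q.query_addr = next;
		/* covering-or-next gives forward iteration; the
		 * file-backed flag is the addition discussed above */
		q.query_flags = PROCMAP_QUERY_COVERING_OR_NEXT_VMA |
				PROCMAP_QUERY_FILE_BACKED_VMA;
		q.vma_name_size = 0;	/* skip name retrieval entirely */
		q.build_id_size = 0;	/* likewise, no build ID */
		if (ioctl(fd, PROCMAP_QUERY, &q) < 0)
			break;		/* no more matching VMAs */
		printf("%llx-%llx\n",
		       (unsigned long long)q.vma_start,
		       (unsigned long long)q.vma_end);
		next = q.vma_end;	/* resume right after this VMA */
	}
	close(fd);
	return 0;
}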

>
> > You can still do the same full VMA iteration with this new API, of
> > course, but the advantages are probably smaller, as you'll be retrieving
> > the full set of VMAs regardless (though it would be interesting to
> > compare anyway).
>
> sure, I can't see how it would be faster, but yeah, it would be
> interesting to see what the difference is.

See the patch #5 thread; it seems it's still a bit faster.

>
> > > set up a metadata perf file descriptor to keep receiving new mmaps
> > > while we read /proc/<pid>/maps, to reduce the chance of missing maps,
> > > do it in parallel, etc.:
> > >
> > > ⬢[acme@toolbox perf-tools-next]$ perf record -h 'event synthesis'
> > >
> > >  Usage: perf record [<options>] [<command>]
> > >     or: perf record [<options>] -- <command> [<options>]
> > >
> > >         --num-thread-synthesize <n>
> > >                           number of threads to run for event synthesis
> > >         --synth <no|all|task|mmap|cgroup>
> > >                           Fine-tune event synthesis: default=all
> > >
> > > ⬢[acme@toolbox perf-tools-next]$
> > >
> > > For this specific initial synthesis of everything, the plan (as
> > > mentioned regarding Jiri's experiments) was to use a BPF iterator to
> > > just feed the perf ring buffer with those events. That way userspace
> > > would just receive the usual records it gets when a new mmap is put in
> > > place, and the BPF iterator would feed in the pre-existing mmaps, as
> > > instructed via the perf_event_attr for the perf_event_open syscall.
> > >
> > > For people not wanting BPF, i.e. disabling it altogether in perf or
> > > disabling just BPF skels, then we would fallback to the current method,
> > > or to the one being discussed here when it becomes available.
> > >
> > > One thing to keep in mind is that this iterator must not generate
> > > duplicate records for non-pre-existing mmaps, i.e. we would need some
> > > generation number that would be bumped when asking for such dumps of
> > > pre-existing maps as PERF_RECORD_MMAP2 records.
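
For reference, a minimal sketch of what such an iterator could look
like, using the existing iter/task_vma BPF iterator type with a plain
BPF ring buffer and an illustrative record layout (the real thing would
synthesize PERF_RECORD_MMAP2 into the perf ring buffer, which this does
not attempt):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct vma_event {
	__u32 pid;
	__u64 addr;
	__u64 len;
	__u64 pgoff;
};

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 256 * 1024);
} events SEC(".maps");

SEC("iter/task_vma")
int dump_task_vmas(struct bpf_iter__task_vma *ctx)
{
	struct vm_area_struct *vma = ctx->vma;
	struct task_struct *task = ctx->task;
	struct vma_event e = {};

	if (!vma || !task)
		return 0;
	e.pid = task->tgid;
	e.addr = vma->vm_start;
	e.len = vma->vm_end - vma->vm_start;
	e.pgoff = vma->vm_pgoff;	/* offset into the file, in pages */
	bpf_ringbuf_output(&events, &e, sizeof(e), 0);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";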
> >
> > Looking briefly at struct vm_area_struct, it doesn't seem like the
> > kernel maintains any sort of generation counter (at least not at the
> > vm_area_struct level), so this would be nice to have, I'm sure, but
>
> Yeah, this would be something specific to the bulk "retrieve me the list
> of VMAs" operation: the kernel perf code (or the BPF program that
> generates the PERF_RECORD_MMAP2 records using a BPF vma iterator) would
> bump the generation number and store it in the VMA in perf_event_mmap(),
> so that the iterator skips that VMA, as it is a new mmap that is already
> being sent to whoever is listening, including the perf tool that put the
> iterating BPF program in place.
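
The dedup scheme being described might boil down to a filter like this;
everything here is hypothetical, since no such generation counter
exists in the kernel today:

#include <linux/types.h>

/*
 * Imagined scheme: perf_event_mmap() bumps a per-mm generation counter
 * and stamps it into the VMA whenever a live PERF_RECORD_MMAP2 goes
 * out. The enumerator snapshots the counter before walking the
 * pre-existing maps and reports only VMAs stamped at or before the
 * snapshot; anything newer already reached the listener via the
 * regular mmap path.
 */
static int enumerate_this_vma(__u64 vma_gen, __u64 snapshot_gen)
{
	return vma_gen <= snapshot_gen;	/* pre-existing at snapshot time */
}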

Ok, we went on *so many* tangents in the emails on this patch set :)
It seems there are a bunch of possible perf-specific improvements that
are completely independent of the API I'm proposing. Let's please keep
them separate (and you, perf folks, should propose them upstream); it's
getting hard to see what this patch set is actually about amid all the
tangential emails.

>
> > isn't really related to adding this API. Once the kernel does have
>
> Well, perf wants to enumerate pre-existing mmaps _and_, after that
> finishes, to learn about new mmaps. So we need a way to keep the BPF
> program that enumerates pre-existing maps from sending a
> PERF_RECORD_MMAP2 for maps perf already knows about via the regular
> PERF_RECORD_MMAP2 sent when a new mmap is put in place.
>
> So there is an overlap where perf (or any other tool wanting to
> enumerate all pre-existing maps and new ones) can receive info for the
> same map from the enumerator and from the existing mechanism generating
> PERF_RECORD_MMAP2 records.
>
> - Arnaldo
>
> > this "VMA generation" counter, it can be trivially added to this
> > binary interface (which can't be said about /proc/<pid>/maps,
> > unfortunately).
> >
> > >
> > > > It will be up to other similar projects to adopt this, but we'll
> > > > definitely get this into blazesym as it is actually a problem for the
> > >
> > > At some point, looking at somehow plugging blazesym into perf may be
> > > something to consider, indeed.
> >
> > In the above I meant direct use of this new API in perf code itself,
> > but yes, blazesym is a generic library for symbolization that handles
> > ELF/DWARF/GSYM (and I believe more formats), so it indeed might make
> > sense to use it.
> >
> > >
> > > - Arnaldo
> > >
> > > > above-mentioned Oculus use case. We already had to make a tradeoff (see
> > > > [2]; this wasn't done just because we could, it was requested by Oculus
> > > > customers) to cache the contents of /proc/<pid>/maps and run the risk
> > > > of missing some shared libraries that can be loaded later. It would be
> > > > great not to have to make this tradeoff, which this new API would
> > > > enable.
> > > >
> > > >   [2] https://github.com/libbpf/blazesym/commit/6b521314126b3ae6f2add43e93234b59fed48ccf
> > > >
> >
> > [...]
