Re: Accessing mm_rss_stat fields with btf/BPF_CORE_READ_INTO

On Tue, Jun 23, 2020 at 9:36 AM Yonghong Song <yhs@xxxxxx> wrote:
>
>
>
> On 6/23/20 7:54 AM, Matt Pallissard wrote:
> >
> > On 2020-06-22T15:09:57 -0700, Andrii Nakryiko wrote:
> >> On Mon, Jun 22, 2020 at 10:19 AM Matt Pallissard <matt@xxxxxxxxxxxxxx> wrote:
> >>>
> >>> On 2020-06-22T09:20:03 -0700, Andrii Nakryiko wrote:
> >>>> On Mon, Jun 22, 2020 at 8:01 AM Matt Pallissard <matt@xxxxxxxxxxxxxx> wrote:
> >>>>> On 2020-06-21T08:44:28 -0700, Matt Pallissard wrote:
> >>>>>> On 2020-06-20T20:29:43 -0700, Andrii Nakryiko wrote:
> >>>>>>> On Sat, Jun 20, 2020 at 1:07 PM Matt Pallissard <matt@xxxxxxxxxxxxxx> wrote:
> >>>>>>>> On 2020-06-20T11:11:55 -0700, Yonghong Song wrote:
> >>>>>>>>> On 6/20/20 9:22 AM, Matt Pallissard wrote:
> >>>>>>>>>> New to bpf here.
> >>>>>>>>>>
> >>>>>>>>>> I'm trying to read values out of mm_struct.  I have code like this:
> >>>>>>>>>>
> >>>>>>>>>> unsigned long i[10] = {};
> >>>>>>>>>> struct task_struct *t;
> >>>>>>>>>> struct mm_rss_stat *rss;
> >>>>>>>>>>
> >>>>>>>>>> t = (struct task_struct *)bpf_get_current_task();
> >>>>>>>>>> BPF_CORE_READ_INTO(&rss, t, mm, rss_stat);
> >>>>>>>>>> BPF_CORE_READ_INTO(i, rss, count);
> >>>>>>>>>>
> >>>>>>>>>> However, all values in `i` appear to be 0 (i[MM_FILEPAGES], etc.), as if no data gets copied.  I'm about 100% confident that this is caused by a glaring oversight on my part.
> >>>>>>>>>
> >>>>>>>>> Maybe you want to check the return value of BPF_CORE_READ_INTO.
> >>>>>>>>> Underneath, it uses bpf_probe_read, and bpf_probe_read may fail,
> >>>>>>>>> e.g., due to a major fault.
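> >>>>>>>>>
> >>>>>>>>> For illustration, a minimal sketch of such a check (reusing the
> >>>>>>>>> variable names from your snippet above) might look like:
> >>>>>>>>>
> >>>>>>>>>    int err = BPF_CORE_READ_INTO(&rss, t, mm, rss_stat);
> >>>>>>>>>    if (err)
> >>>>>>>>>        return 0; /* the probe read failed; don't trust the buffer */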
> >>>>>>>>
> >>>>>>>> Doh, I should have known to check the return codes!  Yes, it was failing.  I knew I was overlooking something trivial.
> >>>>>>>>
> >>>>>>>
> >>>>>>> I wrote exactly such a piece of code a while ago. Here's part of it
> >>>>>>> for reference; I think it will be helpful:
> >>>>>>>
> >>>>>>>    struct task_struct *task = (struct task_struct *)bpf_get_current_task();
> >>>>>>>    const struct mm_struct *mm = BPF_CORE_READ(task, mm);
> >>>>>>>
> >>>>>>>    if (mm) {
> >>>>>>>        u64 hiwater_rss = BPF_CORE_READ(mm, hiwater_rss);
> >>>>>>>        u64 file_pages = BPF_CORE_READ(mm, rss_stat.count[MM_FILEPAGES].counter);
> >>>>>>>        u64 anon_pages = BPF_CORE_READ(mm, rss_stat.count[MM_ANONPAGES].counter);
> >>>>>>>        u64 shmem_pages = BPF_CORE_READ(mm, rss_stat.count[MM_SHMEMPAGES].counter);
> >>>>>>>        u64 active_rss = file_pages + anon_pages + shmem_pages;
> >>>>>>>        /* ... */
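> >>>>>>>
> >>>>>>> Note that these counters are in pages rather than bytes, so to
> >>>>>>> report bytes you'd multiply by the page size, e.g. something like:
> >>>>>>>
> >>>>>>>        u64 rss_bytes = active_rss * 4096; /* assuming 4K pages */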
> >>>>>>
> >>>>>> Thank you,
> >>>>>>
> >>>>>> After realizing that I was referencing the struct incorrectly, I wound up with a similar block of code.  However, as I started testing it against /proc/pid/smaps[,_rollup], I noticed that my numbers didn't match up; they were always smaller.
> >>>>>>
> >>>>>> I took a quick glance at fs/proc/task_mmu.c.  I think I'll have to walk some sort of accounting structure.
> >>>>>
> >>>>>
> >>>>> I started to take a hard look at fs/proc/task_mmu.c.  With all the locking, globals, and compile-time constants, I'm not sure that it's even possible to correctly walk `vm_area_struct` in bpf.
> >>>>
> >>>> Yes, you can't take all those locks from BPF. But reading atomic
> >>>> counters from BPF should be no problem. You might get slightly
> >>>> out-of-sync readings, but whatever you are doing shouldn't expect
> >>>> 100% correct values anyway, because they might change right after
> >>>> you read them.
> >>>
> >>> That was my initial thought.  I didn't care too much about stale data; my only real concern was walking vm_area_struct while memory was being freed.  I wasn't sure whether that could break the list underneath me.  Although that shouldn't be too difficult to get to the bottom of.
> >>>
> >>
> >> Not sure about vm_area_struct (where is it in the example above?), but
> >> mm_struct won't go away, because the current task won't go away,
> >> because the BPF program is running in the context of current. Similarly
> >> for bpf_iter: bpf_iter will actually take a refcnt on the task_struct.
> >> So I think you don't have to worry about that.
> >
> > I didn't mention it explicitly in the example above.  But when I originally mentioned walking an accounting structure, as procfs does, it winds up being `mm_struct->mmap` and `vm_[next,prev]`, with `mmap` being a `vm_area_struct`; a rough sketch of that walk is below.  But it sounds like I should abandon that path and iterate over all the tasks instead.
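> >
> > For reference, the walk I had in mind was roughly the following (a
> > sketch only; it assumes `mm` was already read from the task, 4K
> > pages, and a kernel new enough for bounded loops):
> >
> >     struct vm_area_struct *vma;
> >     unsigned long pages = 0;
> >     int i;
> >
> >     BPF_CORE_READ_INTO(&vma, mm, mmap);
> >     /* cap the walk so the verifier sees a bounded loop */
> >     for (i = 0; i < 64 && vma; i++) {
> >         pages += (BPF_CORE_READ(vma, vm_end) -
> >                   BPF_CORE_READ(vma, vm_start)) >> 12;
> >         vma = BPF_CORE_READ(vma, vm_next);
> >     }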
> >
> >
> >>>>> If anyone has suggestions for getting memory numbers from an entire process, not just a task/thread, I'd love to hear them.  If not, I'll pursue this on my own.
> >>>>
> >>>> For this, you'd need to iterate across many tasks and aggregate their
> >>>> results based on each task's tgid. Check iter/task programs in
> >>>> selftests (progs/bpf_iter_task.c, I think).
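> >>>>
> >>>> As a rough sketch (untested, and with field names from memory of the
> >>>> selftests), such an iterator could look like:
> >>>>
> >>>>    SEC("iter/task")
> >>>>    int dump_rss(struct bpf_iter__task *ctx)
> >>>>    {
> >>>>        struct seq_file *seq = ctx->meta->seq;
> >>>>        struct task_struct *task = ctx->task;
> >>>>        struct mm_struct *mm;
> >>>>
> >>>>        if (!task)
> >>>>            return 0;
> >>>>        /* count each process once via its group leader */
> >>>>        if (task->pid != task->tgid)
> >>>>            return 0;
> >>>>        mm = task->mm;
> >>>>        if (!mm)
> >>>>            return 0;
> >>>>        BPF_SEQ_PRINTF(seq, "%d %ld\n", task->tgid,
> >>>>                       mm->rss_stat.count[MM_ANONPAGES].counter);
> >>>>        return 0;
> >>>>    }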
> >
> >
> > When I try to replicate some of the selftest task logic, I run into errors when I call bpf_object__load: `libbpf: task is not found in vmlinux BTF.`  I'll try matching the selftest code more closely and dig into that further.
>
> Somehow libbpf did not prepend `task` with the `bpf_iter_` prefix. Not
> sure what the exact issue is. Yes, please mimic what the selftests did.
>

It's just an artifact of how libbpf logs errors in this case. It did
search for the "bpf_iter_task" type, though. But Matt probably doesn't
have a recent enough kernel, or didn't build it with
CONFIG_DEBUG_INFO_BTF=y and pahole 1.16+?

> >
> > As an aside: is there any documentation for bpf_iter outside of the selftests?
>
> Unfortunately, no. The commit messages of the original patch set might help.
> https://lore.kernel.org/bpf/20200507053916.1542319-1-yhs@xxxxxx/T/#mf973843af65fc51ac9b3e3673962cd3e87f705e8
>
> >
> > Matt Pallissard
> >



