Re: [PATCH bpf 1/2] bpf: fix a rcu_sched stall issue with bpf task/task_file iterator

On 8/18/20 9:48 AM, Andrii Nakryiko wrote:
On Tue, Aug 18, 2020 at 9:26 AM Yonghong Song <yhs@xxxxxx> wrote:

In our production system, we observed rcu stalls when
`bpftool prog` is running.

[...]


Note that `bpftool prog` actually calls a task_file bpf iterator
program to establish an association between prog/map/link/btf anon
files and processes.
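
For context, such a task_file iterator program looks roughly like the
sketch below. It is a simplified illustration loosely modeled on what
bpftool does, not its actual source; "vmlinux.h", the program name and
the __ksym extern are assumptions, and a real tool would also check the
map/link/btf file_operations. User space collects the output by calling
read() on the iterator fd, which is the read() shown in the strace
output further down.

// SPDX-License-Identifier: GPL-2.0
/* Simplified sketch of a task_file bpf iterator program.  It is invoked
 * for every (task, fd, file) triple and only prints entries whose
 * file_operations identify a bpf prog fd.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

extern const void bpf_prog_fops __ksym;

SEC("iter/task_file")
int dump_bpf_prog_fds(struct bpf_iter__task_file *ctx)
{
	struct task_struct *task = ctx->task;
	struct file *file = ctx->file;
	static const char fmt[] = "pid=%d fd=%d\n";
	__u64 args[2];

	if (!task || !file)
		return 0;

	/* skip everything that is not a bpf prog anon file */
	if (file->f_op != &bpf_prog_fops)
		return 0;

	args[0] = task->tgid;
	args[1] = ctx->fd;
	bpf_seq_printf(ctx->meta->seq, fmt, sizeof(fmt), args, sizeof(args));
	return 0;
}

char LICENSE[] SEC("license") = "GPL";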

In the case where the above rcu stall occurred, we had a process
with 1587 tasks, each task having roughly 81305 files.
This implied 129 million bpf prog invocations. Unfortunately, none of
these files are prog/map/link/btf files, so the bpf iterator/prog needs
to traverse all these files and is not able to return to user space,
since there is no seq_file buffer overflow.

The fix is to add cond_resched() while traversing tasks
and files. Voluntarily releasing the cpu gives other tasks, e.g.,
the rcu_sched kthread, a chance to run.

What are the performance implications of doing this for every task
and/or file? Have you benchmarked `bpftool prog` before/after? What
was the difference?

cond_resched() internally uses should_resched()
to check whether rescheduling should be done or not. Most kernel
callers (if not all) just call cond_resched() without
additional custom logic to guess when to call it.
I suppose should_resched() should be cheap enough already.
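
For reference, on a !CONFIG_PREEMPTION kernel cond_resched() boils down
to roughly the following (a simplified sketch of kernel/sched/core.c,
not the exact mainline source):

int _cond_resched(void)
{
	/* cheap check: preempt count permits and NEED_RESCHED is set */
	if (should_resched(0)) {
		/* only then do we actually give up the cpu */
		preempt_schedule_common();
		return 1;
	}
	return 0;
}

So the common fast path is just a flag test; the expensive part only
happens when another task actually wants the cpu.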

Maybe Rik can comment here.

Regarding the measurement, I did measure how long the read() syscall takes to complete with `strace -T ./bpftool prog`, with and without my patch.

e.g.,
read(7, "#\0\0\0\322\23\0\0tcpeventd\0\0\0\0\0\0\0)\0\0\0\322\23\0\0"..., 4096) = 4080 <27.094797>
or
read(7, "#\0\0\0\322\23\0\0tcpeventd\0\0\0\0\0\0\0)\0\0\0\322\23\0\0"..., 4096) = 4080 <34.281563>

The time varies a lot across different runs. But based on
my observations, with and without cond_resched(), the range
of read() elapsed time is roughly the same.


I wonder if it's possible to amortize those cond_resched() calls and
invoke them only every so often, based on CPU time or the number of
files/tasks processed, if cond_resched() does turn out to slow bpf_iter down.
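
One way to amortize that could look roughly like the sketch below; the
batch size, the helper name and the per-iterator counter are illustrative
assumptions, not something the patch adds:

/* Illustrative only: yield at most once every RESCHED_BATCH files.
 * The counter would have to live in the iterator's private state,
 * e.g. struct bpf_iter_seq_task_file_info; the field is made up.
 */
#define RESCHED_BATCH	256	/* arbitrary batch size */

static void maybe_cond_resched(unsigned int *nr_seen)
{
	if (++(*nr_seen) % RESCHED_BATCH == 0)
		cond_resched();
}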


Cc: Paul E. McKenney <paulmck@xxxxxxxxxx>
Signed-off-by: Yonghong Song <yhs@xxxxxx>
---
  kernel/bpf/task_iter.c | 4 ++++
  1 file changed, 4 insertions(+)

diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
index f21b5e1e4540..885b14cab2c0 100644
--- a/kernel/bpf/task_iter.c
+++ b/kernel/bpf/task_iter.c
@@ -27,6 +27,8 @@ static struct task_struct *task_seq_get_next(struct pid_namespace *ns,
         struct task_struct *task = NULL;
         struct pid *pid;

+       cond_resched();
+
         rcu_read_lock();
  retry:
         pid = idr_get_next(&ns->idr, tid);
@@ -137,6 +139,8 @@ task_file_seq_get_next(struct bpf_iter_seq_task_file_info *info,
         struct task_struct *curr_task;
         int curr_fd = info->fd;

+       cond_resched();
+
         /* If this function returns a non-NULL file object,
          * it held a reference to the task/files_struct/file.
          * Otherwise, it does not hold any reference.
--
2.24.1
