Re: [PATCH bpf-next v3 4/8] bpf: Introduce cgroup iter

On 7/20/22 5:40 PM, Hao Luo wrote:
On Mon, Jul 11, 2022 at 8:45 PM Yonghong Song <yhs@xxxxxx> wrote:

On 7/11/22 5:42 PM, Hao Luo wrote:
[...]
+
+static void *cgroup_iter_seq_start(struct seq_file *seq, loff_t *pos)
+{
+    struct cgroup_iter_priv *p = seq->private;
+
+    mutex_lock(&cgroup_mutex);
+
+    /* support only one session */
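+    /* (cgroup_mutex is released in cgroup_iter_seq_stop(), which
+     * seq_read() calls even when start() returns NULL)
+     */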
+    if (*pos > 0)
+        return NULL;

This might be okay. But I want to check what the practical upper
limit for cgroups in a system is, and whether we may miss some
cgroups. If that happens, it will be a surprise to the user.


Ok. What's the max number of items supported in a single session?

The max number of items (cgroups) in a single session is determined
by kernel_buffer_size, which equals 8 * PAGE_SIZE. So it really
depends on how much data the bpf program intends to send to user
space. If each bpf program run intends to send 64B to user space,
e.g., for cpu, memory, cpu pressure, mem pressure, io pressure, read
rate, write rate, read/write rate, then each session can support 512
cgroups.
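
For reference, the arithmetic (assuming 4KB pages; PAGE_SIZE is
architecture dependent):

    /*
     * buffer      = 8 * PAGE_SIZE = 8 * 4096 = 32768 bytes
     * per cgroup  = 64 bytes
     * max cgroups = 32768 / 64 = 512 per session
     */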


Hi Yonghong,

Sorry about the late reply. It's possible that the number of cgroups
can be large, 1000+, in our production environment, but that may not
be common. Would it be good to leave handling a large number of
cgroups as a follow-up to this patch? If it turns out to be a
problem, we could alleviate it as follows:

1. Tell users to write programs that skip uninteresting cgroups.
2. Support requesting a larger kernel_buffer_size for bpf_iter, maybe
as a new bpf_iter flag (rough sketch below).
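
For (2), something like the following could work (purely a sketch;
the buf_nr_pages field is made up and does not exist in the current
UAPI):

    /* hypothetical UAPI extension: let userspace request a larger
     * seq_file buffer for the iterator at LINK_CREATE time
     */
    union bpf_iter_link_info {
        struct {
            __u32   map_fd;
        } map;
        struct {
            __u32   cgroup_fd;
            __u32   buf_nr_pages;   /* hypothetical: 0 = default (8) */
        } cgroup;
    };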

Currently, if we intend to support multiple read()s for cgroup_iter,
the following is a very inefficient approach:

In the seq_file private data structure, remember the last cgroup
visited, and for the second read() syscall, do the traversal again
(without calling the bpf program) until the last cgroup is reached,
then proceed from there. This is inefficient but probably works.
However, if the last cgroup is gone from the hierarchy, the above
approach won't work. One possibility is to remember the last two
cgroups: if the last cgroup is gone, compute the 'next' cgroup based
on the one before it; if both are gone, return NULL.
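
A very rough sketch of that idea (untested; the last_css/prev_css
fields and the walk_to() helper are made up for illustration, and
bookkeeping such as pinning the csses and advancing *pos is omitted):

    struct cgroup_iter_priv {
        struct cgroup_subsys_state *start_css;
        /* hypothetical: the last two positions visited */
        struct cgroup_subsys_state *last_css;
        struct cgroup_subsys_state *prev_css;
    };

    /* hypothetical helper: re-walk the hierarchy (without calling
     * the bpf program) until 'css' and return its successor
     */
    static void *walk_to(struct cgroup_iter_priv *p,
                         struct cgroup_subsys_state *css)
    {
        struct cgroup_subsys_state *pos;

        for (pos = css_next_descendant_pre(NULL, p->start_css); pos;
             pos = css_next_descendant_pre(pos, p->start_css))
            if (pos == css)
                return css_next_descendant_pre(pos, p->start_css);
        return NULL;
    }

    static void *cgroup_iter_seq_start(struct seq_file *seq, loff_t *pos)
    {
        struct cgroup_iter_priv *p = seq->private;

        mutex_lock(&cgroup_mutex);
        if (*pos == 0)
            return css_next_descendant_pre(NULL, p->start_css);

        /* second read(): resume after the last cgroup we visited */
        if (p->last_css && !css_is_dying(p->last_css))
            return walk_to(p, p->last_css);
        /* the last one is gone; resume from the one before it */
        if (p->prev_css && !css_is_dying(p->prev_css))
            return walk_to(p, p->prev_css);
        return NULL;    /* both gone */
    }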

But in any case, if there are additional cgroups not yet visited, the
second read() should not return NULL, which indicates that all
cgroups are done. We could return EOPNOTSUPP to indicate that some
cgroups are missing because multiple sessions are not supported.
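
In seq_file terms, that could look like the following (sketch only;
the visited_all flag is hypothetical, but seq_read() does propagate
an ERR_PTR returned from ->start() back to the read() caller):

    /* in cgroup_iter_seq_start() */
    if (*pos > 0) {
        if (p->visited_all)     /* hypothetical bookkeeping flag */
            return NULL;        /* normal EOF */
        return ERR_PTR(-EOPNOTSUPP);    /* some cgroups were missed */
    }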

Once users see EOPNOTSUPP, which indicates there are missing cgroups,
they can do more filtering in the bpf program to reduce the volume of
data sent to user space.
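
For example, a program could skip deep levels of the hierarchy
(sketch; assumes the bpf_iter__cgroup context from this series, and
the level cut-off is arbitrary):

    #include <vmlinux.h>
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    char LICENSE[] SEC("license") = "GPL";

    SEC("iter/cgroup")
    int dump_cg(struct bpf_iter__cgroup *ctx)
    {
        struct seq_file *seq = ctx->meta->seq;
        struct cgroup *cgrp = ctx->cgroup;

        if (!cgrp)
            return 0;

        /* skip cgroups deeper than two levels to bound output size */
        if (cgrp->level > 2)
            return 0;

        BPF_SEQ_PRINTF(seq, "%llu\n", cgrp->kn->id);
        return 0;
    }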

To provide a way to truly visit *all* cgroups, we can either use
bpf_iter link_create->flags to increase the buffer size, as you
suggested above, so the user can try to allocate a larger kernel
buffer; or implement a proper second-read() traversal, which I don't
have a good idea how to do efficiently.

Hao


[...]


