On 4/3/24 5:03 AM, Muhammad Usama Anjum wrote:
On 4/3/24 7:36 AM, Yonghong Song wrote:
On 4/2/24 8:16 AM, Muhammad Usama Anjum wrote:
Yonghong Song,
Thank you so much for replying. I was missing how to run the pipeline manually.
Thanks a ton.
On 4/1/24 11:53 PM, Yonghong Song wrote:
On 4/1/24 5:34 AM, Muhammad Usama Anjum wrote:
Move test_dev_cgroup.c to prog_tests/dev_cgroup.c so that it can be run
with test_progs. Replace dev_cgroup.bpf.o with the skeleton header file,
dev_cgroup.skel.h, and load the program from it accordingly.
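For reference, the load path after the conversion boils down to the
generated skeleton API, roughly (a sketch only; the cgroup attach and
the actual device checks are elided):

#include "dev_cgroup.skel.h"    /* generated by bpftool gen skeleton */

void serial_test_dev_cgroup(void)
{
        struct dev_cgroup *skel;

        /* replaces loading the dev_cgroup.bpf.o object by path */
        skel = dev_cgroup__open_and_load();
        if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
                return;

        /* ... attach to the cgroup, run the mknod/dd checks ... */

        dev_cgroup__destroy(skel);
}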
./test_progs -t dev_cgroup
mknod: /tmp/test_dev_cgroup_null: Operation not permitted
64+0 records in
64+0 records out
32768 bytes (33 kB, 32 KiB) copied, 0.000856684 s, 38.2 MB/s
dd: failed to open '/dev/full': Operation not permitted
dd: failed to open '/dev/random': Operation not permitted
#72 test_dev_cgroup:OK
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
Signed-off-by: Muhammad Usama Anjum <usama.anjum@xxxxxxxxxxxxx>
---
Changes since v2:
- Replace test_dev_cgroup with serial_test_dev_cgroup, as there is a
  chance that the test races against another cgroup test (see the
  sketch after this list)
- Minor changes to the commit message above
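For context, test_progs derives the execution mode from the entry
point's name, so the fix is just the prefix (a sketch, bodies elided):

/* v2: may run concurrently with other tests under test_progs -j */
void test_dev_cgroup(void);

/* v3: the serial_test_ prefix makes test_progs run this test
 * serialized, so it cannot race another cgroup test */
void serial_test_dev_cgroup(void);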
I've tested the patch with vmtest.sh on bpf-next/for-next and linux-next.
It passes on both. Not sure why it failed on BPF CI.
Test run with vmtest.sh:
sudo LDLIBS=-static PKG_CONFIG='pkg-config --static' ./vmtest.sh
./test_progs -t dev_cgroup
./test_progs -t dev_cgroup
mknod: /tmp/test_dev_cgroup_null: Operation not permitted
64+0 records in
64+0 records out
32768 bytes (33 kB, 32 KiB) copied, 0.000403432 s, 81.2 MB/s
dd: failed to open '/dev/full': Operation not permitted
dd: failed to open '/dev/random': Operation not permitted
#69 dev_cgroup:OK
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
The CI failure:
Error: #72 dev_cgroup
serial_test_dev_cgroup:PASS:skel_open_and_load 0 nsec
serial_test_dev_cgroup:PASS:cgroup_setup_and_join 0 nsec
serial_test_dev_cgroup:PASS:bpf_attach 0 nsec
serial_test_dev_cgroup:PASS:bpf_query 0 nsec
serial_test_dev_cgroup:PASS:bpf_query 0 nsec
serial_test_dev_cgroup:PASS:rm 0 nsec
serial_test_dev_cgroup:PASS:mknod 0 nsec
serial_test_dev_cgroup:PASS:rm 0 nsec
serial_test_dev_cgroup:PASS:rm 0 nsec
serial_test_dev_cgroup:FAIL:mknod unexpected mknod: actual 256 != expected 0
serial_test_dev_cgroup:PASS:rm 0 nsec
serial_test_dev_cgroup:PASS:dd 0 nsec
serial_test_dev_cgroup:PASS:dd 0 nsec
serial_test_dev_cgroup:PASS:dd 0 nsec
(cgroup_helpers.c:353: errno: Device or resource busy) umount cgroup2
The error code 256 means the mknod execution has some issue. Maybe you
need to find the specific errno to figure out what is going on. I think
you can run a CI on-demand test to debug.
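Assuming the test shells out via system(), the 256 is the raw wait
status, and WEXITSTATUS(256) == 1, i.e. only mknod's exit code survives
into the log. One way to surface the exact errno is to issue the
mknod(2) syscall directly; a minimal sketch (the path is the one
discussed below, and c 1:5 for /dev/zero is an assumption about what
the test creates):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>

int main(void)
{
        /* direct syscall instead of system("mknod ... c 1 5"), so the
         * failure reason is visible, not just exit status 1 */
        if (mknod("/tmp/test_dev_cgroup_zero", S_IFCHR | 0666, makedev(1, 5)))
                fprintf(stderr, "mknod: errno %d (%s)\n", errno, strerror(errno));
        return 0;
}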
errno is 2 --> No such file or directory
Locally I'm unable to reproduce it unless I remove the
rm -f /tmp/test_dev_cgroup_zero command so that the /tmp/test_dev_cgroup_zero
node is already present before test execution. The error code is then 256
with errno 2.
I'm debugging by placing system("ls /tmp 1>&2"); in the test to find out
which files are already present in /tmp, but ls's output doesn't appear in
the CI logs.
errno 2 means ENOENT.
From the mknod man page (https://linux.die.net/man/2/mknod), it means:
    A directory component in pathname does not exist or is a dangling
    symbolic link.
That would mean /tmp does not exist or is a dangling symbolic link.
It is indeed very strange. To make the test robust, maybe create a temp
directory with mkdtemp and use it as the path? The temp directory should
be created before the bpf prog attach.
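Something along these lines, as a rough sketch (the directory template
name is made up):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        char dir[] = "/tmp/dev_cgroup_XXXXXX";  /* mkdtemp fills the Xs */
        char path[128];

        /* create the private directory first, before the bpf prog is
         * attached, as suggested above */
        if (!mkdtemp(dir)) {
                perror("mkdtemp");
                return 1;
        }
        snprintf(path, sizeof(path), "%s/test_dev_cgroup_null", dir);
        printf("node path: %s\n", path);
        return 0;
}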
I've tried the following, but still no luck:
* /tmp is already present. Then I thought maybe the desired file already
  exists, but I've verified that no file with the same name is present
  inside /tmp.
* I thought maybe mknod isn't present in the system, but mknod --help
  succeeds.
* I switched from /tmp to the current directory for creating the node,
  but the result is the same error.
* I've tried using the same kernel config as the BPF CI, but I'm still
  not able to reproduce the failure.
I'm not sure which edge case is being hit or what's going on. The problem
seems to appear because of some limitation in the rootfs.
Maybe you could collect the /tmp mount options to see whether anything is
suspicious? In my VM, I have
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,size=3501540k,nr_inodes=1048576)
and the test works fine.
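For example, a debug hook like this inside the test would dump the
mount entry (though, as noted above, stderr from the test did not show
up in the CI logs, so it may only help locally):

#include <stdlib.h>

int main(void)
{
        /* prints a line like:
         * tmpfs /tmp tmpfs rw,nosuid,nodev,size=...,nr_inodes=... 0 0 */
        return system("grep /tmp /proc/mounts 1>&2");
}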