On Mon, Feb 06, 2023, David Matlack wrote:
> On Thu, Jan 26, 2023 at 12:43:46AM +0000, Kevin Cheng wrote:
> > +	int n = atoi(argv[1]);
> > +
> > +	for (int i = 0; i < n; i++) {
> > +		if (fork() == 0) {
>
> Put the implementation of the child process into a helper function to
> reduce indentation.

+1 for the future, but for this sample test I wouldn't bother having the test
spawn multiple VMs.  IIUC, each child loads its own BPF program, i.e. the user
can achieve the same effect by running multiple instances of the test.

> > +			struct kvm_vm *vm;
> > +			struct kvm_vcpu *vcpu;
> > +
> > +			vm = vm_create_with_one_vcpu(&vcpu, guest_code);
> > +
> > +			// BPF userspace code
> > +			struct bpf_object *obj;
> > +			struct bpf_program *prog;
> > +			struct bpf_map *map_obj;
> > +			struct bpf_link *link = NULL;
> > +
> > +			obj = bpf_object__open_file("kvm_vmx_exit_ebpf_kern.o", NULL);

Does the BPF program _have_ to be in an intermediate object file?  E.g. can
the program be embedded in the selftest?

> > +			if (libbpf_get_error(obj)) {
> > +				fprintf(stderr, "ERROR: opening BPF object file failed\n");
> > +				return 0;
>
> I notice the children and parent always return 0. The test should exit
> with a non-0 return code if it fails.

Just do TEST_ASSERT(), I don't see any reason to gracefully exit on failure.

> > +			}
> > +
> > +			map_obj = bpf_object__find_map_by_name(obj, "vmx_exit_map");
> > +			if (!map_obj) {
> > +				fprintf(stderr, "ERROR: loading of vmx BPF map failed\n");
> > +				goto cleanup;
> > +			}
> > +
> > +			struct bpf_map *pid_map = bpf_object__find_map_by_name(obj, "pid_map");
> > +
> > +			if (!pid_map) {
> > +				fprintf(stderr, "ERROR: loading of pid BPF map failed\n");
> > +				goto cleanup;
> > +			}
> > +
> > +			/* load BPF program */

...

> > +			for (int j = 0; j < 10000; j++)
> > +				vcpu_run(vcpu);
>
> It might be interesting to (1) add some timing around this loop and (2)
> run this loop without any bpf programs attached. i.e. Automatically do
> an A/B performance comparison with and without bpf programs.

Hmm, I agree that understanding the performance impact of BPF is interesting,
but I don't think this is the right place to implement one-off code.  I would
rather we add infrastructure to allow selftests to gather timing statistics
for run loops like this, e.g. to capture percentiles, outliers, etc., and
possibly to try to mitigate external influences, e.g. pin the task to prevent
migration and/or filter out samples that appeared to be "bad" due to the task
getting interrupted.

In other words, I worry that any amount of scope creep beyond "here's a BPF
demo" will snowball quickly :-)

> > diff --git a/tools/testing/selftests/kvm/kvm_vmx_exit_ebpf_kern.c b/tools/testing/selftests/kvm/kvm_vmx_exit_ebpf_kern.c
> > new file mode 100644
> > index 000000000000..b9c076f93171
> > --- /dev/null
> > +++ b/tools/testing/selftests/kvm/kvm_vmx_exit_ebpf_kern.c
>
> I think we should carve out a new directory for bpf programs. If we mix
> this in with the selftest .c files, it will start to get confusing.
>
> e.g. tools/testing/selftests/kvm/bpf/vmx_exit_count.c

+1, though it should probably be

  tools/testing/selftests/kvm/bpf/x86_64/vmx_exit_count.c

though we should rename the arch directories before we gain more usage of the
bad names[*].

And I suspect we'll also want to add lib helpers, e.g.
tools/testing/selftests/kvm/lib/bpf.{c,h}, possibly with per-arch files too,
e.g. to wrap bpf_object__open_file() and one or more
bpf_object__find_map_by_name() calls.

[*] https://lore.kernel.org/all/Y5jadzKz6Qi9MiI9@xxxxxxxxxx
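
For reference on the embedding question: the usual libbpf answer is a
skeleton, i.e. have the Makefile run "bpftool gen skeleton" on the compiled
object and #include the generated header, which bakes the object bytes into
the test binary.  Untested sketch below; the vmx_exit_count_bpf names are
made up to match the proposed file name:

	/* Hypothetical header generated by "bpftool gen skeleton". */
	#include "vmx_exit_count.skel.h"

	static struct vmx_exit_count_bpf *vmx_exit_count_load(void)
	{
		struct vmx_exit_count_bpf *skel;

		/* Object bytes live in the binary, no .o file on disk. */
		skel = vmx_exit_count_bpf__open_and_load();
		TEST_ASSERT(skel, "Failed to open/load embedded BPF skeleton");

		/* Maps are struct members, no find_map_by_name() needed. */
		TEST_ASSERT(bpf_map__fd(skel->maps.vmx_exit_map) >= 0,
			    "vmx_exit_map not created");

		TEST_ASSERT(!vmx_exit_count_bpf__attach(skel),
			    "Failed to attach BPF programs");
		return skel;
	}

That would also sidestep the "which directory does the .o land in" problem,
since nothing needs to be shipped next to the test binary.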
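
On the timing infrastructure, the rough shape I have in mind is something
like the below (all names completely hypothetical, just to illustrate
"reusable, percentile-aware" versus open-coded clock_gettime() pairs in each
test):

	#include <stdint.h>
	#include <stdlib.h>
	#include <time.h>

	static uint64_t now_ns(void)
	{
		struct timespec ts;

		clock_gettime(CLOCK_MONOTONIC, &ts);
		return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
	}

	static int cmp_u64(const void *a, const void *b)
	{
		uint64_t x = *(const uint64_t *)a, y = *(const uint64_t *)b;

		return x < y ? -1 : x > y;
	}

	/*
	 * Time each iteration of a run loop and report a percentile, so
	 * callers get outlier-aware numbers instead of a single average.
	 */
	static uint64_t time_runloop_pct(void (*fn)(void *), void *arg,
					 uint64_t *samples, int nr, int pct)
	{
		for (int i = 0; i < nr; i++) {
			uint64_t t = now_ns();

			fn(arg);
			samples[i] = now_ns() - t;
		}
		qsort(samples, nr, sizeof(*samples), cmp_u64);
		return samples[(nr - 1) * pct / 100];
	}

A test could then wrap vcpu_run(vcpu) in a thunk and compare, say, p50/p99
with and without programs attached.  But again, that belongs in lib/, not
open-coded in this test.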
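
And to make the lib helper idea concrete (again just a sketch, the kvm_bpf_*
helper names are invented), something like the below would also take care of
the TEST_ASSERT() point above:

	/* tools/testing/selftests/kvm/lib/bpf.c (sketch) */
	#include <bpf/libbpf.h>

	#include "test_util.h"

	struct bpf_object *kvm_bpf_open(const char *path)
	{
		struct bpf_object *obj = bpf_object__open_file(path, NULL);

		/* Assert instead of fprintf+return 0 so failures are fatal. */
		TEST_ASSERT(!libbpf_get_error(obj),
			    "Failed to open BPF object '%s'", path);
		return obj;
	}

	struct bpf_map *kvm_bpf_find_map(struct bpf_object *obj,
					 const char *name)
	{
		struct bpf_map *map = bpf_object__find_map_by_name(obj, name);

		TEST_ASSERT(map, "BPF map '%s' not found", name);
		return map;
	}

With those, the test body shrinks to:

	obj = kvm_bpf_open("kvm_vmx_exit_ebpf_kern.o");
	map_obj = kvm_bpf_find_map(obj, "vmx_exit_map");
	pid_map = kvm_bpf_find_map(obj, "pid_map");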