On Wed, Oct 6, 2021 at 11:56 AM Yucong Sun <fallentree@xxxxxx> wrote:
>
> From: Yucong Sun <sunyucong@xxxxxxxxx>
>
> This makes the test more likely to succeed.
>
> Signed-off-by: Yucong Sun <sunyucong@xxxxxxxxx>
> ---

100 million iterations seems a bit excessive. Why doesn't one million
iterations cause a single perf event? Can we make the test more robust
in some other way that is not as slow? I've dropped the patch for now
while we discuss.

>  tools/testing/selftests/bpf/prog_tests/perf_branches.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/tools/testing/selftests/bpf/prog_tests/perf_branches.c b/tools/testing/selftests/bpf/prog_tests/perf_branches.c
> index 6b2e3dced619..d7e88b2c5f36 100644
> --- a/tools/testing/selftests/bpf/prog_tests/perf_branches.c
> +++ b/tools/testing/selftests/bpf/prog_tests/perf_branches.c
> @@ -16,7 +16,7 @@ static void check_good_sample(struct test_perf_branches *skel)
>  	int duration = 0;
>
>  	if (CHECK(!skel->bss->valid, "output not valid",
> -		  "no valid sample from prog"))
> +		  "no valid sample from prog\n"))
>  		return;
>
>  	/*
> @@ -46,7 +46,7 @@ static void check_bad_sample(struct test_perf_branches *skel)
>  	int duration = 0;
>
>  	if (CHECK(!skel->bss->valid, "output not valid",
> -		  "no valid sample from prog"))
> +		  "no valid sample from prog\n"))
>  		return;
>
>  	CHECK((required_size != -EINVAL && required_size != -ENOENT),
> @@ -84,7 +84,7 @@ static void test_perf_branches_common(int perf_fd,
>  	if (CHECK(err, "set_affinity", "cpu #0, err %d\n", err))
>  		goto out_destroy;
>  	/* spin the loop for a while (random high number) */
> -	for (i = 0; i < 1000000; ++i)
> +	for (i = 0; i < 100000000; ++i)
>  		++j;
>
>  	test_perf_branches__detach(skel);
> --
> 2.30.2
>