Re: Some -serious- BPF-related litmus tests

On Tue, 26 May 2020 13:19:36 -0700, Andrii Nakryiko wrote:
> On Tue, May 26, 2020 at 7:02 AM Akira Yokosawa <akiyks@xxxxxxxxx> wrote:
>>
>> On Tue, 26 May 2020 19:50:47 +0900, Akira Yokosawa wrote:
>>> On Mon, 25 May 2020 16:31:05 -0700, Andrii Nakryiko wrote:
>>>> On Mon, May 25, 2020 at 3:01 PM Akira Yokosawa <akiyks@xxxxxxxxx> wrote:
>>>>>
>>> [...]
>>>>> Yes, that should work.
>>>>
>>>> Ok, assigning zero didn't work (it still complained about an
>>>> uninitialized read), but using a separate int *lenFail to assign to
>>>> rLenPtr worked. Curiously, if I used rLenPtr = len1; in the error case,
>>>> it actually took a bit more time to verify.
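To make the lenFail trick concrete, I believe the pattern looks roughly like
the sketch below in LKMM litmus C.  This is only a minimal illustration of the
idea, not the actual bpf-rb+1p1c+unbound body; apart from lenFail, len1 and
rLenPtr mentioned above, the names and surrounding logic are made up.  The
point is that the "can't happen" branch parks rLenPtr on a real dummy shared
variable, so the later dereference never goes through an uninitialized (or
zero) pointer:

C lenFail-sketch

{}

P0(int *lenFail, int *len1, int *cx, int *px)
{
        int *rLenPtr;
        int rFail;
        int rLen;
        int rCx;
        int rPx;

        rFail = 0;
        rLen = 0;

        rCx = smp_load_acquire(cx);
        rPx = smp_load_acquire(px);
        if (rCx < rPx) {
                if (rCx == 0)
                        rLenPtr = len1;         /* the one real record header */
                else
                        rLenPtr = lenFail;      /* dummy shared int for the unreachable case */

                rLen = smp_load_acquire(rLenPtr);
                if (rLen == 0) {
                        rFail = 1;
                } else {
                        rCx = rCx + 1;
                        smp_store_release(cx, rCx);
                }
        }
}

P1(int *len1, int *px)
{
        smp_store_release(len1, 1);
        smp_store_release(px, 1);
}

exists (0:rFail=0)

A sketch like this should run with the same "herd7 -unroll 0 -conf
linux-kernel.cfg" invocation quoted below, assuming it is placed where
linux-kernel.cfg can be found (e.g. under tools/memory-model/).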
>>>>
>>>> So I've converted everything else as you suggested. I compiled the
>>>> latest herd7 and it doesn't produce any warnings. But it's also extremely
>>>> slow compared to the herd7 that I get by default. Validating simple
>>>> 1p1c cases takes about 2.5x longer (0.03s vs 0.07s),
>>
>> Wait a moment!
>>
>> This 0.03s was the run time of the original 1p1c litmus test, wasn't it?
>> Then you are comparing apples and oranges.
>>
>> How long does your default herd7 take to complete the updated 1p1c test?
>>
>>         Thanks, Akira
> 
> It could be new test vs old test, so I re-ran again. Identical
> 1p1c-unbound test:
> 
> OLD version:
> 
> $ herd7 -version && herd7 -unroll 0 -conf linux-kernel.cfg
> ../../Documentation/litmus-tests/bpf-rb/bpf-rb+1p1c+unbound.litmus
> 7.52, Rev: exported
> Test bpf-rb+1p1c+unbound Allowed
> States 2
> 0:rFail=0; 1:rFail=0; cx=0; len1=1; px=1;
> 0:rFail=0; 1:rFail=0; cx=1; len1=1; px=1;
> Ok
> Witnesses
> Positive: 3 Negative: 0
> Condition exists (0:rFail=0 /\ 1:rFail=0 /\ px=1 /\ len1=1 /\ (cx=0 \/ cx=1))
> Observation bpf-rb+1p1c+unbound Always 3 0
> Time bpf-rb+1p1c+unbound 0.03
> Hash=20a68cc69b09fbb79f407f825c015623
> 
> LATEST from sources version:
> 
> $ herd7 -version && herd7 -unroll 0 -conf linux-kernel.cfg
> ../../Documentation/litmus-tests/bpf-rb/bpf-rb+1p1c+unbound.litmus
> 7.55+01(dev), Rev: 61e23aaee7bba87ccf4cdf1a620a3a9fa8f9a586
> Test bpf-rb+1p1c+unbound Allowed
> States 2
> 0:rFail=0; 1:rFail=0; cx=0; len1=1; px=1;
> 0:rFail=0; 1:rFail=0; cx=1; len1=1; px=1;
> Ok
> Witnesses
> Positive: 3 Negative: 0
> Condition exists (0:rFail=0 /\ 1:rFail=0 /\ px=1 /\ len1=1 /\ (cx=0 \/ cx=1))
> Observation bpf-rb+1p1c+unbound Always 3 0
> Time bpf-rb+1p1c+unbound 0.06
> Hash=20a68cc69b09fbb79f407f825c015623
> 
> Still 2x difference.

I see the opposite tendency on a different set of time-consuming
litmus tests when comparing herd7 7.52 and HEAD.

                                                herd7 7.52     herd7 HEAD
C-SB+l-o-o-u+l-o-o-u+l-o-o-u+l-o-o-u+l-o-o-u      8.44           6.12
C-SB+l-o-o-u+l-o-o-u+l-o-o-u+l-o-o-u-C           77.19          69.92
C-SB+l-o-o-u+l-o-o-u+l-o-o-u+l-o-o-u-CE         355.62         287.27
C-SB+l-o-o-u+l-o-o-u+l-o-o-u+l-o-o-u-X          157.87         191.50
C-SB+l-o-o-u+l-o-o-u+l-o-o-u+l-o-o-u              2.36           0.94
C-SB+l-o-o-u+l-o-o-u+l-o-o-u-C                      2.32           0.93
C-SB+l-o-o-u+l-o-o-u+l-o-o-u-CE                    5.64           3.52
C-SB+l-o-o-u+l-o-o-u+l-o-o-u-X                    3.18           2.52
C-SB+l-o-o-u+l-o-o-u+l-o-o-u-XE                  11.81          10.35
C-SB+l-o-o-u+l-o-o-u+l-o-o-u                      0.25           0.19
C-SB+l-o-o-u+l-o-o-u-C                            0.15           0.12
C-SB+l-o-o-u+l-o-o-u-CE                           0.26           0.20
C-SB+l-o-o-u+l-o-o-u-X                            0.17           0.14
C-SB+l-o-o-u+l-o-o-u-XE                           0.38           0.30
C-SB+l-o-o-u+l-o-o-u                              0.04           0.03

NOTE: Times are in seconds and were taken on a fairly old PC with power-saving mode enabled.

Did you use the original 1p1c unbound test?
I'd like you to run the comparison with the updated 1p1c unbound test instead.

        Thanks, Akira

> 
>>
>>>>                                                           but trying
>>>> to validate the 2p1c case, which normally validates in 42s (unbounded) and
>>>> 110s (bounded), took more than 20 minutes and hadn't finished
>>>> before I gave up. So I don't know what's going on there...
>>>
>>> herdtools7 has recently been heavily restructured.
>>> On the performance regression, I must defer to Luc.
>>>
>>> Luc, do you have any idea?
>>>
>>>>
>>>> As for klitmus7, I managed to generate everything without warnings,
>>>> but couldn't make it build completely due to:
>>>>
>>>> $ make
>>>> make -C /lib/modules/5.6.13-01802-g938d64da97c6/build/
>>>
>>> So you are on Linux 5.6.x, which requires a cutting-edge klitmus7.
>>>
>>>> M=/home/andriin/local/linux-trees/tools/memory-model/mymodules modules
>>>> make[1]: Entering directory `/data/users/andriin/linux-build/fb-config'
>>>> make[2]: Entering directory `/data/users/andriin/linux-build/default-x86_64'
>>>>   CC [M]  /home/andriin/local/linux-trees/tools/memory-model/mymodules/litmus000.o
>>>> /home/andriin/local/linux-trees/tools/memory-model/mymodules/litmus000.c:
>>>> In function ‘zyva’:
>>>> /home/andriin/local/linux-trees/tools/memory-model/mymodules/litmus000.c:507:12:
>>>> warning: ISO C90 forbids variable length array ‘th’ [-Wvla]
>>>>      struct task_struct *th[nth];
>>>>             ^~~~~~~~~~~
>>>> /home/andriin/local/linux-trees/tools/memory-model/mymodules/litmus000.c:
>>>> In function ‘litmus_init’:
>>>> /home/andriin/local/linux-trees/tools/memory-model/mymodules/litmus000.c:605:67:
>>>> error: passing argument 4 of ‘proc_create’ from incompatible pointer
>>>> type [-Werror=incompatible-pointer-types]
>>>>    struct proc_dir_entry *litmus_pde =
>>>> proc_create("litmus",0,NULL,&litmus_proc_fops);
>>>>
>>>> ^~~~~~~~~~~~~~~~~
>>>> In file included from
>>>> /home/andriin/local/linux-trees/tools/memory-model/mymodules/litmus000.c:15:
>>>> /data/users/andriin/linux-fb/include/linux/proc_fs.h:64:24: note:
>>>> expected ‘const struct proc_ops *’ but argument is of type ‘const
>>>> struct file_operations *’
>>>>  struct proc_dir_entry *proc_create(const char *name, umode_t mode,
>>>> struct proc_dir_entry *parent, const struct proc_ops *proc_ops);
>>>>                         ^~~~~~~~~~~
>>>> cc1: some warnings being treated as errors
>>>> make[3]: *** [/home/andriin/local/linux-trees/tools/memory-model/mymodules/litmus000.o]
>>>> Error 1
>>>> make[2]: *** [/home/andriin/local/linux-trees/tools/memory-model/mymodules]
>>>> Error 2
>>>> make[2]: Leaving directory `/data/users/andriin/linux-build/default-x86_64'
>>>> make[1]: *** [sub-make] Error 2
>>>> make[1]: Leaving directory `/data/users/andriin/linux-build/fb-config'
>>>> make: *** [all] Error 2
>>>>
>>>
>>> These errors suggest the klitmus7 you used is version 7.52 or some such.
>>> You said you built herd7 from source.  Have you also built klitmus7?
>>>
>>> The up-to-date klitmus7 should generate code compatible with Linux 5.6.x.
>>>
>>> Could you try with the latest one?
>>>
>>>         Thanks, Akira
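For anyone else hitting this wall: the failing proc_create() call is the
Linux 5.6 change where /proc registration moved from struct file_operations
to struct proc_ops, which is exactly why modules generated by an older
klitmus7 stop compiling on 5.6+.  The sketch below is only meant to
illustrate the API difference with a version guard; litmus_proc_show(),
litmus_proc_open() and the seq_file plumbing are stand-ins of my own, not
what klitmus7 actually emits.

/*
 * Sketch only: registering a /proc entry on both pre-5.6 and 5.6+ kernels.
 * The show/open helpers are illustrative placeholders.
 */
#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <linux/version.h>

static int litmus_proc_show(struct seq_file *m, void *v)
{
        seq_puts(m, "placeholder for the harness's real output\n");
        return 0;
}

static int litmus_proc_open(struct inode *inode, struct file *file)
{
        return single_open(file, litmus_proc_show, NULL);
}

#if LINUX_VERSION_CODE < KERNEL_VERSION(5, 6, 0)
/* Pre-5.6: proc_create() takes a struct file_operations. */
static const struct file_operations litmus_proc_fops = {
        .owner   = THIS_MODULE,
        .open    = litmus_proc_open,
        .read    = seq_read,
        .llseek  = seq_lseek,
        .release = single_release,
};
#define LITMUS_PROC_OPS (&litmus_proc_fops)
#else
/* 5.6+: proc_create() takes a struct proc_ops instead (note: no .owner). */
static const struct proc_ops litmus_proc_ops = {
        .proc_open    = litmus_proc_open,
        .proc_read    = seq_read,
        .proc_lseek   = seq_lseek,
        .proc_release = single_release,
};
#define LITMUS_PROC_OPS (&litmus_proc_ops)
#endif

static int __init litmus_init(void)
{
        return proc_create("litmus", 0, NULL, LITMUS_PROC_OPS) ? 0 : -ENOMEM;
}

static void __exit litmus_exit(void)
{
        remove_proc_entry("litmus", NULL);
}

module_init(litmus_init);
module_exit(litmus_exit);
MODULE_LICENSE("GPL");

Either way, the practical fix here is simply to regenerate the module with an
up-to-date klitmus7 as suggested above, rather than patching the generated
code by hand.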
>>>
>>>>
>>>> But at least it doesn't complain about atomic_t anymore. So anyways,
>>>> I'm going to post updated litmus tests separately from BPF ringbuf
>>>> patches, because Documentation/litmus-tests is not yet present in
>>>> bpf-next.
>>>>
>>>>>
>>>>> You can find a basic introduction to klitmus7 in tools/memory-model/README.
>>>>>
>>>>>         Thanks, Akira
>>>>>
>>>>>>
>>>>>>>
>>>>>>> Please note that if you are on Linux 5.6 (or later), you need an up-to-date
>>>>>>> klitmus7 due to a change in the kernel API.
>>>>>>>
>>>>>>> Any question is welcome!
>>>>>>>
>>>>>>>         Thanks, Akira
>>>>>>>
>>>>
>>>> [...]
>>>>
>>>
>>




