Hi Sven,
On 30.08.21 09:55, Sven Schuchmann wrote:
but if I compare the candumps I can see:
with the patch:
(000.000008) vcan0 714 [8] 2F 01 01 01 01 01 01 01
(000.000209) vcan0 77E [8] 30 0F 00 AA AA AA AA AA
(000.000061) vcan0 714 [8] 20 01 01 01 01 01 01 01
and without:
(000.000004) vcan0 714 [8] 2F 01 01 01 01 01 01 01
(000.000069) vcan0 77E [8] 30 0F 00 AA AA AA AA AA
(000.000017) vcan0 714 [8] 20 01 01 01 01 01 01 01
Sorry, I missed that: over here the delay seems to be in
the FC (flow control frame) and not in the CF (consecutive
frame) after the FC. This is what differs from the real hardware.
So to me it seems that the RCU implementation
has changed somewhere on the way from 5.10 to 5.14?
Just checked with a 5.14.0-rc6 which contains the patch, same result:
93 / curr: 143 / min: 129 / max: 200 / avg: 156.2
94 / curr: 144 / min: 129 / max: 200 / avg: 156.0
95 / curr: 141 / min: 129 / max: 200 / avg: 155.9
96 / curr: 171 / min: 129 / max: 200 / avg: 156.0
97 / curr: 138 / min: 129 / max: 200 / avg: 155.8
98 / curr: 137 / min: 129 / max: 200 / avg: 155.6
(000.000011) vcan0 714 [8] 2B 01 01 01 01 01 01 01
(000.000193) vcan0 77E [8] 30 0F 00 AA AA AA AA AA
(000.000037) vcan0 714 [8] 2C 01 01 01 01 01 01 01
So maybe there is something wrong on the rpi?
I see a similar difference on my i7-8650U system:
"5" without and "65" with the patch.
The remaining problem is the added time that is now introduced at
socket close time by the synchronize_rcu() call.
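If you want to see that cost in isolation, a small user space probe can
time just the close() syscall on a bound CAN_ISOTP socket. Below is a
rough, untested sketch; "vcan0" and the 0x714/0x77E addressing are only
placeholders taken from the candump above:

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/isotp.h>

int main(void)
{
        struct sockaddr_can addr = {0};
        struct timespec t0, t1;
        long delta_us;
        int s;

        s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);
        if (s < 0) {
                perror("socket");
                return 1;
        }

        addr.can_family = AF_CAN;
        addr.can_ifindex = if_nametoindex("vcan0"); /* placeholder interface */
        addr.can_addr.tp.tx_id = 0x714;             /* IDs taken from the candump */
        addr.can_addr.tp.rx_id = 0x77E;

        if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                perror("bind");
                return 1;
        }

        /* measure only the socket teardown */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        close(s);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        delta_us = (long)(t1.tv_sec - t0.tv_sec) * 1000000L +
                   (t1.tv_nsec - t0.tv_nsec) / 1000;
        printf("close() took %ld us\n", delta_us);
        return 0;
}

On a kernel with the patch I would expect the close() there to take on
the order of an RCU grace period (likely some tens of milliseconds)
instead of a few microseconds.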
In your script you are waiting for the isotprecv process to finally terminate with:
wait $rxpid
And that's the expected effect ...
It looks like the script works fine without the 'wait' code (it then
simply does not wait for the process to terminate).
@mkl: I assume we have to live with that increased time at socket close
for security reasons, right?
Best regards,
Oliver
ps. Btw IMO a C program is still the better approach here.
isotp[send|recv] open/close the sockets for each PDU in the given setup :-/
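Something along these lines would do it (a rough, untested sketch, not a
drop-in replacement for the isotp* tools; "vcan0" and the IDs are only
placeholders from the candump above, and a receiving peer still has to
answer the flow control): the socket is opened and closed only once, so
the grace period at close() is paid once per run instead of once per PDU.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/isotp.h>

int main(void)
{
        struct sockaddr_can addr = {0};
        unsigned char pdu[64];
        int s, i;

        s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);
        if (s < 0) {
                perror("socket");
                return 1;
        }

        addr.can_family = AF_CAN;
        addr.can_ifindex = if_nametoindex("vcan0"); /* placeholder interface */
        addr.can_addr.tp.tx_id = 0x714;             /* IDs from the candump */
        addr.can_addr.tp.rx_id = 0x77E;

        if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                perror("bind");
                return 1;
        }

        memset(pdu, 0x01, sizeof(pdu));

        /* 100 PDUs over the same socket: one setup, one close for all of them */
        for (i = 0; i < 100; i++) {
                if (write(s, pdu, sizeof(pdu)) < 0) {
                        perror("write");
                        break;
                }
        }

        close(s);
        return 0;
}

With that, the time per PDU should again be dominated by the ISO-TP
timing itself and not by the socket teardown.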