Hello Sven,
On 28.08.21 15:20, Sven Schuchmann wrote:
> sorry, I'm late for the party :-)
NP ;-)
> But I found that this patch decreases the performance of the ISO-TP stack.
AFAICS the performance (aka throughput) of the ISO-TP stack is not
touched, but the time to close an ISO-TP socket is increased by an RCU
grace period.
> I have created two test scripts: one plays the server and the
> other one runs a test and measures how long it takes to transfer
> an ISO-TP frame with 1000 bytes.
> Without this patch it takes about 35ms to transfer the frame,
> with this patch it takes about 145ms over vcan0.
> Does anyone have an idea on this?
Yes. We now synchronize the removal of data structures to prevent a
use-after-free issue at socket close time.
The synchronize_rcu() call does this job, and waiting for that RCU
grace period is what extends the time the close() syscall needs to
complete.
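For illustration, a minimal (untested) sketch that times one
open/bind/close cycle of an ISO-TP socket, assuming a vcan0 interface
and the CAN IDs from your script; with the patch the close() step
should account for most of the measured time:

#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/isotp.h>

int main(void)
{
    struct sockaddr_can addr = { .can_family = AF_CAN };
    struct timespec t0, t1;
    int s;

    addr.can_ifindex = if_nametoindex("vcan0");
    addr.can_addr.tp.tx_id = 0x77E;
    addr.can_addr.tp.rx_id = 0x714;

    clock_gettime(CLOCK_MONOTONIC, &t0);

    s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);
    if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("socket/bind");
        return 1;
    }
    close(s); /* with the patch, the RCU grace period is paid in here */

    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("open/bind/close: %.1f ms\n",
           (t1.tv_sec - t0.tv_sec) * 1e3 +
           (t1.tv_nsec - t0.tv_nsec) / 1e6);
    return 0;
}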
> bring up a vcan0 interface with:
> sudo modprobe vcan
> sudo ip link add dev vcan0 type vcan
> sudo ifconfig vcan0 up
> here are the scripts:
> --- isotp_server.sh ---
> #!/bin/bash
> iface=vcan0
> echo "Wait for Messages on $iface"
> while true; do
>     exec 3< <(isotprecv -s 77E -d 714 -b F -p AA:AA $iface)
>     rxpid=$!
>     wait $rxpid
>     output=$(cat <&3)
>     echo "7F 01 11" | isotpsend -s 77E -d 714 -p AA:AA -L 16:8:0 $iface
> done
IMO the issue arises from the use of isotpsend and isotprecv.
These tools are intended to give a hands-on impression of how the
isotp stack works.
This kind of use in a script leads to the creation and (now delayed)
*removal* of isotp sockets for *each* single PDU transfer.
The better approach would be to write a C program that creates ONE
socket and simply read() from that socket and write() to it.
This should boost your performance even more.
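Untested sketch to illustrate the idea (the CAN IDs, padding and
interface name are simply taken over from your script): the server
opens ONE ISO-TP socket, then read()s each request and write()s the
response in a loop, so the close() grace period is only paid once at
program exit:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/isotp.h>

int main(void)
{
    static const unsigned char resp[] = { 0x7F, 0x01, 0x11 };
    struct sockaddr_can addr = { .can_family = AF_CAN };
    struct can_isotp_options opts;
    unsigned char buf[4096];
    int s, n;

    s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);
    if (s < 0) {
        perror("socket");
        return 1;
    }

    /* like '-p AA:AA' in isotprecv/isotpsend */
    memset(&opts, 0, sizeof(opts));
    opts.flags = CAN_ISOTP_TX_PADDING | CAN_ISOTP_RX_PADDING;
    opts.txpad_content = 0xAA;
    opts.rxpad_content = 0xAA;
    setsockopt(s, SOL_CAN_ISOTP, CAN_ISOTP_OPTS, &opts, sizeof(opts));

    addr.can_ifindex = if_nametoindex("vcan0");
    addr.can_addr.tp.tx_id = 0x77E; /* like '-s 77E' */
    addr.can_addr.tp.rx_id = 0x714; /* like '-d 714' */

    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* ONE socket for all PDU transfers - no per-PDU socket close() */
    while (1) {
        n = read(s, buf, sizeof(buf));
        if (n <= 0)
            break;
        write(s, resp, sizeof(resp));
    }

    close(s);
    return 0;
}

The client side can use the same pattern with tx_id/rx_id swapped, so
the per-PDU socket creation and removal disappears from the
measurement entirely.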
Is the performance a real requirement for your use case, or is the
decreased socket close rate a finding which does not really affect
your work?
Best regards,
Oliver