Re: AW: AW: [PATCH] can: isotp: omit unintended hrtimer restart on socket release

On 29.08.21 20:28, Sven Schuchmann wrote:
Hello Oliver,
But I found that this patch decreases the performance of the ISO-TP stack.

AFAICS the performance (aka throughput) of the ISO-TP stack is not
touched but the grace period when closing an ISO-TP socket is increased.

I have created two test scripts: one plays the server and the
other one runs a test and measures how long it takes to
transfer an ISO-TP frame with 1000 bytes.

Without this patch it takes about 35ms to transfer the frame,
with this patch it takes about 145ms over vcan0.

Does anyone have an idea about this?

Yes. We now synchronize the removal of data structures to prevent a
use-after-free issue at socket close time.

The synchronize_rcu() call does this job at specific points, which leads
to the extended time the close() syscall needs to complete.
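
For illustration, a minimal sketch that makes this close() grace period
visible in isolation, assuming vcan0 is up (the file name is illustrative
and the CAN IDs just mirror the test scripts below):

--- close_timing.c (sketch) ---
/* Open one CAN_ISOTP socket, then time only the close() call -
 * this is where the synchronize_rcu() grace period shows up. */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/isotp.h>

int main(void)
{
    struct sockaddr_can addr = { .can_family = AF_CAN };
    struct ifreq ifr;
    struct timespec t0, t1;
    int s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);

    if (s < 0) {
        perror("socket");
        return 1;
    }

    strcpy(ifr.ifr_name, "vcan0");
    ioctl(s, SIOCGIFINDEX, &ifr);
    addr.can_ifindex = ifr.ifr_ifindex;
    addr.can_addr.tp.tx_id = 0x77E;
    addr.can_addr.tp.rx_id = 0x714;

    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* time only the socket release */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    close(s);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("close() took %.3f ms\n",
           (t1.tv_sec - t0.tv_sec) * 1e3 +
           (t1.tv_nsec - t0.tv_nsec) / 1e6);
    return 0;
}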

understood


bring up a vcan0 interface with:
sudo modprobe vcan
sudo ip link add dev vcan0 type vcan
sudo ip link set vcan0 up

here are the scripts:
--- isotp_server.sh ---
#!/bin/bash
# Answer every ISO-TP PDU received on $iface with the 3-byte PDU "7F 01 11".
iface=vcan0
echo "Wait for Messages on $iface"
while true; do
      # receive one PDU (isotprecv creates and closes an ISO-TP socket per call)
      exec 3< <(isotprecv -s 77E -d 714 -b F -p AA:AA $iface)
      rxpid=$!
      wait $rxpid
      output=$(cat <&3)
      # send the reply (again a fresh ISO-TP socket for this single PDU)
      echo "7F 01 11" | isotpsend -s 77E -d 714 -p AA:AA -L 16:8:0 $iface
done

IMO the issue arises with the use of isotpsend and isotprecv.
These tools are intended to give a hands-on impression of how the isotp
stack works.

This kind of use in a script leads to the creation and (now delayed)
*removal* of isotp sockets for *each* single PDU transfer.

Maybe I am wrong, but I see something different, e.g. without this patch:
  (000.000240)  canfd0  714   [8]  2B 01 01 01 01 01 01 01
  (000.000261)  canfd0  77E   [8]  30 0F 00 AA AA AA AA AA
  (000.000496)  canfd0  714   [8]  2C 01 01 01 01 01 01 01

and with this patch:
  (000.000414)  canfd0  714   [8]  2B 01 01 01 01 01 01 01
  (000.000262)  canfd0  77E   [8]  30 0F 00 AA AA AA AA AA
  (000.001536)  canfd0  714   [8]  2C 01 01 01 01 01 01 01


I'm running a 5.14.0-rc7-00011-g6e764bcd1cf7 kernel here and see this:

 (000.000001)  vcan0  714   [8]  2B 01 01 01 01 01 01 01
 (000.000015)  vcan0  77E   [8]  30 0F 00 AA AA AA AA AA
 (000.000005)  vcan0  714   [8]  2C 01 01 01 01 01 01 01

Test iso-tp with 1000 byte frames on vcan0 (data:01)
    1 / curr:   40 / min:   40 / max:   40 / avg:   40.0
    2 / curr:   30 / min:   30 / max:   40 / avg:   35.0
    3 / curr:   35 / min:   30 / max:   40 / avg:   35.0
    4 / curr:   52 / min:   30 / max:   52 / avg:   39.2
    5 / curr:   40 / min:   30 / max:   52 / avg:   39.4
(..)

when running your scripts from the initial post.

Is your canfd0 interface real hardware?

Best regards,
Oliver

So within one PDU transfer the first Consecutive Frame after
a Flow Control takes about 10 ms longer (the Consecutive Frames
are sent by the ISO-TP stack here; tested against a "real" ECU).
So the more data I transfer within one PDU, the more Flow Controls
there are and the more of these "delays" occur after each FC,
which increases the time for the whole PDU.


The better approach would be to write a C program that creates ONE
socket and simply read() from that socket and write() to it.

This should boost your performance even more.
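
For reference, a minimal sketch of this single-socket approach, using the
same addressing as isotp_server.sh (tx 0x77E, rx 0x714 on vcan0); the file
name is illustrative, and the padding/blocksize/link layer settings from
the script are left out for brevity (they would go through setsockopt()
with CAN_ISOTP_OPTS and CAN_ISOTP_LL_OPTS before bind()):

--- isotp_echo.c (sketch) ---
/* One CAN_ISOTP socket is bound once, then every received PDU is
 * answered with "7F 01 11" via write() - no socket teardown per PDU. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/isotp.h>

int main(void)
{
    struct sockaddr_can addr = { .can_family = AF_CAN };
    struct ifreq ifr;
    unsigned char buf[4096];
    unsigned char resp[] = { 0x7F, 0x01, 0x11 };
    int s, n;

    s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);
    if (s < 0) {
        perror("socket");
        return 1;
    }

    strcpy(ifr.ifr_name, "vcan0");
    ioctl(s, SIOCGIFINDEX, &ifr);
    addr.can_ifindex = ifr.ifr_ifindex;
    addr.can_addr.tp.tx_id = 0x77E;  /* -s 77E */
    addr.can_addr.tp.rx_id = 0x714;  /* -d 714 */

    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    /* one socket for the whole session: each PDU is a single read()
     * followed by a single write() */
    while ((n = read(s, buf, sizeof(buf))) > 0) {
        printf("received %d byte PDU\n", n);
        if (write(s, resp, sizeof(resp)) < 0) {
            perror("write");
            return 1;
        }
    }

    close(s);
    return 0;
}

This way the close() path with its synchronize_rcu() grace period is only
taken once when the program exits, not once per PDU.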

Sure, I do have this. These two scripts are only, let's say, a "reproducer".


Is the performance a real requirement for your use-case or is this
decreased socket close rate a finding which does not really affect your
work?

We have an application here which flashes an ECU over CAN according to VW80126.
So transferring the data to the ECU as quickly as possible is a use case.

Sven



