On 25.04.21 at 12:59, Patrick Menschel wrote:
> On 25.04.21 at 12:11, Patrick Menschel wrote:
>> Hi,
>>
>> I'm experiencing a socket timeout when receiving an isotp transfer of
>> 256 + overhead bytes via a real CAN interface. The same application
>> runs without problems on vcan0.
>>
>> The output of dmesg:
>>
>> [ 146.507796] can: isotp protocol
>> [ 146.534527] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
>> [ 146.534672] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
>> [ 146.534794] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
>> [ 146.534920] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
>> [ 146.535044] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
>> [ 146.535169] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
>> [ 146.535294] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
>> [ 146.535418] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
>> [ 146.535543] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
>> [ 146.535668] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
>> [ 194.034609] can-isotp: isotp_tx_timer_handler: can_send_ret -105
>>
>> The output of candump:
>>
>> mcp0 17FC05F4 [8] 11 07 3C 00 00 00 00 00
>> mcp0 17FE05F4 [3] 30 00 00
>> mcp0 17FC05F4 [8] 21 00 42 4F 4F 54 00 00
>> mcp0 17FC05F4 [8] 22 00 00 00 5E 02 00 6A
>> mcp0 17FC05F4 [8] 23 00 65 90 C4 BB FC 6C
>> mcp0 17FC05F4 [8] 24 7A 68 F2 A2 C0 33 1F
>> mcp0 17FC05F4 [8] 25 C8 D5 A7 E1 7C 00 01
>> mcp0 17FC05F4 [8] 26 43 20 01 00 00 00 00
>> mcp0 17FC05F4 [8] 27 00 00 00 00 00 00 00
>> mcp0 17FC05F4 [8] 28 00 00 00 00 00 00 00
>> mcp0 17FC05F4 [8] 29 00 00 00 00 00 00 00
>> mcp0 17FC05F4 [8] 2A 00 00 15 00 00 00 10
>> mcp0 17FC05F4 [8] 2B 00 00 00 02 00 00 00
>> mcp0 17FC05F4 [8] 2C D4 57 45 20 E8 57 45
>> mcp0 17FC05F4 [8] 2D 20 14 58 45 20 1C 58
>> mcp0 17FC05F4 [8] 2E 45 20 24 58 45 20 2C
>>
>> The setup is a pi0w with the Seeed CAN FD board, i.e. an mcp2518fd, and
>> current standard Raspbian. I'm running a (file) server on mcp0 and a
>> mock client on mcp1; both interfaces are directly connected.
>> The communication is technically full duplex with two independent isotp
>> channels, but the other channel is silent until a transfer is complete.
>>
>> Is such an issue known?
>> Could this happen due to the limited resources of a pi0w?
>>
>> Best Regards,
>> Patrick Menschel
>>
>
> I just swapped the pi0w for a pi3b and see the same issue.
>
> [ 241.887464] can: controller area network core
> [ 241.887588] NET: Registered protocol family 29
> [ 241.900798] can: isotp protocol
> [ 241.906721] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
> [ 241.906770] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
> [ 241.906889] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
> [ 241.907014] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
> [ 241.907139] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
> [ 241.907264] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
> [ 241.907388] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
> [ 241.907513] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
> [ 241.907638] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
> [ 241.907763] NOHZ tick-stop error: Non-RCU local softirq work is pending, handler #08!!!
> [ 275.326539] can-isotp: isotp_tx_timer_handler: can_send_ret -105
>
> Is this a problem of the armhf platform?
>
> Regards,
> Patrick
>

OK, I got it, user fault.

sudo ip link set mcp0 txqueuelen 4000
sudo ip link set mcp1 txqueuelen 4000

did the job.

Is it possible to add this error message to the "official" documentation,
or to raise an error when the queue is full? It is somewhat misleading to
get an rx-side timeout error instead of a tx-side queue-full error.

Thanks and Best Regards,
Patrick
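
P.S. For anyone searching for this later: a minimal sketch of what the
receive side described above boils down to, assuming Python >= 3.7 with
CAN_ISOTP support and the can-isotp module loaded. The interface name and
the 29-bit IDs are taken from the candump above; which endpoint uses which
ID, the timeout value and the buffer size are assumptions, not my actual
code.

import socket

CAN_EFF = socket.CAN_EFF_FLAG  # 29-bit IDs, as seen in the candump

s = socket.socket(socket.AF_CAN, socket.SOCK_DGRAM, socket.CAN_ISOTP)
# bind tuple is (interface, rx_addr, tx_addr):
# data frames are received on 0x17FC05F4, flow control is sent on 0x17FE05F4
s.bind(("mcp0", 0x17FC05F4 | CAN_EFF, 0x17FE05F4 | CAN_EFF))
s.settimeout(5.0)  # assumed application timeout

try:
    payload = s.recv(4095)  # 256-byte block plus protocol overhead
    print(len(payload), payload[:16].hex())
except socket.timeout:
    # This is the rx-side timeout from the first mail: the sending side
    # already aborted with can_send_ret -105 (-ENOBUFS, tx queue full),
    # so the remaining consecutive frames never arrive.
    print("isotp receive timed out")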
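
And a sketch of the txqueuelen workaround done from the application side
instead of via ip link. /sys/class/net/<if>/tx_queue_len is the standard
sysfs attribute behind the ip command; the helper name and the 4000
threshold are just illustration, and writing the attribute needs root.

from pathlib import Path

def ensure_txqueuelen(ifname: str, minimum: int = 4000) -> None:
    # Same effect as `ip link set <ifname> txqueuelen <minimum>`.
    qlen = Path(f"/sys/class/net/{ifname}/tx_queue_len")
    if int(qlen.read_text()) < minimum:
        qlen.write_text(str(minimum))

for ifname in ("mcp0", "mcp1"):
    ensure_txqueuelen(ifname)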