Re: IsoBUS: huge transfers

On 12/11/18 10:43 AM, Robin van der Gracht wrote:
>>> My (maybe outdated) experiences:
>>>
>>> The whole ETP thing was invented for a single PGN (whose name and number
>>> I forgot, but it's used to transfer 'object pools').
>>> Besides the creation of ETP in the network layer, IsoBus also decided
>>> that the data of multiple equal (same SRC+DST) transfers would just be
>>> glued together, as a form of 'application-level fragmentation'.
>>>
>>> Taking advantage of partial transfers is hard to deploy.
>>>
>>> I've never seen 112 MB transfers; I saw up to ~100 kB, which
>>> completes in about 5..10 seconds.
>>>
>>> ETP has no broadcast. So, it's up to the sender to decide what to do
>>> when the receiver aborts on resource problems.
>>> Mostly, the receiver is a well-equipped terminal with plenty of RAM.
>>>
>>> Of course, not all tiny nodes support ETP or even TP.
>>>   
>>>>
>>>> If I see it correctly, ETP is good for reducing the handshake time
>>>> needed on each transfer. At what size does this advantage stop
>>>> providing additional performance?
>>
>> Ok, it means we should support different programming models for
>> different transfer sizes.
>> 1. send over socket with sensible buffer size (1-10 MiB)
> 
> That's how we've worked so far, right? So, you're planning on adding a
> max cap and diverting to another way of data caching/handling to avoid a
> blocking send() call?

Oleksij is working on support for multiple SKBs per send() call. With
this enhancement you can send full ETP transfers without changing the
wmem size.

However, if you want a non-blocking send(), we have to copy the data
into the kernel and thus allocate huge amounts of memory from the
socket's wmem buffer, which is not possible without increasing the
buffer's size.
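To illustrate the limitation (with a plain AF_UNIX socketpair standing
in for a J1939 socket, since this is generic socket behavior): a
non-blocking send() can only queue as much data as the socket's wmem
allows, and the remainder is refused with EAGAIN instead of blocking.

```python
import socket

# Sketch: non-blocking send() is bounded by the socket's send buffer.
a, b = socket.socketpair()
a.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)
a.setblocking(False)

payload = b"\x00" * (16 * 1024 * 1024)  # far larger than the send buffer
sent = 0
try:
    while sent < len(payload):
        sent += a.send(payload[sent:])
except BlockingIOError:
    pass  # wmem is full; a blocking socket would sleep here instead

print(f"queued {sent} bytes before the buffer filled up")
a.close()
b.close()
```

Since nobody drains the peer end, only a buffer's worth of the 16 MiB
payload is accepted before send() starts failing with EAGAIN.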

Once that code is running, if you still want a non-blocking send()
without changing the wmem size, we have to look at some even more
advanced techniques.

That would involve mapping the send() data into kernel space, creating
a copy-on-write mapping on the data, and letting send() return to user
space immediately. If user space then changes the data, it would
transparently get a new page with the same contents.
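The copy-on-write idea can be seen from user space with a MAP_PRIVATE
mapping (only an analogy; the kernel-side scheme described above would
instead pin the sender's pages). Writing through a private mapping
transparently gives the writer its own page copy, leaving the original
data untouched:

```python
import mmap
import os
import tempfile

# Create a small file to map.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"original")
    path = f.name

# MAP_PRIVATE: writes trigger copy-on-write into a private page.
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), 0,
                  prot=mmap.PROT_READ | mmap.PROT_WRITE,
                  flags=mmap.MAP_PRIVATE)
    m[:8] = b"modified"          # CoW: only our private copy changes
    private_view = bytes(m[:8])
    m.close()

with open(path, "rb") as f:
    on_disk = f.read()           # the file still holds the original bytes

print(private_view, on_disk)
os.remove(path)
```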

> If so, then the 'sensible buffer size' correlates with the size of wmem.
> I don't know how the wmem size is determined, but if it is not fixed, then
> making the 'sensible buffer size' fixed might not be a good idea.

The wmem size is configured via a sockopt and the maximum is configured
via proc.
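Concretely (assuming a Linux host): the per-socket buffer is set with
the SO_SNDBUF sockopt, while /proc/sys/net/core/wmem_max caps what an
unprivileged setsockopt() may request. The kernel doubles the requested
value to account for bookkeeping overhead:

```python
import socket

# The system-wide cap on unprivileged SO_SNDBUF requests.
with open("/proc/sys/net/core/wmem_max") as f:
    wmem_max = int(f.read())

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
requested = 256 * 1024
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, requested)

# The kernel clamps the request to wmem_max, then doubles it.
effective = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(f"wmem_max={wmem_max} requested={requested} effective={effective}")
s.close()
```

Requests above wmem_max are silently clamped; raising the cap itself
needs CAP_NET_ADMIN (e.g. via sysctl net.core.wmem_max).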

But with the current approach, we don't have that sensible buffer size
limitation anymore.

Marc

-- 
Pengutronix e.K.                  | Marc Kleine-Budde           |
Industrial Linux Solutions        | Phone: +49-231-2826-924     |
Vertretung West/Dortmund          | Fax:   +49-5121-206917-5555 |
Amtsgericht Hildesheim, HRA 2686  | http://www.pengutronix.de   |
