Re: IsoBUS: huge transfers

On Fri, 30 Nov 2018 09:44:25 +0100
Oleksij Rempel <o.rempel@xxxxxxxxxxxxxx> wrote:

> On Thu, Nov 29, 2018 at 03:19:06PM +0100, Kurt Van Dijck wrote:
> > On Thu, 29 Nov 2018 14:56:07 +0100, Oleksij Rempel wrote:  
> > > Hi all,
> > > 
> > > I'm working on huge transfers for the j1939 stack.
> > > The current stack works as follows:
> > > * RX path:
> > > on J1939_(E)TP_CMD_RTS, check whether the transfer size is valid according to
> > > the protocol. Try to allocate a buffer; if that fails, send J1939_XTP_ABORT_RESOURCE.
> > > 
> > > * TX path:
> > > - The userspace app calls send(..buf, size,...).
> > > - j1939: try to allocate an skb of that size (for a huge transfer this will fail
> > > at some point).
> > > - Send J1939_(E)TP_CMD_RTS.
> > > - After J1939_(E)TP_CMD_CTS, pick data out of the huge skb and create tiny skbs
> > > suitable for CAN (this part can probably be optimized with skb_fragments).
> > > 
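For reference, a minimal userspace sketch of the sender side described above, using the stack's socket API (linux/can/j1939.h). The interface name, the addresses and the data PGN below are made-up placeholders, and error handling is trimmed:

/*
 * Sketch only: one connected J1939 socket, one big send() for the whole
 * transfer. Addresses, interface name and PGN are placeholders.
 */
#include <linux/can.h>
#include <linux/can/j1939.h>
#include <net/if.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

static int send_pool(const void *buf, size_t size)
{
        struct sockaddr_can saddr = {
                .can_family = AF_CAN,
                .can_ifindex = if_nametoindex("can0"),
                .can_addr.j1939 = {
                        .name = J1939_NO_NAME,
                        .pgn = J1939_NO_PGN,
                        .addr = 0x80,           /* our source address (placeholder) */
                },
        };
        struct sockaddr_can peer = saddr;
        ssize_t done;
        int sock;

        peer.can_addr.j1939.addr = 0x90;        /* destination address (placeholder) */
        peer.can_addr.j1939.pgn = 0x12300;      /* data PGN (placeholder) */

        sock = socket(PF_CAN, SOCK_DGRAM, CAN_J1939);
        if (sock < 0)
                return -1;
        bind(sock, (struct sockaddr *)&saddr, sizeof(saddr));
        connect(sock, (struct sockaddr *)&peer, sizeof(peer));

        /*
         * One send() for the whole transfer: as described above, the kernel
         * tries to allocate a single skb of 'size' bytes and then runs the
         * RTS/CTS session, carving CAN-sized frames out of it.
         */
        done = send(sock, buf, size, 0);

        close(sock);
        return done == (ssize_t)size ? 0 : -1;
}
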
> > > So far, I have the following questions:
> > > - How do IsoBUS devices handle huge transfers? Are all of them able to transfer
> > > 112 MiB in one run? If a system does not have enough resources for one big
> > > transfer, how is that handled?
> > > - How should an aborted transfer be handled? 112 MiB will probably take some
> > > hours to run. In case of an error, even at the end of the transfer, the socket
> > > will just return an error to user space. In the current state, a complete
> > > retransmission will be needed, and that makes no real sense...
> > > 
> > > What are your common practices and experience?  
> > 
> > My (maybe outdated) experiences:
> > 
> > The whole ETP thing was invented for a single PGN (whose name and number
> > I forgot, but it's used to transfer 'object pools').
> > Besides the creation of ETP in the network layer, IsoBus also decided
> > that the data of multiple equal (same SRC+DST) transfers would just be
> > glued together, as a form of 'application-level fragmentation'.
> > 
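A rough sketch of what that gluing looks like from the receiving application's side: consecutive completed transfers from the same peer are simply appended to one pool buffer. The termination condition below is made up for illustration; in reality it is defined by the application (ISOBUS) layer:

/*
 * Sketch only: 'sock' is a bound J1939 socket as in the earlier sketch.
 * Each recv() on a SOCK_DGRAM j1939 socket returns one completed
 * (E)TP transfer; the application concatenates them into the pool.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <stddef.h>

static ssize_t recv_pool(int sock, void *pool, size_t expected_size)
{
        size_t filled = 0;

        while (filled < expected_size) {
                ssize_t len = recv(sock, (char *)pool + filled,
                                   expected_size - filled, 0);
                if (len <= 0)
                        return -1;      /* error or peer gone */
                filled += len;          /* glue the next transfer onto the pool */
        }
        return filled;
}
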
> > Taking advantage of a partial transfer is hard to deploy.
> > 
> > I've never seen 112 MB transfers; I've seen up to roughly 100 kB, which
> > completes in +/- 5..10 seconds.
> > 
> > ETP has no broadcast. So, it's up to the sender to decide what to do
> > when the receiver aborts on resource problems.
> > Mostly, the receiver is a well-equipped terminal with plenty of RAM.
> > 
> > Of course, not all tiny nodes support ETP or even TP.
> >   
> > > 
> > > If I see it correctly, ETP is good for reducing the time needed for the
> > > handshake on each transfer. At what size does this advantage stop providing
> > > additional performance?  
> 
> Ok, that means we should support different programming models for
> different transfer sizes.
> 1. Send over the socket with a sensible buffer size (1-10 MiB).

That's how we've worked so far, right? So you're planning on adding a
max cap and diverting to another way of data caching/handling to avoid a
blocking send() call?

If so, then the 'sensible buffer size' correlates with the size of wmem.
I don't know how the wmem size is determined, but if it is not fixed, then
making the 'sensible buffer size' fixed might not be a good idea.
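
For what it's worth, if wmem here is the normal per-socket send buffer, userspace can already inspect and raise it via SO_SNDBUF (bounded by the net.core.wmem_default / net.core.wmem_max sysctls). A rough sketch:

/*
 * Rough sketch, assuming wmem means the ordinary socket send buffer:
 * query the current limit and request a larger one. The kernel doubles
 * the requested value for bookkeeping overhead and caps it at
 * net.core.wmem_max unless SO_SNDBUFFORCE is used with CAP_NET_ADMIN.
 */
#include <stdio.h>
#include <sys/socket.h>

static void bump_sndbuf(int sock, int wanted)
{
        int val = 0;
        socklen_t len = sizeof(val);

        getsockopt(sock, SOL_SOCKET, SO_SNDBUF, &val, &len);
        printf("current wmem: %d bytes\n", val);

        if (val < wanted)
                setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &wanted, sizeof(wanted));
}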

> 2. Send over memfd? Size up to J1939_MAX_ETP_PACKET_SIZE.
> Needs a different programming model in user space.
> 3. Add support for SOCK_STREAM, where the kernel splits data into sensibly
> sized ETP messages. Message boundaries from user space are not
> preserved, like TCP.
> 
> Before I start with variant 1, what would be a "sensible buffer size"?
> Is 1 MiB enough?
> 
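
For variant 1, I'd expect the userspace pattern to be a chunked-send loop along these lines, relying on the receiver-side gluing Kurt mentioned. The 1 MiB chunk size is only the value proposed above, not a defined constant:

/*
 * Sketch only: split one large buffer into chunk-sized send() calls, so
 * each send() becomes one (E)TP session of at most CHUNK_SIZE bytes.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <stddef.h>

#define CHUNK_SIZE (1024 * 1024)        /* "sensible buffer size" candidate */

static int send_chunked(int sock, const char *buf, size_t size)
{
        size_t off = 0;

        while (off < size) {
                size_t todo = size - off;

                if (todo > CHUNK_SIZE)
                        todo = CHUNK_SIZE;
                if (send(sock, buf + off, todo, 0) != (ssize_t)todo)
                        return -1;
                off += todo;
        }
        return 0;
}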

Regards,

-- 
Robin van der Gracht
Protonic Holland



