On 2019-06-18 14:55, Arnd Bergmann wrote:
> On Tue, Jun 18, 2019 at 10:36 PM Johannes Berg wrote:
>> On Tue, 2019-06-18 at 21:59 +0200, Arnd Bergmann wrote:
>>> From my understanding, the ioctl interface would create the lower
>>> netdev after talking to the firmware, and then user space would use
>>> the rmnet interface to create a matching upper-level device for that.
>>> This is an artifact of the strong separation of ipa and rmnet in the
>>> driver design.
>>
>> Huh. But if rmnet has muxing, and IPA supports that, why would you
>> need multiple lower netdevs?
>
> From my reading of the code, there is always exactly a 1:1 relationship
> between an rmnet netdev and an ipa netdev. rmnet does the encapsulation/
> decapsulation of the qmap data and forwards it to the ipa netdev,
> which then just passes data through between a hardware queue and
> the network stack.

There is an n:1 relationship between rmnet and IPA.
rmnet does the de-muxing to multiple netdevs based on the mux id
in the MAP header for RX packets and vice versa.
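
Concretely, the mux id sits in the per-packet MAP (QMAP) header. A
minimal sketch of the header layout as I read it from the rmnet driver
(little-endian bitfield order) and of the RX de-mux step; the
deliver_to_rmnet_dev() helper is a made-up placeholder for the real
RX plumbing:

    #include <arpa/inet.h>   /* ntohs */
    #include <stddef.h>
    #include <stdint.h>

    struct map_header {
            uint8_t  pad_len:6;     /* trailing padding in the payload */
            uint8_t  reserved_bit:1;
            uint8_t  cd_bit:1;      /* 1 = command frame, 0 = data */
            uint8_t  mux_id;        /* selects the PDN/session netdev */
            uint16_t pkt_len;       /* payload incl. pad, big endian */
    } __attribute__((packed));

    /* Made-up stand-in: hand the inner IP packet to the upper rmnet
     * netdev registered for this mux id. */
    void deliver_to_rmnet_dev(uint8_t mux_id, const uint8_t *pkt,
                              size_t len);

    /* RX de-mux of one MAP frame: strip the header, route by mux id */
    static void demux_one(const uint8_t *frame)
    {
            const struct map_header *map = (const void *)frame;
            size_t payload = ntohs(map->pkt_len) - map->pad_len;

            if (!map->cd_bit)       /* data frame, not a command */
                    deliver_to_rmnet_dev(map->mux_id,
                                         frame + sizeof(*map), payload);
    }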

> [side note: on top of that, rmnet also does "aggregation", which may
> be a confusing term that only means transferring multiple frames
> in a single transfer]
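
De-aggregation is then just walking those MAP headers back to back in
one buffer. A sketch reusing struct map_header and demux_one() from
above, assuming pkt_len also gives the offset of the next header:

    static void deaggregate(const uint8_t *buf, size_t len)
    {
            while (len >= sizeof(struct map_header)) {
                    const struct map_header *map = (const void *)buf;
                    size_t frame = sizeof(*map) + ntohs(map->pkt_len);

                    if (frame > len)        /* truncated trailing frame */
                            break;
                    demux_one(buf);         /* de-mux as above */
                    buf += frame;
                    len -= frame;
            }
    }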

>>> ipa definitely has multiple hardware queues, and Alex's
>>> driver does implement the data path on those, just not the
>>> configuration to enable them.
>>
>> OK, but perhaps you don't actually have enough to use one for each
>> session.
>
> I'm lacking the terminology here, but what I understood was that
> the netdev and queue again map to a session.
>
>>> Guessing once more, I suspect the XON/XOFF flow control
>>> was a workaround for the fact that rmnet and ipa have separate
>>> queues. The hardware channel on IPA may fill up, but user space
>>> talks to rmnet and still adds more frames to it because it doesn't
>>> know IPA is busy.
>>>
>>> Another possible explanation would be that this is actually
>>> forwarding state from the base station to tell the driver to
>>> stop sending data over the air.
>>
>> Yeah, but if you actually have a hardware queue per upper netdev then
>> you don't really need this - you just stop the netdev queue when the
>> hardware queue is full, and you have flow control automatically.
>> So I really don't see any reason to have these messages going back and
>> forth unless you plan to have multiple sessions muxed on a single
>> queue.
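
For reference, with one hardware queue per netdev that implicit flow
control is the usual stop/wake pattern; a rough sketch, where the
my_ring_*() helpers and struct my_ring are made up:

    static netdev_tx_t my_xmit(struct sk_buff *skb, struct net_device *dev)
    {
            struct my_ring *ring = netdev_priv(dev);

            my_ring_post(ring, skb);        /* queue frame to hardware */
            if (my_ring_full(ring))
                    netif_stop_queue(dev);  /* backpressure the stack */
            return NETDEV_TX_OK;
    }

    static void my_tx_complete(struct my_ring *ring)
    {
            /* ... reclaim completed descriptors ... */
            if (netif_queue_stopped(ring->dev) && !my_ring_full(ring))
                    netif_wake_queue(ring->dev);
    }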

Hardware may flow-control specific PDNs (rmnet interfaces) based on QoS,
not necessarily only when a hardware queue is full.
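
Such a per-PDN XOFF/XON arrives as a MAP command frame, and one
plausible way to apply it is to the matching netdev's queue state;
a sketch, where lookup_rmnet_dev() is a made-up helper and the command
values follow the rmnet MAP command enum as I read it:

    enum { MAP_CMD_FLOW_DISABLE = 0, MAP_CMD_FLOW_ENABLE = 1 };

    static void handle_flow_cmd(struct rmnet_port *port, u8 mux_id,
                                int cmd)
    {
            struct net_device *dev = lookup_rmnet_dev(port, mux_id);

            if (!dev)
                    return;
            if (cmd == MAP_CMD_FLOW_DISABLE)        /* XOFF this PDN */
                    netif_stop_queue(dev);
            else if (cmd == MAP_CMD_FLOW_ENABLE)    /* XON */
                    netif_wake_queue(dev);
    }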

> Sure, I definitely understand what you mean, and I agree that would
> be the right way to do it. All I said is that this is not how it was
> done in rmnet (this was again my main concern about the rmnet design
> after I learned it was required for ipa) ;-)