RE: [RFC 2/6] dmaengine: xilinx_dma: Pass AXI4-Stream control words to netdev dma client

Hi,

> -----Original Message-----
> From: Peter Ujfalusi [mailto:peter.ujfalusi@xxxxxx]
> Sent: Tuesday, April 24, 2018 3:21 PM
> To: Vinod Koul <vinod.koul@xxxxxxxxx>
> Cc: Lars-Peter Clausen <lars@xxxxxxxxxx>; Radhey Shyam Pandey
> <radheys@xxxxxxxxxx>; michal.simek@xxxxxxxxxx; linux-
> kernel@xxxxxxxxxxxxxxx; dmaengine@xxxxxxxxxxxxxxx;
> dan.j.williams@xxxxxxxxx; Appana Durga Kedareswara Rao
> <appanad@xxxxxxxxxx>; linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [RFC 2/6] dmaengine: xilinx_dma: Pass AXI4-Stream control words
> to netdev dma client
> 
> On 2018-04-24 06:55, Vinod Koul wrote:
> > On Thu, Apr 19, 2018 at 02:40:26PM +0300, Peter Ujfalusi wrote:
> >>
> >> On 2018-04-18 16:06, Lars-Peter Clausen wrote:
> >>>> Hrm, true, but it is hardly the metadata use case. It is more
> >>>> like a different DMA transfer type.
> >>>
> >>> When I look at this with my astronaut architect view from high,
> >>> high up above, I do not see a difference between metadata and
> >>> multi-planar data.
> >>
> >> I tend to disagree.
> >
> > and we would love to hear more :)
> 
> It is getting pretty off topic from the subject ;) and I'm sorry about that.
> 
> Multi-planar data is _data_; the metadata is
> parameters/commands/information on _how_ to use the data.
> It is more like a replacement or extension of:
> configure peripheral
> send data
> 
> to
> 
> send data with configuration
> 
> In both cases the same data is sent, but the configuration and
> parametrization are 'simplified' to allow per-packet changes.
> 
> >>> Both split the data that is sent to the peripheral into multiple
> >>> sub-streams, each carrying part of the data. I'm sure there are
> >>> peripherals that interleave data and metadata on the same data
> >>> stream, similar to how we have left and right channels interleaved
> >>> in an audio stream.
> >>
> >> Slimbus, S/PDIF?
> >>
> >>> What about metadata that is not contiguous and split into multiple
> >>> segments? How do you handle passing an sgl to the metadata
> >>> interface? And then it suddenly looks quite similar to the normal
> >>> DMA descriptor interface.
> >>
> >> Well, the metadata is for the descriptor. The descriptor describes
> >> the data transfer _and_ can convey additional information. Nothing
> >> is interleaved; the data and the descriptor are different things.
> >> It is more like TCP headers detached from the data (but pointing
> >> to it).
> >>
> >>> But maybe that's just one abstraction level too high.
> >>
> >> I understand your point, but at the end the metadata needs to end up in
> >> the descriptor which is describing the data that is going to be moved.
> >>
> >> The descriptor is not sent as a separate DMA transfer; it is part
> >> of the DMA transfer and is handled internally by the DMA.
> >
> > That is a bit confusing to me. I thought the DMA was transparent to
> > metadata and would blindly collect and transfer it along with the
> > descriptor. So at a high level we are talking about two transfers
> > (probably conjoined at the hip, which you want to call one transfer).
> 
> At the end, yes, both the descriptor and the data are going to be sent
> to the other end.
> 
> As a reference, see [1].
> 
> The metadata is not a separate entity; it is part of the descriptor
> (Host Packet Descriptor, HPD).
> Each transfer (packet) is described by an HPD. The HPD has optional
> fields, like the EPIB (Extended Packet Info Block) and PSdata
> (Protocol Specific data).
> 
> When the DMA reads the HPD, it moves the data described by the HPD to
> the entry point (or from the entry point to memory) and copies the
> EPIB/PSdata from the HPD to a destination HPD. The other end will use
> the destination HPD to learn the size of the data and to get the
> metadata from the descriptor.
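> 
> To make that concrete, here is a rough, illustrative sketch of an HPD
> as a C struct. The field names and sizes are made up for the example;
> the real layout is documented in [1]:
> 
> #include <linux/types.h>
> 
> /* Simplified, illustrative Host Packet Descriptor (HPD) */
> struct hpd {
> 	u32 desc_info;	/* descriptor type, packet length */
> 	u32 pkt_info;	/* EPIB present?, number of PSdata words */
> 	u64 buf_ptr;	/* physical address of the data buffer */
> 	u32 buf_len;	/* length of the data described */
> 	u32 epib[4];	/* optional Extended Packet Info Block */
> 	u32 psdata[16];	/* optional Protocol Specific data words */
> };
> 
> The DMA copies epib[]/psdata[] from the source HPD to the destination
> HPD, so the metadata travels with the packet without ever being part
> of the data buffer itself.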
> 
> In essence every entity within the Multicore Navigator system has a
> pktdma; they all work in a similar way, but their capabilities might
> differ. Our entry to this mesh is via the DMA.
> 
> > but why can't we visualize this as just a DMA transfer? Maybe you
> > want to signal/attach something to the transfer; can't we do that
> > with an additional flag, DMA_METADATA etc.?
> 
> For the data we need to call dmaengine_prep_slave_* to create the
> descriptor (HPD). The metadata needs to be present in the HPD, hence I
> was thinking of attach_metadata as a per-descriptor API.
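> 
> From the client driver's point of view it could look roughly like this
> (untested sketch; dmaengine_desc_attach_metadata() is only the
> proposed call, and buf_dma/metadata/metadata_len stand in for the
> client's buffers):
> 
> 	struct dma_async_tx_descriptor *desc;
> 	int ret;
> 
> 	desc = dmaengine_prep_slave_single(chan, buf_dma, len,
> 					   DMA_MEM_TO_DEV,
> 					   DMA_PREP_INTERRUPT);
> 	if (!desc)
> 		return -ENOMEM;
> 
> 	/* place the metadata into the HPD backing this descriptor */
> 	ret = dmaengine_desc_attach_metadata(desc, metadata, metadata_len);
> 	if (ret)
> 		return ret;
> 
> 	dmaengine_submit(desc);
> 	dma_async_issue_pending(chan);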
> 
> If a separate dmaengine_prep_slave_* call is used to allocate the HPD
> and place the metadata in it, then the subsequent
> dmaengine_prep_slave_* call must be for the data of the transfer, and
> it is still unclear how that prepare call would know where to look for
> the HPD it needs to update with the parameters for the data transfer.
> 
> I guess the driver could store the HPD pointer in the channel data
> when prepare is called with DMA_METADATA, making it mandatory that the
> next prepare is for the data portion. The driver would then pick up
> the stored HPD pointer and update a descriptor belonging to a
> different tx_desc.
> 
> But if we are going that far, we could have a flag like DMA_DESCRIPTOR
> and let client drivers allocate the whole descriptor, fill in the
> metadata and hand it to the DMA driver, which would update the rest of
> the HPD.
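> 
> In that case the flow might be something like this (again only a
> sketch: DMA_DESCRIPTOR is a made-up flag, hpd_pool is assumed to be a
> dma_pool the client created, and fill_client_metadata() is a
> placeholder for the client writing its EPIB/PSdata):
> 
> 	dma_addr_t hpd_dma;
> 	struct hpd *hpd;
> 
> 	/* client allocates and owns the whole descriptor */
> 	hpd = dma_pool_alloc(hpd_pool, GFP_KERNEL, &hpd_dma);
> 	if (!hpd)
> 		return -ENOMEM;
> 
> 	/* client fills in the metadata portion of the HPD */
> 	fill_client_metadata(hpd);
> 
> 	/* hand the pre-filled HPD to the DMA driver, which updates the
> 	 * transfer-specific fields before submitting it to hardware */
> 	desc = dmaengine_prep_slave_single(chan, hpd_dma, sizeof(*hpd),
> 					   DMA_MEM_TO_DEV, DMA_DESCRIPTOR);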
> 
> Well, let's see where this is going to go when I can send the patches
> for review.
Thanks all. @Peter: if the metadata patchset is ready, it may be good
to send it as an RFC?

> 
> [1] http://www.ti.com/lit/ug/sprugr9h/sprugr9h.pdf
> 
> - Péter
> 
> Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
> Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki