Re: [PATCH v7 15/18] NTB: Add support for EPF PCI-Express Non-Transparent Bridge

Hi Arnd,

On 12/11/20 6:54 pm, Arnd Bergmann wrote:
> On Tue, Nov 10, 2020 at 4:42 PM Kishon Vijay Abraham I <kishon@xxxxxx> wrote:
>> On 10/11/20 8:29 pm, Arnd Bergmann wrote:
>>> On Tue, Nov 10, 2020 at 3:20 PM Kishon Vijay Abraham I <kishon@xxxxxx> wrote:
>>>> On 10/11/20 7:55 am, Sherry Sun wrote:
>>>
>>>>> But for VOP, only two boards are needed (one board as host and one board as card) to realize
>>>>> communication between the two systems, so my question is: what are the advantages of using NTB?
>>>>
>>>> NTB is a bridge that facilitates communication between two different
>>>> systems. So by itself it will not be the source or sink of any data,
>>>> unlike a normal EP-to-RP system (or VOP), which will be.
>>>>
>>>>> Because I think the architecture of NTB seems more complicated. Many thanks!
>>>>
>>>> Yeah, I think it enables a different use case altogether. Consider two
>>>> x86 HOST PCs (each having an RP) that have to communicate over PCIe.
>>>> NTB can be used in such cases for the two x86 PCs to communicate with
>>>> each other over PCIe, which wouldn't be possible without NTB.
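
[For illustration, a minimal sketch of a client of the mainline Linux
NTB framework (include/linux/ntb.h). The ntb_* calls are the real
in-tree API; the demo_* names and the probe logic are made up.]

#include <linux/module.h>
#include <linux/ntb.h>

static int demo_ntb_probe(struct ntb_client *client, struct ntb_dev *ntb)
{
	int ret;

	/* Bring the link up; bulk data later moves through memory windows. */
	ret = ntb_link_enable(ntb, NTB_SPEED_AUTO, NTB_WIDTH_AUTO);
	if (ret)
		return ret;

	/*
	 * A real client registers a context (ntb_set_ctx()) and waits for
	 * the link-up event before touching scratchpads or doorbells; this
	 * sketch just shows the calls involved.
	 */
	ntb_spad_write(ntb, 0, 0xfeedbeef);	/* hand a value to the peer */
	return ntb_peer_db_set(ntb, BIT_ULL(0));	/* kick the peer */
}

static void demo_ntb_remove(struct ntb_client *client, struct ntb_dev *ntb)
{
	ntb_link_disable(ntb);
}

static struct ntb_client demo_ntb_client = {
	.ops = {
		.probe	= demo_ntb_probe,
		.remove	= demo_ntb_remove,
	},
};

static int __init demo_ntb_init(void)
{
	return ntb_register_client(&demo_ntb_client);
}
module_init(demo_ntb_init);

static void __exit demo_ntb_exit(void)
{
	ntb_unregister_client(&demo_ntb_client);
}
module_exit(demo_ntb_exit);

MODULE_LICENSE("GPL v2");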
>>>
>>> I think for VOP, we should have an abstraction that can work on either NTB
>>> or directly on the endpoint framework but provide an interface that then
>>> lets you create logical devices the same way.
>>>
>>> Doing VOP based on NTB plus the new NTB_EPF driver would also
>>> work and just move the abstraction somewhere else, but I guess it
>>> would complicate setting it up for those users who only care about the
>>> simpler endpoint case.
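
[For illustration, one hypothetical shape such an abstraction could
take: a small ops table that either an NTB backend or a raw
endpoint-framework backend fills in, so logical (e.g. VOP) devices are
created the same way on top of both. Nothing like this exists in
mainline; every name below is invented.]

#include <linux/io.h>
#include <linux/types.h>

struct vop_transport;	/* opaque, backend-specific state */

struct vop_transport_ops {
	/* bring the shared link up/down (NTB link, or EP BAR setup) */
	int  (*open)(struct vop_transport *vt);
	void (*close)(struct vop_transport *vt);
	/* map a region both sides can reach (memory window, or mapped BAR) */
	void __iomem *(*map_window)(struct vop_transport *vt, size_t len);
	/* kick the peer (peer doorbell, or raised IRQ/MSI) */
	int  (*notify_peer)(struct vop_transport *vt);
	/* register a callback invoked on the peer's kicks */
	void (*set_notify_cb)(struct vop_transport *vt,
			      void (*cb)(void *priv), void *priv);
};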
>>
>> I'm not sure if you've had a chance to look at [1], where I added
>> support for an RP<->EP system with both sides running Linux: the EP is
>> configured using the Linux EP framework (and, in patches 20 and 21,
>> HOST ports connected to an NTB switch use the Linux NTB framework) to
>> communicate using virtio over PCIe.
>>
>> The cover-letter [1] shows a picture of the two use cases supported in
>> that series.
>>
>> [1] -> http://lore.kernel.org/r/20200702082143.25259-1-kishon@xxxxxx
> 
> No, I missed that, thanks for pointing me to it!
> 
> This looks very promising indeed, I need to read up on the whole
> discussion there. I also see your slides at [1], which help explain some
> of it. I have one fundamental question that I can't figure out from
> the description, maybe you can help me here:
> 
> How is the configuration managed, taking the EP case as an
> example? Your UseCase1 example sounds like the system that owns
> the EP hardware is the one that turns the EP into a vhost device,
> and creates a vhost-rpmsg device on top, while the RC side would
> probe the pci-vhost and then detect a virtio-rpmsg device to talk to.

That's correct. Slide 9 in [1] should give the layering details.

> Can it also do the opposite, so you end up with e.g. a virtio-net
> device on the EP side and vhost-net on the RC?

Unfortunately no. Again referring to slide 9 in [1], we only have
vhost-pci-epf on the EP side, which only creates a "vhost_dev" to deal
with the vhost side of things. For the opposite, we'd need to create a
virtio-pci-epf driver for the EP side that interacts with the virtio
core (and also the corresponding vhost back end on the PCI host).
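
[For illustration, a rough sketch of the EP-side shape such a
virtio-pci-epf driver might take. struct virtio_config_ops,
register_virtio_device() and vring_transport_features() are the real
mainline virtio interfaces; everything prefixed vepf_ is invented for
this sketch, and the data path through the endpoint controller's
address translation is deliberately left out.]

#include <linux/string.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>
#include <linux/virtio_ids.h>
#include <linux/virtio_ring.h>

static u64 vepf_get_features(struct virtio_device *vdev)
{
	/* a real driver would read the features the RC host advertises */
	return BIT_ULL(VIRTIO_F_VERSION_1);
}

static int vepf_finalize_features(struct virtio_device *vdev)
{
	vring_transport_features(vdev);	/* keep only transport features */
	return 0;
}

static u8 vepf_get_status(struct virtio_device *vdev)
{
	return 0;	/* a real driver would read shared status from a BAR */
}

static void vepf_set_status(struct virtio_device *vdev, u8 status)
{
	/* a real driver would write shared status and notify the peer */
}

static void vepf_reset(struct virtio_device *vdev)
{
	vepf_set_status(vdev, 0);
}

static void vepf_get(struct virtio_device *vdev, unsigned int offset,
		     void *buf, unsigned int len)
{
	memset(buf, 0, len);	/* a real driver: read device-config window */
}

static void vepf_set(struct virtio_device *vdev, unsigned int offset,
		     const void *buf, unsigned int len)
{
	/* a real driver: write device-config window */
}

static int vepf_find_vqs(struct virtio_device *vdev, unsigned int nvqs,
			 struct virtqueue *vqs[], vq_callback_t *callbacks[],
			 const char * const names[], const bool *ctx,
			 struct irq_affinity *desc)
{
	/*
	 * The hard part: vrings must sit in memory the RC host can also
	 * reach, i.e. behind address translation programmed through the
	 * EPC APIs. Deliberately omitted here.
	 */
	return -EOPNOTSUPP;
}

static const struct virtio_config_ops vepf_config_ops = {
	.get_features		= vepf_get_features,
	.finalize_features	= vepf_finalize_features,
	.get_status		= vepf_get_status,
	.set_status		= vepf_set_status,
	.reset			= vepf_reset,
	.get			= vepf_get,
	.set			= vepf_set,
	.find_vqs		= vepf_find_vqs,
};

/*
 * An instance would set vdev->config = &vepf_config_ops and
 * vdev->id.device (e.g. VIRTIO_ID_RPMSG), then call
 * register_virtio_device() so the matching virtio driver probes.
 */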

Thanks
Kishon

> 
>      Arnd
> 
> [1] https://linuxplumbersconf.org/event/7/contributions/849/attachments/642/1175/Virtio_for_PCIe_RC_EP_NTB.pdf
> 


