Re: [PATCH rdma-next v2 00/14]


 



On 1/5/2018 8:59 AM, Or Gerlitz wrote:
> On Thu, Jan 4, 2018 at 11:50 PM, Daniel Jurgens <danielj@xxxxxxxxxxxx> wrote:
>> On 1/4/2018 1:53 PM, Or Gerlitz wrote:
>>> On Thu, Jan 4, 2018 at 5:25 PM, Leon Romanovsky <leon@xxxxxxxxxx> wrote:
>>>>  v1 -> v2:
>>>>   * Dropped "IB/mlx5: Use correct mdev for vport queries in ib_virt"
>>>>  v0 -> v1:
>>>>   * Rebased to latest rdma/for-next
>>>>   * Enriched commit messages.
>>> The v2 post LGTM re the points I was commenting on: the IB virt patch
>>> was removed, and the cover letter + change-log properly elaborate on how
>>> the feature is configured and what the resulting IB devices would be
>>> under the different variations. See one followup below:
>>>
>>> [..]
>>>
>>>> SR-IOV devices follow the same pattern as the physical ones. VFs of a
>>>> master port can bind VFs of slave ports, if available, and operate as
>>>> dual port devices.
>>>>
>>>> Examples of devices passed to a VM:
>>>> (master)         - One net device, one IB device that has two ports. The slave
>>>>                    port will always be down.
>>>> (slave)          - One net device, no IB devices.
>>>> (slave, slave)   - Two net devices and no IB devices.
>>>> (master, master) - Two net devices, two IB devices, each with two ports.
>>>>                    The slave port of each device will always be down.
>>>> (master, slave)  - Two net devices, one IB device with two ports. Both
>>>>                    ports can be used.
>>>>
>>>> There are no changes to the existing design for net devices.
>>>>
>>>> The feature is disabled by default and it is enabled in firmware with mlxconfig.
>>> Dan, so what happens w.r.t. the different combinations specified above
>>> if, on an SR-IOV setup where the admin enabled the feature, VFs land in
>>> VMs whose mlx5 driver doesn't have these changes?
>> In this case it would look the same as it does today: one IB device per PCI device,
>> with one port each. The old driver doesn't know about the new capabilities, so it wouldn't bind the ports.
> sounds reasonable
>
>> Operating this way isn't ideal, though. In DPR mode the FW biases the schedule queue
>> allocation toward the master port, so performance could be worse on VFs that could have
>> been slave ports but are used as normal IB devices.
> So the FW doesn't have any indication that these driver instances are
> unaware of what's going on? Doesn't sound ideal, as you said, but that's
> life. It's not easy to come up with designs that let the different
> unmodified/modified PF/VF combinations (U/M, M/U) work at their best,
> but we should aim for that. This time it sounds like that didn't work,
> and we will have to live with it unless you can think of a way for the
> drivers to signal that to the FW and avoid the bias.

The schedule queue allocation happens during FW boot, even before the PF drivers are loaded, so I don't think there's any way around it in this case.
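
For anyone skimming the thread, the device-count table from the cover letter can be summarized in a few lines. This is just an illustrative sketch of the stated rules (not code from the patch set): each port always gets a net device, each master port exposes a dual-port IB device, a lone slave exposes no IB device, and a (master, slave) pair on one machine merges into a single dual-port IB device. The function name `expected_devices` is made up for the example.

```python
def expected_devices(roles):
    """Return (num_net_devices, num_ib_devices) for a tuple of port roles,
    following the table in the cover letter. Roles: "master" or "slave"."""
    num_net = len(roles)  # one net device per port; unchanged by the feature
    if set(roles) == {"master", "slave"}:
        # Master binds the slave: one IB device with two usable ports.
        num_ib = 1
    else:
        # Each master exposes its own dual-port IB device (slave port down);
        # lone slaves expose no IB device.
        num_ib = sum(1 for r in roles if r == "master")
    return num_net, num_ib

# The five combinations from the cover letter:
for combo in [("master",), ("slave",), ("slave", "slave"),
              ("master", "master"), ("master", "slave")]:
    print(combo, "->", expected_devices(combo))
```

Note this models only the new-driver behavior; as discussed above, an old driver in the VM would instead see one single-port IB device per PCI device regardless of role.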

--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


