On Fri, Oct 6, 2023 at 3:35 AM Feng Liu <feliu@xxxxxxxxxx> wrote:
>
> On 2023-07-24 2:46 a.m., Michael S. Tsirkin wrote:
> >
> > On Fri, Jul 21, 2023 at 10:18:03PM +0200, Maxime Coquelin wrote:
> >>
> >> On 7/21/23 17:10, Michael S. Tsirkin wrote:
> >>> On Fri, Jul 21, 2023 at 04:58:04PM +0200, Maxime Coquelin wrote:
> >>>>
> >>>> On 7/21/23 16:45, Michael S. Tsirkin wrote:
> >>>>> On Fri, Jul 21, 2023 at 04:37:00PM +0200, Maxime Coquelin wrote:
> >>>>>>
> >>>>>> On 7/20/23 23:02, Michael S. Tsirkin wrote:
> >>>>>>> On Thu, Jul 20, 2023 at 01:26:20PM -0700, Shannon Nelson wrote:
> >>>>>>>> On 7/20/23 1:38 AM, Jason Wang wrote:
> >>>>>>>>>
> >>>>>>>>> Adding cond_resched() to the command waiting loop for better
> >>>>>>>>> co-operation with the scheduler. This gives the CPU a chance to
> >>>>>>>>> run other tasks (e.g. workqueues) instead of busy looping when
> >>>>>>>>> preemption is not allowed on a device whose CVQ might be slow.
> >>>>>>>>>
> >>>>>>>>> Signed-off-by: Jason Wang <jasowang@xxxxxxxxxx>
> >>>>>>>>
> >>>>>>>> This still leaves hung processes, but at least it doesn't pin the
> >>>>>>>> CPU any more. Thanks.
> >>>>>>>> Reviewed-by: Shannon Nelson <shannon.nelson@xxxxxxx>
> >>>>>>>
> >>>>>>> I'd like to see a full solution:
> >>>>>>> 1- block until interrupt
> >>>>>>
> >>>>>> Would it make sense to also have a timeout?
> >>>>>> And when the timeout expires, set the FAILED bit in the device
> >>>>>> status?
> >>>>>
> >>>>> The virtio spec does not set any limits on the timing of vq
> >>>>> processing.
> >>>>
> >>>> Indeed, but I thought the driver could decide it is too long for it.
> >>>>
> >>>> The issue is that we keep waiting with rtnl locked; it can quickly
> >>>> make the system unusable.
> >>>
> >>> If this is a problem, we should find a way not to keep rtnl
> >>> locked indefinitely.
> >>
> >> From the tests I have done, I think it is. With OVS, a reconfiguration
> >> is performed when the VDUSE device is added, and when an MLX5 device
> >> is in the same bridge, it ends up doing an ioctl() that tries to take
> >> the rtnl lock. In this configuration, it is not possible to kill OVS
> >> because it is stuck trying to acquire the rtnl lock for mlx5, which is
> >> held by virtio-net.
> >
> > So for sure, we can queue up the work and process it later.
> > The somewhat tricky part is limiting the memory consumption.
>
> Hi Jason,
>
> Excuse me, is there any plan for when the v5 patch series will be sent
> out? Will the v5 patches solve the problem of the ctrl vq polling
> forever on buggy devices?

We agreed to harden VDUSE, and it would be hard to solve this at the
virtio-net level; see the discussions before. It might require support
from various layers (e.g. the networking core). We can use a workqueue
etc. as a mitigation.

If Michael is fine with this, I can post v5.
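(To make the mitigation concrete: "queue up the work and process it
later" would roughly mean deferring control commands to a workqueue so
the configuration path no longer waits under rtnl. A minimal sketch of
the idea follows; every name in it — virtnet_ctrl_cmd,
virtnet_queue_ctrl, ctrl_pending — is hypothetical, not from the posted
series.)

    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>
    #include <linux/types.h>
    #include <linux/workqueue.h>

    /* Hypothetical: a pending control command, queued instead of being
     * sent synchronously while holding rtnl.
     */
    struct virtnet_ctrl_cmd {
            struct list_head node;
            u8 class, cmd;          /* command to submit to the CVQ */
    };

    static LIST_HEAD(ctrl_pending);
    static DEFINE_SPINLOCK(ctrl_lock);

    static void virtnet_ctrl_work_fn(struct work_struct *work);
    static DECLARE_WORK(ctrl_work, virtnet_ctrl_work_fn);

    static void virtnet_ctrl_work_fn(struct work_struct *work)
    {
            struct virtnet_ctrl_cmd *c, *tmp;
            LIST_HEAD(todo);

            /* Grab the whole pending list, then process it off the
             * rtnl path.
             */
            spin_lock(&ctrl_lock);
            list_splice_init(&ctrl_pending, &todo);
            spin_unlock(&ctrl_lock);

            list_for_each_entry_safe(c, tmp, &todo, node) {
                    /* submit c to the CVQ and wait for it here */
                    list_del(&c->node);
                    kfree(c);
            }
    }

    /* Configuration path: queue the command and return immediately.
     * The tricky part Michael points out: ctrl_pending must be bounded,
     * or a slow device lets queued commands eat unbounded memory.
     */
    static int virtnet_queue_ctrl(u8 class, u8 cmd)
    {
            struct virtnet_ctrl_cmd *c = kzalloc(sizeof(*c), GFP_KERNEL);

            if (!c)
                    return -ENOMEM;
            c->class = class;
            c->cmd = cmd;

            spin_lock(&ctrl_lock);
            list_add_tail(&c->node, &ctrl_pending);
            spin_unlock(&ctrl_lock);

            schedule_work(&ctrl_work);
            return 0;
    }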
Thanks

> Thanks
> Feng
>
> >>>>>>> 2- still handle surprise removal correctly by waking in that
> >>>>>>> case
> >>>>>>>
> >>>>>>>>> ---
> >>>>>>>>>   drivers/net/virtio_net.c | 4 +++-
> >>>>>>>>>   1 file changed, 3 insertions(+), 1 deletion(-)
> >>>>>>>>>
> >>>>>>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> >>>>>>>>> index 9f3b1d6ac33d..e7533f29b219 100644
> >>>>>>>>> --- a/drivers/net/virtio_net.c
> >>>>>>>>> +++ b/drivers/net/virtio_net.c
> >>>>>>>>> @@ -2314,8 +2314,10 @@ static bool virtnet_send_command(struct virtnet_info *vi, u8 class, u8 cmd,
> >>>>>>>>>          * into the hypervisor, so the request should be handled immediately.
> >>>>>>>>>          */
> >>>>>>>>>         while (!virtqueue_get_buf(vi->cvq, &tmp) &&
> >>>>>>>>> -              !virtqueue_is_broken(vi->cvq))
> >>>>>>>>> +              !virtqueue_is_broken(vi->cvq)) {
> >>>>>>>>> +               cond_resched();
> >>>>>>>>>                 cpu_relax();
> >>>>>>>>> +       }
> >>>>>>>>>
> >>>>>>>>>         return vi->ctrl->status == VIRTIO_NET_OK;
> >>>>>>>>> }
> >>>>>>>>> --
> >>>>>>>>> 2.39.3

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
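(For reference, the "full solution" Michael asks for above — 1- block
until interrupt, 2- wake the waiter on surprise removal — combined with
Maxime's timeout suggestion could look roughly like the sketch below.
The names virtnet_cvq_done, virtnet_wait_ctrl, ctrl_done and
VIRTNET_CVQ_TIMEOUT are hypothetical, not from any posted patch.)

    #include <linux/completion.h>
    #include <linux/jiffies.h>
    #include <linux/virtio.h>

    #define VIRTNET_CVQ_TIMEOUT     (5 * HZ)  /* arbitrary example bound */

    static DECLARE_COMPLETION(ctrl_done);

    /* 1- Block until interrupt: the CVQ callback wakes the waiter
     * instead of the waiter spinning on virtqueue_get_buf().
     * 2- Surprise removal: the transport marks the vq broken and must
     * invoke this callback too, so the waiter is woken rather than
     * left hanging.
     */
    static void virtnet_cvq_done(struct virtqueue *cvq)
    {
            complete(&ctrl_done);
    }

    static bool virtnet_wait_ctrl(struct virtqueue *cvq)
    {
            /* A timeout, as Maxime suggests, so a buggy device cannot
             * wedge the caller (and rtnl) forever; on expiry the driver
             * could set the FAILED bit in the device status.  Real code
             * would reinit_completion() before each command.
             */
            if (!wait_for_completion_timeout(&ctrl_done, VIRTNET_CVQ_TIMEOUT))
                    return false;

            return !virtqueue_is_broken(cvq);
    }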