On Wed, Feb 05, 2020 at 10:19:53AM -0800, Chia-I Wu wrote:
> Make sure elemcnt does not exceed the maximum element count in
> virtio_gpu_queue_ctrl_sgs. We should improve our error handling or
> impose a size limit on execbuffer, which are TODOs.

Hmm, virtio supports indirect ring entries, so large execbuffers should
not be a problem ...

So I've waded through the virtio code.  Turns out our logic is wrong.
Luckily we err on the safe side (waiting for more free entries than we
actually need).  The patch below should fix that (not tested yet).

cheers,
  Gerd

diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c
index aa25e8781404..535399b3a3ea 100644
--- a/drivers/gpu/drm/virtio/virtgpu_vq.c
+++ b/drivers/gpu/drm/virtio/virtgpu_vq.c
@@ -328,7 +328,7 @@ static bool virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev,
 {
 	struct virtqueue *vq = vgdev->ctrlq.vq;
 	bool notify = false;
-	int ret;
+	int vqcnt, ret;
 
 again:
 	spin_lock(&vgdev->ctrlq.qlock);
@@ -341,9 +341,10 @@ static bool virtio_gpu_queue_ctrl_sgs(struct virtio_gpu_device *vgdev,
 		return notify;
 	}
 
-	if (vq->num_free < elemcnt) {
+	vqcnt = virtqueue_use_indirect(vq, elemcnt) ? 1 : elemcnt;
+	if (vq->num_free < vqcnt) {
 		spin_unlock(&vgdev->ctrlq.qlock);
-		wait_event(vgdev->ctrlq.ack_queue, vq->num_free >= elemcnt);
+		wait_event(vgdev->ctrlq.ack_queue, vq->num_free >= vqcnt);
 		goto again;
 	}
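
For illustration, here is a rough, untested sketch of the ring-accounting
idea as a small helper.  The helper name is made up for this sketch, and
virtqueue_use_indirect() is currently a static function inside
drivers/virtio/virtio_ring.c (operating on a struct vring_virtqueue), so
the patch above would need it exported or open-coded first; treat that
call as an assumption, not an existing driver-facing API.

#include <linux/virtio.h>

/*
 * Sketch only (hypothetical helper, not part of the patch above):
 * how many ring entries a submission with "elemcnt" scatter-gather
 * elements actually consumes.  With indirect descriptors the whole
 * chain is packed into a separately allocated indirect table and
 * occupies a single ring entry; otherwise each element needs its
 * own ring entry.
 */
static int virtio_gpu_ring_entries_needed(struct virtqueue *vq, int elemcnt)
{
	/* Assumes virtqueue_use_indirect() is made visible to drivers. */
	return virtqueue_use_indirect(vq, elemcnt) ? 1 : elemcnt;
}

/*
 * Usage, matching the second hunk above:
 *
 *	vqcnt = virtio_gpu_ring_entries_needed(vq, elemcnt);
 *	if (vq->num_free < vqcnt) {
 *		spin_unlock(&vgdev->ctrlq.qlock);
 *		wait_event(vgdev->ctrlq.ack_queue, vq->num_free >= vqcnt);
 *		goto again;
 *	}
 */

The point being that a large execbuffer only ties up a single ring slot
when the device supports indirect descriptors, so waiting for elemcnt
free slots is far more conservative than necessary.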