On Fri, Nov 12, 2021 at 03:35:29AM +0100, Viresh Kumar wrote:
> On 11-11-21, 17:04, Vincent Whitchurch wrote:
> >  static int virtio_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
> > @@ -141,7 +140,6 @@ static int virtio_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
> >  	struct virtio_i2c *vi = i2c_get_adapdata(adap);
> >  	struct virtqueue *vq = vi->vq;
> >  	struct virtio_i2c_req *reqs;
> > -	unsigned long time_left;
> >  	int count;
> >  
> >  	reqs = kcalloc(num, sizeof(*reqs), GFP_KERNEL);
> > @@ -164,11 +162,9 @@ static int virtio_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
> >  	reinit_completion(&vi->completion);
> >  	virtqueue_kick(vq);
> >  
> > -	time_left = wait_for_completion_timeout(&vi->completion, adap->timeout);
> > -	if (!time_left)
> > -		dev_err(&adap->dev, "virtio i2c backend timeout.\n");
> > +	wait_for_completion(&vi->completion);
>
> I thought we decided on making this an insanely high value instead?

That wasn't my impression from the previous email thread.  Jie was OK
with doing it either way, and only disabling the timeout entirely makes
sense to me given the risk of memory corruption otherwise.

What "insanely high" timeout value do you have in mind, and why would it
be acceptable to corrupt kernel memory after that time?
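
To spell out the race I'm worried about, here is a rough sketch of the
timed-out path with the old code (the cleanup step is paraphrased, not
copied from the driver):

	/*
	 * Sketch only: at this point the buffers attached to reqs[] are
	 * still owned by the device, since virtqueue_get_buf() has not
	 * returned them yet.
	 */
	time_left = wait_for_completion_timeout(&vi->completion, adap->timeout);
	if (!time_left)
		dev_err(&adap->dev, "virtio i2c backend timeout.\n");

	/*
	 * The cleanup path runs regardless of the timeout and eventually
	 * does something along the lines of:
	 */
	kfree(reqs);

	/*
	 * If the device completes the transfer after this point, it
	 * writes into freed memory, i.e. kernel memory corruption.  A
	 * larger timeout only makes that window less likely to be hit;
	 * it does not close it.
	 */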