On Fri, Jul 08, 2022 at 03:42:03AM -0700, Saurabh Singh Sengar wrote:
> On Wed, Jul 06, 2022 at 11:09:43AM +0000, David Laight wrote:
> > From: Praveen Kumar
> > > Sent: 06 July 2022 10:15
> > >
> > > On 05-07-2022 21:02, Saurabh Sengar wrote:
> > > > There can be scenarios where packets in the ring buffer are
> > > > continuously queued by the upper layer and dequeued in the storvsc
> > > > interrupt handler; such scenarios can hold the foreach_vmbus_pkt
> > > > loop (which executes as a tasklet) for a long duration. In theory
> > > > this loop can execute forever. Add a condition to limit this
> > > > tasklet to a finite amount of execution time to avoid such
> > > > hazardous scenarios.
> >
> > Does this really make much difference?
> >
> > I'd guess the tasklet gets immediately rescheduled as soon as
> > the upper layer queues another packet?
> >
> > Or do you get a different 'bug' where it is never woken again
> > because the ring is stuck full?
> >
> > 	David
>
> My initial understanding was that staying in a tasklet for "too long" may
> not be a good idea, but I was not sure what the "too long" value should
> be, so we are thinking of providing this parameter as a configurable
> sysfs entry. I couldn't find any Linux doc justifying this, so please
> correct me here if I am mistaken.

Staying in a tasklet for "too long" is only an issue if you have other
important work to do. You might be interested in improving fairness/latency
of various kinds of workloads vs. storvsc:

* different storage devices
* storvsc vs. netdevs
* storvsc vs. userspace

Which one are you trying to address? Or is performance the highest concern?
Then you would likely prefer to keep polling for as long as possible.

> We have also considered the networking drivers' NAPI budget feature while
> deciding on this approach, where the softirq exits once the budget is
> exceeded. This budget feature acts as a performance tuning parameter for
> the driver, and it can also help with ring buffer overflow. I believe
> similar reasoning applies to the scsi softirq as well.
>
> NAPI budget Ref : https://wiki.linuxfoundation.org/networking/napi.
>
> - Saurabh

Reading the code here
https://elixir.bootlin.com/linux/latest/source/drivers/hv/connection.c#L448,
it looks like if you restricted storvsc to only process a finite number of
packets per call, you would achieve the *budget* effect. You would get
called again if there are more packets to consume, and there is already a
timeout in that function: vmbus_on_event() stops re-invoking the callback
after 2 jiffies and reschedules the tasklet instead. Having two different
timeouts at these two levels will have weird interactions.

There is also the irq_poll facility, which exists for the block layer and
serves a similar purpose to NAPI. You would need to switch to using
HV_CALL_ISR. Rough sketches of both approaches follow.
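To make the per-call budget idea concrete, here is a minimal, untested
sketch. stor_budget and storvsc_handle_packet() are hypothetical names,
and a real implementation would also have to make sure the ring-buffer
read index gets committed (hv_pkt_iter_close()) when bailing out early:

#include <linux/hyperv.h>

static int stor_budget = 64;	/* hypothetical tunable (sysfs/module param) */

static void storvsc_on_channel_callback(void *context)
{
	struct vmbus_channel *channel = context;
	const struct vmpacket_descriptor *desc;
	int budget = stor_budget;

	foreach_vmbus_pkt(desc, channel) {
		storvsc_handle_packet(channel, desc);	/* hypothetical */
		if (--budget == 0)
			break;
	}
	/*
	 * Leftover packets are picked up by vmbus_on_event(): for
	 * HV_CALL_BATCHED channels it keeps re-invoking this callback
	 * while the ring is non-empty, within its own 2-jiffy limit,
	 * and reschedules the tasklet otherwise.
	 */
}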
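And a minimal sketch of the irq_poll pattern (see lib/irq_poll.c), again
with hypothetical my_* helpers for masking the source and dequeuing one
packet:

#include <linux/irq_poll.h>

struct my_dev {
	struct irq_poll iop;
	/* ... */
};

/* Runs in softirq context; must not process more than 'budget' items. */
static int my_poll(struct irq_poll *iop, int budget)
{
	struct my_dev *dev = container_of(iop, struct my_dev, iop);
	int done = 0;

	while (done < budget && my_dequeue_one_packet(dev))	/* hypothetical */
		done++;

	if (done < budget) {
		/* Ring drained: stop polling, let interrupts drive us again. */
		irq_poll_complete(iop);
		my_unmask_interrupt(dev);	/* hypothetical */
	}
	return done;
}

/* The HV_CALL_ISR callback would just mask the source and kick the poll: */
static void my_isr_callback(void *context)
{
	struct my_dev *dev = context;

	my_mask_interrupt(dev);		/* hypothetical */
	irq_poll_sched(&dev->iop);
}

/* At init: irq_poll_init(&dev->iop, 64, my_poll); */

irq_poll would then take care of the fairness questions above for you: it
round-robins between scheduled instances and bounds the total softirq time
per pass.

Jeremi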