From: Sebastian Gottschall <s.gottschall@xxxxxxxxxx>
> Sent: 25 July 2020 16:42
>
> >> I agree. I can only say that I tested this patch recently because of
> >> this discussion here, and it can be changed via sysfs. But it doesn't
> >> work for wifi drivers, which mostly use dummy netdev devices. For these
> >> I made a small patch to get them working by calling napi_set_threaded
> >> manually, hardcoded in the drivers. (see patch below)
> >
> > With CONFIG_THREADED_NAPI there is no need to consider what you did here
> > in the napi core, because device drivers know better and are responsible
> > for it before calling napi_schedule(n).
>
> Yeah, but that approach will not work in some cases. Some (stupid) drivers
> take locks in the napi poll function, and in that case the performance goes
> to shit. I discovered this with the mvneta eth driver (Marvell) and with
> mt76 tx polling (rx works). For mvneta it causes very high latencies and
> packet drops; for mt76 it stops passing packets. It simply doesn't work
> (though in all cases there are no crashes). So threading will only work
> for drivers that are compatible with the approach; it cannot be used as a
> drop-in replacement, from my point of view. It is all a question of the
> driver design.

Why should it make (much) difference whether the napi callbacks (etc.) are
run in the context of the interrupted process or in that of a dedicated
kernel thread?
The process flags (or whatever) can even be set so that it appears to be the
expected 'softint' context.

In any case, running NAPI from a thread will just expose the next piece of
code that runs for ages in softint context.
I think I've seen the tail end of memory being freed under RCU finally
happening in softint context and taking absolutely ages.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)
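
For illustration, here is a rough sketch of what the hardcoded driver-side
change described above might look like for a wifi driver that polls through
a dummy netdev. This is not the patch referred to above (which is not
included here); the napi_set_threaded() call and its signature are assumed
from the series under discussion, and all other names and structure are
made up for the example.

	/*
	 * Illustrative sketch only -- not the patch mentioned above.
	 * Shows a driver that hosts its NAPI context on a dummy netdev
	 * (never registered, so no per-device sysfs "threaded" knob)
	 * opting that NAPI instance into threaded polling directly.
	 */
	#include <linux/netdevice.h>

	struct example_wifi_dev {
		struct net_device napi_dev;	/* dummy netdev, never registered */
		struct napi_struct napi;
	};

	static int example_poll(struct napi_struct *napi, int budget)
	{
		int done = 0;

		/* ... process up to 'budget' tx/rx completions ... */
		if (done < budget)
			napi_complete_done(napi, done);
		return done;
	}

	static void example_setup_napi(struct example_wifi_dev *wdev)
	{
		/* The dummy netdev exists only to carry the NAPI context. */
		init_dummy_netdev(&wdev->napi_dev);
		netif_napi_add(&wdev->napi_dev, &wdev->napi, example_poll,
			       NAPI_POLL_WEIGHT);

		/* Hardcode threaded polling, since an unregistered netdev has
		 * no sysfs node (assumed helper and signature). */
		napi_set_threaded(&wdev->napi, true);

		napi_enable(&wdev->napi);
	}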
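
And a minimal sketch of the "make it look like softint context" point: a
per-NAPI kthread can wrap the driver's poll callback in local_bh_disable()/
local_bh_enable(), so spin_lock_bh() and in_softirq() behave for the driver
much as they do when the poll runs from the NET_RX softirq. The wakeup and
state handshake with napi_schedule() is deliberately omitted, and the
function name is illustrative, not taken from any posted patch.

	/*
	 * Sketch of a dedicated NAPI poll thread that keeps the driver in a
	 * BH-disabled, softirq-like context while its poll callback runs.
	 */
	#include <linux/kthread.h>
	#include <linux/sched.h>
	#include <linux/netdevice.h>
	#include <linux/bottom_half.h>

	static int napi_thread_fn(void *data)
	{
		struct napi_struct *napi = data;

		while (!kthread_should_stop()) {
			/* Sleep until napi_schedule() wakes this thread
			 * (the wakeup/state handshake is omitted here). */
			set_current_state(TASK_INTERRUPTIBLE);
			if (!test_bit(NAPI_STATE_SCHED, &napi->state)) {
				schedule();
				continue;
			}
			__set_current_state(TASK_RUNNING);

			/* Run the poll callback with BH disabled so the driver
			 * still appears to be in its usual 'softint' context. */
			local_bh_disable();
			napi->poll(napi, napi->weight);
			local_bh_enable();
		}
		return 0;
	}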