On Mon, Dec 25, 2023 at 2:45 PM Jason Wang <jasowang@xxxxxxxxxx> wrote:
>
> On Mon, Dec 25, 2023 at 2:34 PM Jason Xing <kerneljasonxing@xxxxxxxxx> wrote:
> >
> > On Mon, Dec 25, 2023 at 12:14 PM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > >
> > > On Mon, Dec 25, 2023 at 10:25 AM Jason Xing <kerneljasonxing@xxxxxxxxx> wrote:
> > > >
> > > > Hello Jason,
> > > > On Fri, Dec 22, 2023 at 10:36 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > > >
> > > > > On Thu, Dec 21, 2023 at 11:06 PM Willem de Bruijn
> > > > > <willemdebruijn.kernel@xxxxxxxxx> wrote:
> > > > > >
> > > > > > Heng Qi wrote:
> > > > > > >
> > > > > > > On 2023/12/20 10:45 PM, Willem de Bruijn wrote:
> > > > > > > > Heng Qi wrote:
> > > > > > > >> virtio-net has two ways to switch napi_tx: one is through the
> > > > > > > >> module parameter, and the other is through coalescing parameter
> > > > > > > >> settings (provided that the nic status is down).
> > > > > > > >>
> > > > > > > >> Sometimes we face performance regression caused by napi_tx,
> > > > > > > >> then we need to switch napi_tx when debugging. However, the
> > > > > > > >> existing methods are a bit troublesome, such as needing to
> > > > > > > >> reload the driver or turn off the network card.
> > > > >
> > > > > Why is this troublesome? We don't need to turn off the card, it's just
> > > > > a toggling of the interface.
> > > > >
> > > > > This ends up with pretty simple code.
> > > > >
> > > > > > > >> So try to make
> > > > > > > >> this update.
> > > > > > > >>
> > > > > > > >> Signed-off-by: Heng Qi <hengqi@xxxxxxxxxxxxxxxxx>
> > > > > > > >> Reviewed-by: Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx>
> > > > > > > > The commit does not explain why it is safe to do so.
> > > > > > >
> > > > > > > virtnet_napi_tx_disable ensures that already scheduled tx napi ends and
> > > > > > > no new tx napi will be scheduled.
> > > > > > >
> > > > > > > Afterwards, if the __netif_tx_lock_bh lock is held, the stack cannot
> > > > > > > send the packet.
> > > > > > >
> > > > > > > Then we can safely toggle the weight to indicate where to clear the buffers.
> > > > > > >
> > > > > > > > The tx-napi weights are not really weights: it is a boolean whether
> > > > > > > > napi is used for transmit cleaning, or whether packets are cleaned
> > > > > > > > in ndo_start_xmit.
> > > > > > >
> > > > > > > Right.
> > > > > > >
> > > > > > > > There certainly are some subtle issues with regard to pausing/waking
> > > > > > > > queues when switching between modes.
> > > > > > >
> > > > > > > What are the "subtle issues"? If there are any, we will find them.
> > > > > >
> > > > > > A single runtime test is not sufficient to exercise all edge cases.
> > > > > >
> > > > > > Please don't leave it to reviewers to establish the correctness of a
> > > > > > patch.
> > > > >
> > > > > +1
> > > > >
> > > > [...]
> > > > > And instead of trying to do this, it would be much better to optimize
> > > > > the NAPI performance. Then we can drop the orphan mode.
> > >
> > > [...]
> > > > Do you mean when to call skb_orphan()? If yes, I just want to provide
> > > > more information: we also have some performance issues where the
> > > > flow control takes a bad effect, especially under some small
> > > > throughput in our production environment.
> > >
> > > I think you need to describe it in detail.
> >
> > Some of the details were described below in the last email. The
> > decreased performance happened because of flow control: the delay of
> > skb free means the delay
>
> What do you mean by delay here? Is it an interrupt delay? If yes, does
> it work better if you simply remove

Delay means the interval from start_xmit() to the skb free. I collected
some numbers and found that some of them have very long intervals, which
might be normal?
$ sudo bpftrace txcompletion.bt
Attaching 4 probes...
trace from virtio start_xmit() to tcp_wfree() longer than 0 ns
^C

@average: 384386
@count: 348302

@hist:
[1K, 2K)         9551 |@@@@                                                |
[2K, 4K)       116513 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[4K, 8K)        88295 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@             |
[8K, 16K)       39965 |@@@@@@@@@@@@@@@@@                                   |
[16K, 32K)      39584 |@@@@@@@@@@@@@@@@@                                   |
[32K, 64K)      53970 |@@@@@@@@@@@@@@@@@@@@@@@@                            |
[64K, 128K)       415 |                                                    |
[128K, 256K)        0 |                                                    |
[256K, 512K)        0 |                                                    |
[512K, 1M)          0 |                                                    |
[1M, 2M)            0 |                                                    |
[2M, 4M)            0 |                                                    |
[4M, 8M)            0 |                                                    |
[8M, 16M)           0 |                                                    |
[16M, 32M)          0 |                                                    |
[32M, 64M)          0 |                                                    |
[64M, 128M)         0 |                                                    |
[128M, 256M)        0 |                                                    |
[256M, 512M)        0 |                                                    |
[512M, 1G)          0 |                                                    |
[1G, 2G)            0 |                                                    |
[2G, 4G)            9 |                                                    |

>
> virtqueue_enable_cb_delayed() with virtqueue_enable_cb()? As the
> former may delay the interrupt more or less depending on the traffic.

Due to the complexity of the production environment, I suspect the
interrupt could be delayed on the host. Thanks for the suggestion.

> >
> > of decreasing of sk_wmem_alloc, then it will
> > hit the limit of the TSQ mechanism, finally causing slow transmission in
> > the TCP layer.
>
> TSQ might work better with BQL, which virtio-net doesn't have right now.
>
> > > >
> > > > What strikes me as odd is that if I restart the network, the issue
> > > > will go with the wind. I cannot reproduce it on my testing machine.
> > > > One more thing: if I force skb_orphan() on the current skb in every
> > > > start_xmit(), it could also solve the issue, but not in a proper way.
> > > > After all, it drops the flow control... :S
> > >
> > > Yes, that's the known issue.
> >
> > Really? Do you have some numbers or some discussion links to
> > share? I failed to reproduce on my testing machine; probably the short
> > rtt is the key/obstacle.
>
> I basically mean it is a known side effect of skb_orphan() as it might
> decrease sk_wmem_alloc too early.

Oh, I got it wrong :(

> Thanks
>
> >
> > @Eric, it seems it still exists.
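[Editor's note: the actual txcompletion.bt is not included in the thread. A minimal sketch that would produce a histogram of this shape — assuming kprobes on virtio-net's start_xmit() and on tcp_wfree() as the skb-free marker, keyed by the sk_buff pointer — might look like:]

```
// Hypothetical reconstruction, NOT the poster's actual script.
// start_xmit() is virtio-net's ndo_start_xmit; arg0 is the struct sk_buff *.
// tcp_wfree() is the TCP skb destructor, called when the skb is freed.
BEGIN
{
	printf("trace from virtio start_xmit() to tcp_wfree() longer than 0 ns\n");
}

kprobe:start_xmit
{
	@birth[arg0] = nsecs;          // timestamp the skb at transmit time
}

kprobe:tcp_wfree
/@birth[arg0]/
{
	$delta = nsecs - @birth[arg0]; // interval from start_xmit() to skb free
	@average = avg($delta);
	@count = count();
	@hist = hist($delta);
	delete(@birth[arg0]);
}

END
{
	clear(@birth);                 // drop skbs still in flight at exit
}
```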
> >
> > Thanks,
> > Jason
> > >
> > > Thanks
> > > >
> > > > Thanks,
> > > > Jason
> > > > > >
> > > > > > The napi_tx and non-napi code paths differ in how they handle at least
> > > > > > the following structures:
> > > > > >
> > > > > > 1. skb: non-napi orphans these in ndo_start_xmit. Without napi this is
> > > > > > needed, as the delay until the next ndo_start_xmit and thus completion is
> > > > > > unbounded.
> > > > > >
> > > > > > When switching to napi mode, orphaned skbs may now be cleaned by the
> > > > > > napi handler. This is indeed safe.
> > > > > >
> > > > > > When switching from napi to non-napi, the unbound latency resurfaces.
> > > > > > It is a small edge case, and I think a potentially acceptable risk, if
> > > > > > the user of this knob is aware of the risk.
> > > > > >
> > > > > > 2. virtqueue callback ("interrupt" masking). The non-napi path enables
> > > > > > the interrupt (disables the mask) when available descriptors fall
> > > > > > beneath a low watermark, and reenables when it recovers above a high
> > > > > > watermark. Napi disables when napi is scheduled, and reenables on
> > > > > > napi complete.
> > > > > >
> > > > > > 3. dev_queue->state (QUEUE_STATE_DRV_XOFF). If the ring falls below
> > > > > > a low watermark, the driver stops the stack from queuing more packets.
> > > > > > In napi mode, it schedules napi to clean packets. See the calls to
> > > > > > netif_xmit_stopped, netif_stop_subqueue, netif_start_subqueue and
> > > > > > netif_tx_wake_queue.
> > > > > >
> > > > > > Some of this can be assumed safe by looking at existing analogous
> > > > > > code, such as the queue stop/start in virtnet_tx_resize.
> > > > > >
> > > > > > But that all virtqueue callback and dev_queue->state transitions are
> > > > > > correct when switching between modes at runtime is not trivial to
> > > > > > establish, and deserves some thought and explanation in the commit
> > > > > > message.
> > > > >
> > > > > Thanks