On Fri, Jun 08, 2012 at 11:35:25AM +0800, Jason Wang wrote:
> >>> @@ -655,7 +695,17 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
> >>>  		kfree_skb(skb);
> >>>  		return NETDEV_TX_OK;
> >>>  	}
> >>> -	virtqueue_kick(vi->svq);
> >>> +
> >>> +	kick = virtqueue_kick_prepare(vi->svq);
> >>> +	if (unlikely(kick))
> >>> +		virtqueue_notify(vi->svq);
> >>> +
> >>> +	u64_stats_update_begin(&stats->syncp);
> >>> +	if (unlikely(kick))
> >>> +		stats->data[VIRTNET_TX_KICKS]++;
> >>> +	stats->data[VIRTNET_TX_Q_BYTES] += skb->len;
> >>> +	stats->data[VIRTNET_TX_Q_PACKETS]++;
> > is this statistic interesting?
> > how about decrementing when we free?
> > this way we see how many are pending..
>
> Currently we don't have per-vq statistics, only per-cpu ones, so the
> skb could be sent by one vcpu and freed by another.

Perhaps another reason to use per-queue statistics.
For transmit, it could be done easily, since we both send and free
skbs under a lock. I'm not sure how acceptable it is to take a lock
in get_stats, but send a separate patch like this and we'll see what
others say.

-- 
MST
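For the record, a minimal sketch of the per-queue pending counter
discussed above, assuming both the send and free paths hold the tx
queue lock as described. The struct and function names
(virtnet_sq_stats, sq_stats_queued, sq_stats_completed) are
hypothetical, not from the actual patch:

	/* Hypothetical per-send-queue counters. Both writers below run
	 * under the tx queue lock, so plain u64 arithmetic is safe here
	 * without u64_stats_sync. */
	struct virtnet_sq_stats {
		u64 packets;
		u64 bytes;
		u64 kicks;
		u64 pending;	/* skbs queued to the device, not yet freed */
	};

	/* Transmit side: called from start_xmit with the tx queue lock
	 * held, before the skb is handed off to the virtqueue. */
	static void sq_stats_queued(struct virtnet_sq_stats *s,
				    const struct sk_buff *skb, bool kicked)
	{
		s->packets++;
		s->bytes += skb->len;
		s->pending++;
		if (kicked)
			s->kicks++;
	}

	/* Completion side: called when a transmitted skb is reclaimed,
	 * also under the tx queue lock, so pending tracks how many skbs
	 * are in flight at any moment. */
	static void sq_stats_completed(struct virtnet_sq_stats *s)
	{
		s->pending--;
	}

A reader such as ndo_get_stats64 would then have to take the same tx
lock to get a consistent snapshot, which is exactly the open question
raised above.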