On Tue, 02 Nov 2021 14:35:39 +0800, Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx> wrote:
> On Sun, 31 Oct 2021 10:46:12 -0400, Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
> > On Thu, Oct 28, 2021 at 06:49:17PM +0800, Xuan Zhuo wrote:
> > > In the indirect case, the indirect descs must be allocated and freed
> > > on every request, which adds a lot of CPU overhead.
> > >
> > > Here, a cache is added for indirect. If the number of indirect descs
> > > needed is less than VIRT_QUEUE_CACHE_DESC_NUM, a fixed desc array of
> > > size VIRT_QUEUE_CACHE_DESC_NUM is kept and cached for reuse.
> > >
> > > Signed-off-by: Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx>
> >
> > What bothers me here is what happens if the cache gets
> > filled on one NUMA node, then used on another?
>
> Well, this is a good question, I didn't think about it before.
>
> Given that, I feel the kmem_cache_alloc family of functions would be a good
> solution. But when I tested them before, there was no improvement. I want to
> study the reason for that, and I hope to solve the problem.
>
> In addition, I will also try the kmem_cache_alloc_bulk function you
> mentioned.

I'm thinking about another question: how does the cross-NUMA case arise here
in the first place? The virtio desc queue has the same cross-NUMA problem, so
do we really need to handle the cross-NUMA scenario?

The indirect descs are used together with the virtio descs, so it is enough
for them to be on the same NUMA node as the virtio desc queue. We could
therefore allocate the indirect desc cache at the same time as the virtio
desc queue.

Of course, we still have to deal with the fact that rx does not need an
indirect desc cache, so whether to use the indirect desc cache has to be set
per-queue.
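
Roughly something like the sketch below (just an illustration; the struct and
function names are made up here, not the real virtio_ring.c code): each queue
records whether it wants the cache at all, and the cache array is allocated
with kcalloc_node() on the same node as the vring.

	#include <linux/errno.h>
	#include <linux/slab.h>
	#include <linux/virtio_ring.h>

	/* Hypothetical per-queue indirect desc cache, kept next to the vring. */
	struct vq_desc_cache {
		bool enabled;             /* tx queues only; rx leaves this false */
		struct vring_desc *descs; /* VIRT_QUEUE_CACHE_DESC_NUM entries */
	};

	static int vq_desc_cache_init(struct vq_desc_cache *cache, int node,
				      bool want_cache, unsigned int num)
	{
		cache->enabled = want_cache;
		cache->descs = NULL;
		if (!want_cache)
			return 0;

		/* Same node as the vring, so the indirect descs never cross NUMA. */
		cache->descs = kcalloc_node(num, sizeof(*cache->descs),
					    GFP_KERNEL, node);
		if (!cache->descs)
			return -ENOMEM;
		return 0;
	}

The tx path could then fall back to the normal per-request allocation when
cache->enabled is false or the request needs more than
VIRT_QUEUE_CACHE_DESC_NUM descs.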