"Michael S. Tsirkin" <mst@xxxxxxxxxx> writes: > On Thu, May 29, 2014 at 04:56:45PM +0930, Rusty Russell wrote: >> virtqueue_add() populates the virtqueue descriptor table from the sgs >> given. If it uses an indirect descriptor table, then it puts a single >> descriptor in the descriptor table pointing to the kmalloc'ed indirect >> table where the sg is populated. >> + for (i = 0; i < total_sg; i++) >> + desc[i].next = i+1; >> + return desc; > > Hmm we are doing an extra walk over descriptors here. > This might hurt performance esp for big descriptors. Yes, this needs to be benchmarked; since it's cache hot my gut feel is that it's a NOOP, but on modern machines my gut feel is always wrong. >> + if (vq->indirect && total_sg > 1 && vq->vq.num_free) >> + desc = alloc_indirect(total_sg, gfp); > > else desc = NULL will be a bit clearer won't it? Agreed. >> /* Update free pointer */ >> - vq->free_head = i; >> + if (desc == vq->vring.desc) >> + vq->free_head = i; >> + else >> + vq->free_head = vq->vring.desc[head].next; > > This one is slightly ugly isn't it? Yes, but it avoided another variable, and I was originally aiming at stack conservation. Turns out adding 'bool indirect' adds 32 bytes more stack for gcc 4.6.4 :( virtio_ring: minor neating Before: gcc 4.8.2: virtio_blk: stack used = 408 gcc 4.6.4: virtio_blk: stack used = 432 After: gcc 4.8.2: virtio_blk: stack used = 408 gcc 4.6.4: virtio_blk: stack used = 464 Signed-off-by: Rusty Russell <rusty@xxxxxxxxxxxxxxx> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 3adf5978b92b..7a7849bc26af 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -141,9 +141,10 @@ static inline int virtqueue_add(struct virtqueue *_vq, { struct vring_virtqueue *vq = to_vvq(_vq); struct scatterlist *sg; - struct vring_desc *desc = NULL; + struct vring_desc *desc; unsigned int i, n, avail, uninitialized_var(prev); int head; + bool indirect; START_USE(vq); @@ -176,21 +177,25 @@ static inline int virtqueue_add(struct virtqueue *_vq, * buffers, then go indirect. FIXME: tune this threshold */ if (vq->indirect && total_sg > 1 && vq->vq.num_free) desc = alloc_indirect(total_sg, gfp); + else + desc = NULL; if (desc) { /* Use a single buffer which doesn't continue */ vq->vring.desc[head].flags = VRING_DESC_F_INDIRECT; vq->vring.desc[head].addr = virt_to_phys(desc); - /* avoid kmemleak false positive (tis hidden by virt_to_phys) */ + /* avoid kmemleak false positive (hidden by virt_to_phys) */ kmemleak_ignore(desc); vq->vring.desc[head].len = total_sg * sizeof(struct vring_desc); /* Set up rest to use this indirect table. */ i = 0; total_sg = 1; + indirect = true; } else { desc = vq->vring.desc; i = head; + indirect = false; } if (vq->vq.num_free < total_sg) { @@ -230,10 +235,10 @@ static inline int virtqueue_add(struct virtqueue *_vq, desc[prev].flags &= ~VRING_DESC_F_NEXT; /* Update free pointer */ - if (desc == vq->vring.desc) - vq->free_head = i; - else + if (indirect) vq->free_head = vq->vring.desc[head].next; + else + vq->free_head = i; /* Set token. */ vq->data[head] = data; -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@xxxxxxxxx. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: <a href=mailto:"dont@xxxxxxxxx"> email@xxxxxxxxx </a>