Re: [RFC 0/3] cpuidle: add poll_source API and virtio vq polling

On 2021/7/14 12:19 AM, Stefan Hajnoczi wrote:
These patches are not polished yet, but I would like to request feedback
on this approach and share performance results with you.

Idle CPUs tentatively enter a busy wait loop before halting when the cpuidle
haltpoll driver is enabled inside a virtual machine. This reduces wakeup
latency for events that occur soon after the vCPU becomes idle.
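
(For orientation, the core of the existing poll loop in
drivers/cpuidle/poll_state.c looks roughly like this; a simplified
sketch that omits the cpu_relax() batching in the real code:)

  u64 limit = cpuidle_poll_time(drv, dev);
  u64 time_start = local_clock();

  while (!need_resched()) {
          cpu_relax();
          if (local_clock() - time_start > limit)
                  break;  /* polling timed out, fall back to halt */
  }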

This patch series extends the cpuidle busy wait loop with the new poll_source
API so drivers can participate in polling. Such polling-aware drivers disable
their device's irq during the busy wait loop to avoid the cost of interrupts.
This reduces latency further than regular cpuidle haltpoll, which still relies
on irqs.
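
Roughly, the API has the following shape (see the first patch for the
real definitions; the names here are illustrative and may differ):

  struct poll_source_ops {
          void (*start)(struct poll_source *src); /* loop begins: mask irq */
          void (*poll)(struct poll_source *src);  /* called each iteration */
          void (*stop)(struct poll_source *src);  /* loop ends: unmask irq */
  };

  struct poll_source {
          const struct poll_source_ops *ops;
          struct list_head node;  /* per-CPU list of busy wait participants */
          int cpu;
  };

  int poll_source_register(struct poll_source *src);
  int poll_source_unregister(struct poll_source *src);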

Virtio drivers are modified to use the poll_source API so all virtio
device types get this feature (a sketch of the per-virtqueue hookup
follows the results table below). The following virtio-blk fio benchmark
results show the improvement:

              IOPS (numjobs=4, iodepth=1, 4 virtqueues)
                before   poll_source      io_poll
4k randread    167102  186049 (+11%)  186654 (+11%)
4k randwrite   162204  181214 (+11%)  181850 (+12%)
4k randrw      159520  177071 (+11%)  177928 (+11%)
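
As mentioned above, here is a sketch of the per-virtqueue hookup. The
to_vq() and vq_irq() helpers are hypothetical, but the idea is to mask
the vq's irq at the start of the busy wait loop and run the usual
handler from the poll callback:

  static void vq_poll_start(struct poll_source *src)
  {
          struct virtqueue *vq = to_vq(src);

          disable_irq_nosync(vq_irq(vq));
  }

  static void vq_poll(struct poll_source *src)
  {
          struct virtqueue *vq = to_vq(src);

          vring_interrupt(0 /* irq, unused */, vq);
  }

  static void vq_poll_stop(struct poll_source *src)
  {
          struct virtqueue *vq = to_vq(src);

          enable_irq(vq_irq(vq));
  }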

The comparison against io_poll shows that cpuidle poll_source achieves
equivalent performance to the block layer's io_poll feature (which I
implemented in a separate patch series [1]).

The advantage of poll_source is that applications do not need to
explicitly set the RWF_HIPRI I/O request flag. Since few applications
actually use RWF_HIPRI, poll_source is attractive: it takes advantage of
CPU cycles that would have been spent in cpuidle haltpoll anyway.
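
For reference, the opt-in that io_poll requires from applications looks
something like this (minimal sketch; in practice the file must also be
opened with O_DIRECT on a queue with io_poll enabled):

  #define _GNU_SOURCE
  #include <sys/uio.h>

  /* Issue a polled read: RWF_HIPRI asks the block layer to busy wait
   * for completion instead of sleeping until the completion irq.
   */
  ssize_t hipri_read(int fd, void *buf, size_t len, off_t offset)
  {
          struct iovec iov = { .iov_base = buf, .iov_len = len };

          return preadv2(fd, &iov, 1, offset, RWF_HIPRI);
  }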

The current series does not improve virtio-net. I haven't investigated
deeply, but it is possible that NAPI and poll_source do not combine well.
See the final patch for a starting point on making the two work together.

I have not tried this on bare metal, but it might help there too. For
this optimization to make sense, the cost of disabling and re-enabling a
device's irq must be less than the savings from avoiding irq handling.
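
Back-of-envelope, masking the irq during the poll loop is a win when:

  cost(mask) + cost(unmask) < P(wakeup during poll) * cost(irq delivery)

where P(...) is the fraction of idle periods in which an event arrives
before the poll time limit expires.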

[1] https://lore.kernel.org/linux-block/20210520141305.355961-1-stefanha@xxxxxxxxxx/


Hi Stefan:

Some questions:

1) What are the advantages of introducing polling at the virtio level
instead of doing it in each subsystem? Polling at the virtio level may
only work well if all (or most) of the devices are virtio.

2) What are the advantages of using cpuidle instead of using a thread
(and leveraging the scheduler)?

3) Any reason it's virtio_pci specific and not a general virtio one?

Thanks

(Btw, do we need to cc scheduler guys?)



Stefan Hajnoczi (3):
   cpuidle: add poll_source API
   virtio: add poll_source virtqueue polling
   softirq: participate in cpuidle polling

  drivers/cpuidle/Makefile           |   1 +
  drivers/virtio/virtio_pci_common.h |   7 ++
  include/linux/interrupt.h          |   2 +
  include/linux/poll_source.h        |  53 +++++++++++++++
  include/linux/virtio.h             |   2 +
  include/linux/virtio_config.h      |   2 +
  drivers/cpuidle/poll_source.c      | 102 +++++++++++++++++++++++++++++
  drivers/cpuidle/poll_state.c       |   6 ++
  drivers/virtio/virtio.c            |  34 ++++++++++
  drivers/virtio/virtio_pci_common.c |  86 ++++++++++++++++++++++++
  drivers/virtio/virtio_pci_modern.c |   2 +
  kernel/softirq.c                   |  14 ++++
  12 files changed, 311 insertions(+)
  create mode 100644 include/linux/poll_source.h
  create mode 100644 drivers/cpuidle/poll_source.c




