Re: [RFC PATCH 00/13] x86 User Interrupts support

On Mon, Sep 13, 2021 at 01:01:19PM -0700, Sohil Mehta wrote:
> User Interrupts Introduction
> ============================
> 
> User Interrupts (Uintr) is a hardware technology that enables delivering
> interrupts directly to user space.
> 
> Today, virtually all communication across privilege boundaries goes through
> the kernel. Examples include signals, pipes, remote procedure calls, and
> hardware-interrupt-based notifications. User interrupts provide the foundation
> for more efficient (low latency and low CPU utilization) versions of these
> common operations by avoiding transitions through the kernel.
> 
> In the User Interrupts hardware architecture, a receiver is always expected to
> be a user space task. However, a user interrupt can be sent by another user
> space task, the kernel, or an external source (such as a device).
> 
> In addition to the general infrastructure to receive user interrupts, this
> series introduces a single source: interrupts from another user task.  These
> are referred to as User IPIs.
> 
> The first implementation of User IPIs will be in the Intel processor code-named
> Sapphire Rapids. Refer to Chapter 11 of the Intel Architecture instruction set
> extensions document for details of the hardware architecture [1].
> 
> Series-reviewed-by: Tony Luck <tony.luck@xxxxxxxxx>
> 
> Main goals of this RFC
> ======================
> - Introduce this upcoming technology to the community.
> This cover letter includes a hardware architecture summary along with the
> software architecture and kernel design choices. This post is a bit long as a
> result. Hopefully, it helps answer more questions than it creates :) I am also
> planning to talk about User Interrupts next week at the LPC Kernel Summit.
> 
> - Discuss potential use cases.
> We are starting to look at actual use cases and libraries (like libevent[2]
> and liburing[3]) that can take advantage of this technology. Unfortunately, we
> don't have much to share on this right now. We need some help from the
> community to identify use cases that can benefit from this. We would like to
> make sure the proposed APIs work for the eventual consumers.
> 
> - Get early feedback on the software architecture.
> We are hoping to get some feedback on the direction of the overall software
> architecture - starting with User IPI, extending it to kernel-to-user
> interrupt notifications and external interrupts in the future.
> 
> - Discuss some of the main architectural open issues.
> There is a lot of work that still needs to happen to enable this technology.
> We are looking for input on future patches that would be of interest. Here
> are some of the big open issues that we are looking to resolve.
> * Should Uintr interrupt all blocking system calls like sleep(), read(),
>   poll(), etc? If so, should we implement an SA_RESTART type of mechanism
>   similar to signals? - Refer to the 'Blocking for interrupts' section below.
> 
> * Should the User Interrupt Target table (UITT) be shared between threads of a
>   multi-threaded application or maybe even across processes? - Refer to the
>   'Sharing the UITT' section below.
> 
> Why care about this? - Micro benchmark performance
> ==================================================
> There is a ~9x or higher performance improvement using User IPI over other IPC
> mechanisms for event signaling.
> 
> Below is the average normalized latency for 1M ping-pong IPC notifications
> with message size = 1.
> 
> +------------+-------------------------+
> | IPC type   |   Relative Latency      |
> |            |(normalized to User IPI) |
> +------------+-------------------------+
> | User IPI   |                     1.0 |
> | Signal     |                    14.8 |
> | Eventfd    |                     9.7 |
> +------------+-------------------------+

Is this the bi-directional eventfd benchmark?
https://github.com/intel/uintr-ipc-bench/blob/linux-rfc-v1/source/eventfd/eventfd-bi.c

Two things stand out:

1. The server and client threads are racing on the same eventfd.
   Eventfds aren't bi-directional! The eventfd_wait() function has code
   to write the value back, which is a waste of CPU cycles and hinders
   progress. I've never seen eventfd used this way in real applications.
   Can you use two separate eventfds? (See the first sketch below.)

2. The fd is in blocking mode and the task may be descheduled, so we're
   measuring eventfd read/write latency plus scheduler/context-switch
   latency. A fairer comparison against user interrupts would be to busy
   wait on a non-blocking fd so the scheduler/context-switch latency is
   mostly avoided (see the second sketch below). After all, the
   uintrfd-bi.c benchmark already does this in uintrfd_wait():

     // Keep spinning until the interrupt is received
     while (!uintr_received[token]);
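
On point 1, here is a rough, untested sketch of the two-eventfd
ping-pong I have in mind (all names are made up for illustration and
error handling is omitted):

     /* Ping-pong between two threads over two unidirectional eventfds. */
     #include <sys/eventfd.h>
     #include <pthread.h>

     #define ITERATIONS 1000000

     static int ping_fd; /* client -> server */
     static int pong_fd; /* server -> client */

     static void *server(void *arg)
     {
             eventfd_t val;

             (void)arg;
             for (long i = 0; i < ITERATIONS; i++) {
                     eventfd_read(ping_fd, &val); /* wait for the client */
                     eventfd_write(pong_fd, 1);   /* answer the client */
             }
             return NULL;
     }

     int main(void)
     {
             pthread_t thread;
             eventfd_t val;

             ping_fd = eventfd(0, 0);
             pong_fd = eventfd(0, 0);

             pthread_create(&thread, NULL, server, NULL);
             for (long i = 0; i < ITERATIONS; i++) {
                     eventfd_write(ping_fd, 1);   /* wake the server */
                     eventfd_read(pong_fd, &val); /* wait for the answer */
             }
             pthread_join(thread, NULL);
             return 0;
     }

That way neither side ever consumes its own notification and nothing
has to be written back.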
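
On point 2, the eventfd equivalent of that spin loop would be roughly
the following (again untested, and the helper name is made up). The fd
would be created with eventfd(0, EFD_NONBLOCK) so the reader stays
on-CPU instead of sleeping in the kernel:

     #include <sys/eventfd.h>
     #include <errno.h>

     static void busy_wait_eventfd(int efd)
     {
             eventfd_t val;

             /*
              * eventfd_read() fails with EAGAIN while the counter is
              * zero, so keep spinning until a value arrives - the
              * moral equivalent of the uintr_received[token] loop.
              */
             while (eventfd_read(efd, &val) == -1 && errno == EAGAIN)
                     ;
     }

With both changes the numbers should be much closer to a pure
notification-latency comparison.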
