On Thu, May 11, 2023 at 10:33:24AM -0700, Axel Rasmussen wrote:
> On Thu, May 11, 2023 at 10:18 AM David Matlack <dmatlack@xxxxxxxxxx> wrote:
> >
> > On Wed, May 10, 2023 at 2:50 PM Peter Xu <peterx@xxxxxxxxxx> wrote:
> > > On Tue, May 09, 2023 at 01:52:05PM -0700, Anish Moorthy wrote:
> > > > On Sun, May 7, 2023 at 6:23 PM Peter Xu <peterx@xxxxxxxxxx> wrote:
> > >
> > > What I wanted to do is to understand whether there's still a chance to
> > > provide a generic solution.  I don't know why you had a bunch of pmu
> > > stacks showing up in the graph; perhaps you forgot to disable some of
> > > the perf events when doing the test?  Let me know if you figure out why
> > > it happened like that (so far I didn't see it), but I feel guilty to
> > > keep overloading you with such questions.
> > >
> > > The major problem I had with this series is that it's definitely not a
> > > clean approach.  Say, even if you fully rely on the userspace app,
> > > you'll still need to rely on userfaultfd for kernel traps on corner
> > > cases or it just won't work.  IIUC that's also the concern from Nadav.
> >
> > This is a long thread, so apologies if the following has already been
> > discussed.
> >
> > Would per-tid userfaultfd support be a generic solution? i.e. Allow
> > userspace to create a userfaultfd that is tied to a specific task. Any
> > userfaults encountered by that task use that fd, rather than the
> > process-wide fd. I'm making the assumption here that each of these fds
> > would have independent signaling mechanisms/queues and so this would
> > solve the scaling problem.
> >
> > A VMM could use this to create 1 userfaultfd per vCPU and 1 thread per
> > vCPU for handling userfault requests. This seems like it'd have
> > roughly the same scalability characteristics as the KVM -EFAULT
> > approach.
>
> I think this would work in principle, but it's significantly different
> from what exists today.
>
> The splitting of userfaultfds Peter is describing is splitting up the
> HVA address space, not splitting per-thread.

[sorry, mostly travelling last week]

No, my idea was actually to split per thread, but since there's currently
no way to do that, I was thinking we should start testing with a split per
VMA, so it "emulates" the best we could get out of a per-thread split.

> I think for this design, we'd need to change UFFD registration so
> multiple UFFDs can register the same VMA, but can be filtered so they
> only receive fault events caused by some particular tid(s).

Having multiple real uffds per VMA is challenging: as you mentioned,
enqueuing may become more of an effort, and meanwhile it's hard to know
which attributes apply to the uffd covering a VMA, because each uffd has
its own feature list.

What we may need here is only the "logical queue" part of the uffd, so I
was considering supporting multiple queues for a _single_ userfaultfd.  I
actually mentioned some of this in my very first reply to Anish:

https://lore.kernel.org/all/ZEGuogfbtxPNUq7t@x1n/

  If the real problem lies in a bunch of threads queuing, is it possible
  that we can provide just more queues for the events?  The readers will
  just need to go over all the queues.

  The way to decide "which thread uses which queue" can be another
  problem; what comes up quickly to me is "hash(tid) % n_queues", but
  maybe it can be better.  Each vcpu thread will have a different tid, so
  they can hopefully scale across the queues.

The queues may also need to be created as sub-uffds, each supporting only
part of the uffd interface (read/poll, COPY/CONTINUE/ZEROPAGE) but not all
of it (e.g. UFFDIO_API shouldn't be supported there).
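For reference, a rough userspace sketch of what the "split per VMA"
emulation could look like with today's uffd API (illustrative only, not
code from this thread; the slicing policy, the struct queue helper and the
zero-page UFFDIO_COPY resolution are just placeholders, and error handling
is omitted): each handler thread gets its own userfaultfd registered over
a distinct slice of the guest memory range, so the fault queues split by
HVA range much as Axel described above.

/*
 * Sketch only: approximate a per-thread split by giving each handler
 * thread a private userfaultfd registered over one slice of the guest
 * memory area.  @mem must already be mapped (e.g. MAP_ANONYMOUS).
 */
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <poll.h>
#include <pthread.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static long page_size;

struct queue {
        int uffd;               /* this thread's private userfaultfd */
        unsigned long start;    /* base HVA of the slice it serves */
        unsigned long len;      /* length of the slice */
};

/* One handler thread per uffd: read fault events, resolve with UFFDIO_COPY. */
static void *handler(void *arg)
{
        struct queue *q = arg;
        void *page = calloc(1, page_size);      /* stand-in for real page data */

        for (;;) {
                struct pollfd pfd = { .fd = q->uffd, .events = POLLIN };
                struct uffd_msg msg;

                poll(&pfd, 1, -1);
                if (read(q->uffd, &msg, sizeof(msg)) != sizeof(msg))
                        continue;
                if (msg.event != UFFD_EVENT_PAGEFAULT)
                        continue;

                struct uffdio_copy copy = {
                        .dst = msg.arg.pagefault.address & ~(page_size - 1),
                        .src = (unsigned long)page,
                        .len = page_size,
                        .mode = 0,
                };
                ioctl(q->uffd, UFFDIO_COPY, &copy);
        }
        return NULL;
}

/* Carve @mem (@len bytes) into @n slices, one userfaultfd + thread each. */
static void split_per_range(void *mem, unsigned long len, int n)
{
        unsigned long slice;

        page_size = sysconf(_SC_PAGESIZE);
        slice = (len / n) & ~(page_size - 1);

        for (int i = 0; i < n; i++) {
                struct queue *q = malloc(sizeof(*q));
                struct uffdio_api api = { .api = UFFD_API };
                pthread_t tid;

                q->uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
                ioctl(q->uffd, UFFDIO_API, &api);

                q->start = (unsigned long)mem + i * slice;
                q->len = (i == n - 1) ? len - i * slice : slice;

                struct uffdio_register reg = {
                        .range = { .start = q->start, .len = q->len },
                        .mode = UFFDIO_REGISTER_MODE_MISSING,
                };
                ioctl(q->uffd, UFFDIO_REGISTER, &reg);

                pthread_create(&tid, NULL, handler, q);
        }
}

Note this only splits by address, not by which vcpu thread faulted, which
is exactly the limitation being discussed; it's just the closest thing we
can measure today.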
> This might also incur some (small?) overhead, because in the fault path
> we now need to maintain some data structure so we can lookup which UFFD
> to notify based on a combination of the address and our tid. Today, since
> VMAs and UFFDs are 1:1 this lookup is trivial. I think it's worth
> keeping in mind that a selling point of Anish's approach is that it's a
> very small change. It's plausible we can come up with some alternative
> way to scale, but it seems to me everything suggested so far is likely to
> require a lot more code, complexity, and effort vs. Anish's approach.

Yes, I think that's also the reason why I felt I had been overloading this
work too much.  If Anish eagerly wants it and makes it useful, then I'm
totally fine with it, because maintaining the 2nd cap seems trivial
assuming the maintainer would already accept the 1st cap.

I just hope it'll be thoroughly tested, even with Google's private
userspace hypervisor, so that the kernel interface (even if not
straightforward to a new user seeing it for the first time) is solid and
will serve the goal of the problem Anish is tackling.

Thanks,

--
Peter Xu