On Tue, Apr 19, 2022 at 05:12:36PM +0200, Jason A. Donenfeld wrote:
> Hey Alex,
>
> On Thu, Mar 10, 2022 at 12:18 PM Alexander Graf <graf@xxxxxxxxxx> wrote:
> > I agree on the slightly racy compromise and that it's a step in the
> > right direction. Doing this is a no-brainer IMHO and I like the proc
> > based poll approach.
>
> Alright. I'm going to email a more serious patch for that in the next
> few hours and you can have a look. Let's do that for 5.19.
>
> > I have an additional problem you might have an idea for with the poll
> > based path. In addition to the clone notification, I'd need to know at
> > which point everyone who was listening to a clone notification is
> > finished acting on it. If I spawn a tiny VM to do "work", I want to know
> > when it's safe to hand requests into it. How do I find out when that
> > point in time is?
>
> Seems tricky to solve. Even a count of current waiters and a
> generation number won't be sufficient, since it wouldn't take into
> account users who haven't _yet_ gotten to waiting. But maybe it's not
> the right problem to solve? Or somehow not necessary? For example, if
> the problem is a bit more constrained, a solution becomes easier: you
> have a fixed/known set of readers that you know about, and you
> guarantee that they're all waiting before the fork. Then after the
> fork, they all do something to alert you in their poll()er, and you
> count up how many alerts you get until it matches the number of
> expected waiters. Would that work? It seems like anything more general
> than that is just butting heads with the racy compromise we're already
> making.
>
> Jason

I have some ideas here ... but can you explain the use-case a bit more?

--
MST