On Mon, Aug 23, 2021 at 10:14 AM Steven Whitehouse <swhiteho@xxxxxxxxxx> wrote:
> On Fri, 2021-08-20 at 17:22 +0200, Andreas Gruenbacher wrote:
> > On Fri, Aug 20, 2021 at 3:11 PM Bob Peterson <rpeterso@xxxxxxxxxx>
> > wrote:
> >
> > [snip]
> >
> > > You can almost think of this as a performance enhancement. This
> > > concept allows a process to hold a glock for much longer periods
> > > of time, at a lower priority, for example, when
> > > gfs2_file_read_iter needs to hold the glock for very long-running
> > > iterative reads.
> >
> > Consider a process that allocates a somewhat large buffer and reads
> > into it in chunks that are not page aligned. The buffer initially
> > won't be faulted in, so we fault in the first chunk and write into
> > it. Then, when reading the second chunk, we find that the first page
> > of the second chunk is already present. We fill it, set the
> > HIF_MAY_DEMOTE flag, fault in more pages, and clear the
> > HIF_MAY_DEMOTE flag. If we then still have the glock (which is very
> > likely), we resume the read. Otherwise, we return a short result.
> >
> > Thanks,
> > Andreas
> >
>
> If the goal here is just to allow the glock to be held for a longer
> period of time, but with occasional interruptions to prevent
> starvation, then we have a potential model for this. There is
> cond_resched_lock() which does this for spin locks.

This isn't an appropriate model for what I'm trying to achieve here.
In the cond_resched case, we know at the time of the cond_resched call
whether or not we want to schedule. If we do, we want to drop the spin
lock, schedule, and then re-acquire the spin lock.

In the case we're looking at here, we want to fault in user pages.
There is no way of knowing beforehand if the glock we're currently
holding will have to be dropped to achieve that. In fact, it will
almost never have to be dropped. But if it does, we need to drop it
straight away to allow the conflicting locking request to succeed.

Have a look at how the patch queue uses gfs2_holder_allow_demote() and
gfs2_holder_disallow_demote():

https://listman.redhat.com/archives/cluster-devel/2021-August/msg00128.html
https://listman.redhat.com/archives/cluster-devel/2021-August/msg00134.html

Thanks,
Andreas
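
To make that pattern more concrete, here is a rough sketch of how a
long-running read could use it, loosely modeled on the patches linked
above. Only gfs2_holder_allow_demote(), gfs2_holder_disallow_demote(),
and HIF_MAY_DEMOTE are taken from this thread; fault_in_iov_iter_writable()
and gfs2_holder_queued() are assumed names for "fault in the user buffer"
and "is the glock still held", and the real series will differ in the
details:

/*
 * Rough sketch only, not the actual patches: a long read that keeps
 * the glock across most of the transfer, but lets a conflicting
 * request demote it while we are busy faulting in user pages.
 */
static ssize_t read_sketch(struct kiocb *iocb, struct iov_iter *to,
			   struct gfs2_holder *gh)
{
	ssize_t read = 0, ret;
	size_t left;

	ret = gfs2_glock_nq(gh);
	if (ret)
		return ret;

	for (;;) {
		/* Copy what we can without taking page faults. */
		pagefault_disable();
		ret = generic_file_read_iter(iocb, to);
		pagefault_enable();
		if (ret > 0)
			read += ret;

		if (!iov_iter_count(to))		/* buffer is full */
			break;
		if (ret <= 0 && ret != -EFAULT)		/* EOF or hard error */
			break;

		/*
		 * We stopped at a page that isn't faulted in.  Allow the
		 * glock to be demoted while we fault in more of the
		 * buffer, so a conflicting locking request isn't stuck
		 * behind us; this sets and then clears HIF_MAY_DEMOTE.
		 */
		gfs2_holder_allow_demote(gh);
		/* assumed to return the number of bytes not faulted in */
		left = fault_in_iov_iter_writable(to, iov_iter_count(to));
		gfs2_holder_disallow_demote(gh);
		if (left == iov_iter_count(to))		/* nothing faulted in */
			break;

		/*
		 * Almost always we still hold the glock and simply
		 * resume the read; otherwise return a short result.
		 */
		if (!gfs2_holder_queued(gh))
			goto out;
	}

	gfs2_glock_dq(gh);
out:
	return read ? read : ret;
}

The point of the sketch is the window between allow_demote and
disallow_demote: that is the only place where a conflicting request can
take the glock away, and afterwards we check whether that happened
rather than deciding up front, which is exactly where this differs from
the cond_resched_lock() model.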