Re: [PATCH] mm/swap: piggyback lru_add_drain_all() calls

On 04/10/2019 15.27, Michal Hocko wrote:
> On Fri 04-10-19 05:10:17, Matthew Wilcox wrote:
>> On Fri, Oct 04, 2019 at 01:11:06PM +0300, Konstantin Khlebnikov wrote:
>>> This is a very slow operation. There is no reason to do it again if
>>> somebody else has already drained all per-cpu vectors after we waited
>>> for the lock.
>>> +	seq = raw_read_seqcount_latch(&seqcount);
>>> +
>>>   	mutex_lock(&lock);
>>> +
>>> +	/* Piggyback on drain done by somebody else. */
>>> +	if (__read_seqcount_retry(&seqcount, seq))
>>> +		goto done;
>>> +
>>> +	raw_write_seqcount_latch(&seqcount);
>>> +
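For context, here is roughly how that hunk sits in lru_add_drain_all(). This is a sketch only, not the actual patch: the static declarations are assumed and the per-cpu drain work itself is elided.

/* Sketch: seqcount piggybacking in lru_add_drain_all().
 * Names follow the hunk above; the drain loop is abbreviated. */
static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
static DEFINE_MUTEX(lock);

void lru_add_drain_all(void)
{
	int seq;

	/* Snapshot the sequence before taking the mutex. */
	seq = raw_read_seqcount_latch(&seqcount);

	mutex_lock(&lock);

	/* Piggyback on a drain done by somebody else: if the sequence
	 * moved while we waited, a full drain started after our
	 * snapshot and completed before we got the mutex, so anything
	 * queued before our call has already been flushed. */
	if (__read_seqcount_retry(&seqcount, seq))
		goto done;

	/* Bump the sequence before draining so that waiters queued
	 * behind us can piggyback on this drain. */
	raw_write_seqcount_latch(&seqcount);

	/* ... queue and flush lru_add_drain_per_cpu() work ... */

done:
	mutex_unlock(&lock);
}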

>> Do we really need the seqcount to do this?  Wouldn't a mutex_trylock()
>> have the same effect?
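The trylock variant would presumably look something like this (again just a sketch): if the mutex is contended, bail out and let the current holder's drain stand in for ours.

/* Sketch of the mutex_trylock() alternative. */
void lru_add_drain_all(void)
{
	/* Somebody else is draining right now; do not wait. */
	if (!mutex_trylock(&lock))
		return;

	/* ... queue and flush lru_add_drain_per_cpu() work ... */

	mutex_unlock(&lock);
}

That is, on contention the caller returns immediately instead of waiting for the other drain to finish.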

> Yeah, this makes sense. From a correctness point of view it should be ok
> because no caller can expect that per-cpu pvecs are empty on return.
> This might have some runtime effects in that some paths might retry
> more - e.g. the offlining path drains per-cpu pvecs before migrating
> the range away; if there are pages still waiting for a worker to drain
> them, then the migration would fail and we would retry. But this is not
> a correctness issue.


A caller might expect that the pages it added beforehand are drained.
Exiting after a failed mutex_trylock() would not guarantee that.

For example, POSIX_FADV_DONTNEED relies on this.
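Roughly (a sketch of the ordering, from memory of mm/fadvise.c; not real code):

/*
 * When the first invalidation pass misses pages, POSIX_FADV_DONTNEED
 * retries with, roughly:
 *
 *     lru_add_drain_all();
 *     invalidate_mapping_pages(mapping, start_index, end_index);
 *
 * With mutex_trylock() the following ordering breaks it:
 *
 *   fadvise caller                     other task
 *   --------------                     ----------
 *                                      mutex_lock(&lock)
 *                                      begins draining pvecs
 *   a page destined for the LRU is
 *   still sitting in a per-cpu pvec
 *   lru_add_drain_all():
 *     mutex_trylock() fails -> return
 *                                      drain finishes, but it began
 *                                      before the page was queued, so
 *                                      the page is still in the pvec
 *   invalidate_mapping_pages() cannot
 *   drop the page - it is not on the
 *   LRU yet
 *
 * With the seqcount scheme the caller instead waits on the mutex and
 * skips its own drain only if a complete drain started after its
 * snapshot, which is guaranteed to cover the caller's pages.
 */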



