On Mon, Mar 10, 2025 at 11:36:52PM +0000, Roman Gushchin wrote:
> On Mon, Mar 10, 2025 at 04:15:06PM -0700, Shakeel Butt wrote:
> > On Mon, Mar 10, 2025 at 03:39:21PM -0700, Andrew Morton wrote:
> > > On Mon, 10 Mar 2025 10:23:09 -0700 SeongJae Park <sj@xxxxxxxxxx> wrote:
> > >
> > > > It is unclear if such use case
> > > > is common and the inefficiency is significant.
> > >
> > > Well, we could conduct a survey,
> > >
> > > Can you add some logging to detect when userspace performs such an
> > > madvise() call, then run that kernel on some "typical" machines which
> > > are running "typical" workloads?  That should give us a feeling for how
> > > often userspace does this, and hence will help us understand the usefulness
> > > of this patchset.
> >
> > Just for the clarification, this patchset is very useful for the
> > process_madvise() and the experiment results show that.
>
> +1
>
> Google carried an internal version for a vectorized madvise() which
> was much faster than process_madvise() last time I measured it.
> I hope SJ's patchset will (partially) address this difference,
> which will hopefully allow to drop the internal implementation
> for process_madvise.

Relatedly, I also feel that, at some point, we ought to remove the
UIO_FASTIOV limit on process_madvise(). But that's one for a future
series...
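
For anyone following along who hasn't used the interface, below is a
minimal sketch of the batched calling pattern being discussed: many
ranges in a target process advised with a single process_madvise()
call. The pid and addresses are placeholders, it assumes a 5.10+ kernel
and headers that define MADV_COLD and the SYS_* numbers, and it invokes
the syscalls directly rather than relying on libc wrappers. The vlen
argument (number of iovec entries per call) is what the limit mentioned
above bounds.

/*
 * Sketch only: placeholder pid and addresses, minimal error handling.
 * Needs ptrace-level access to the target plus CAP_SYS_NICE.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	pid_t pid = 1234;	/* placeholder target pid */

	int pidfd = syscall(SYS_pidfd_open, pid, 0);
	if (pidfd < 0) {
		perror("pidfd_open");
		return EXIT_FAILURE;
	}

	/* One iovec per virtual address range in the *target* process. */
	struct iovec ranges[] = {
		{ .iov_base = (void *)0x7f0000000000, .iov_len = 1 << 20 },
		{ .iov_base = (void *)0x7f0000200000, .iov_len = 1 << 20 },
	};

	/* Apply MADV_COLD to every range with a single syscall. */
	ssize_t ret = syscall(SYS_process_madvise, pidfd, ranges,
			      sizeof(ranges) / sizeof(ranges[0]),
			      MADV_COLD, 0);
	if (ret < 0)
		perror("process_madvise");
	else
		printf("advised %zd bytes\n", ret);

	close(pidfd);
	return 0;
}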