On Thu, 12 Mar 2020 11:07:59 +0100 SeongJae Park <sjpark@xxxxxxxxxx> wrote:

> On Tue, 10 Mar 2020 10:21:34 -0700 Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
>
> > On Mon, Feb 24, 2020 at 4:31 AM SeongJae Park <sjpark@xxxxxxxxxx> wrote:
> > >
> > > From: SeongJae Park <sjpark@xxxxxxxxx>
> > >
> > > Introduction
> > > ============
> > > [...]
> >
> > I do want to question the actual motivation of the design followed by
> > this work.
> >
> > With the already present Page Idle Tracking feature in the kernel, I
> > can envision that the region sampling and adaptive region adjustments
> > can be done in user space. Due to sampling, the additional overhead
> > will be very small and configurable.
> >
> > Additionally, the proposed mechanism has an inherent assumption of the
> > presence of spatial locality (for virtual memory) in the monitored
> > processes, which is very workload dependent.
> >
> > Given that the same mechanism can be implemented in user space within
> > tolerable overhead and is workload dependent, why should it be done in
> > the kernel? What exactly is the advantage of implementing this in the
> > kernel?
>
> First of all, DAMON is not only for user space processes, but also for
> kernel space core mechanisms. Many of the core mechanisms will be able
> to use DAMON for access-pattern-based optimizations, with light overhead
> and reasonable accuracy.
>
> Implementing DAMON in user space is of course possible, but it would be
> inefficient. A user space implementation could not serve kernel space
> users, and it would incur unnecessarily frequent kernel-user context
> switches, which are very expensive nowadays.

I forgot to mention the spatial locality assumption. Yes, it is workload
dependent, but it still holds for many workloads. Also, many core
mechanisms in the kernel, such as readahead or the LRU lists, already
rely on similar assumptions.

If the assumption is really problematic for a given workload, you could
set the maximum number of regions to the number of pages in the system,
so that each region monitors exactly one page.


Thanks,
SeongJae Park

>
>
> Thanks,
> SeongJae Park
>
> >
> > thanks,
> > Shakeel
> >
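
For reference, the Idle Page Tracking interface Shakeel refers to is exposed
to user space at /sys/kernel/mm/page_idle/bitmap, paired with
/proc/<pid>/pagemap for virtual-to-physical translation. Below is a minimal,
illustrative sketch of the user-space monitoring approach he describes: it
marks the page backing one virtual address of a target process idle, waits a
sampling interval, and checks whether the page was accessed. It assumes
CONFIG_IDLE_PAGE_TRACKING and root privileges, and is a sketch of the general
technique, not code from the patchset.

/*
 * Check whether one page of a target process was accessed during a
 * one-second sampling interval, using Idle Page Tracking.
 * Usage: ./idlecheck <pid> <vaddr>
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

#define PAGEMAP_PRESENT	(1ULL << 63)
#define PAGEMAP_PFN_MASK	((1ULL << 55) - 1)	/* bits 0-54 */

/* Translate a virtual address of @pid to a PFN via /proc/<pid>/pagemap. */
static uint64_t virt_to_pfn(pid_t pid, uint64_t vaddr)
{
	char path[64];
	uint64_t entry = 0;
	int fd;

	snprintf(path, sizeof(path), "/proc/%d/pagemap", pid);
	fd = open(path, O_RDONLY);
	if (fd < 0)
		return 0;
	if (pread(fd, &entry, sizeof(entry),
		  (vaddr / sysconf(_SC_PAGESIZE)) * sizeof(entry)) !=
	    sizeof(entry))
		entry = 0;
	close(fd);
	if (!(entry & PAGEMAP_PRESENT))
		return 0;
	return entry & PAGEMAP_PFN_MASK;
}

int main(int argc, char *argv[])
{
	uint64_t pfn, bits = UINT64_MAX;
	int fd;

	if (argc != 3)
		return 1;
	pfn = virt_to_pfn(atoi(argv[1]), strtoull(argv[2], NULL, 0));
	if (!pfn)
		return 1;

	fd = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR);
	if (fd < 0)
		return 1;
	/* Each 64-bit word of the bitmap covers 64 consecutive PFNs. */
	pwrite(fd, &bits, sizeof(bits), (pfn / 64) * sizeof(bits)); /* mark idle */
	sleep(1);						    /* sample */
	pread(fd, &bits, sizeof(bits), (pfn / 64) * sizeof(bits));
	/* A still-set bit means the page stayed idle (was not accessed). */
	printf("page %s\n", bits & (1ULL << (pfn % 64)) ?
	       "idle (not accessed)" : "accessed");
	close(fd);
	return 0;
}

A real monitor would iterate this over every mapped page of the workload each
interval, which is exactly the per-page cost that DAMON's adaptive regions
are designed to avoid.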
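SeongJae's per-page fallback can also be made concrete. The sketch below
assumes the single attrs debugfs file that DAMON eventually exposed in
mainline, which takes the sampling, aggregation, and regions update intervals
(in microseconds) followed by the minimum and maximum number of regions; the
RFC under discussion may expose different files, so treat the path and format
here as assumptions rather than this patchset's exact interface.

/*
 * Illustrative only: set max_nr_regions to the number of physical page
 * frames, so the adaptive region mechanism can degenerate to per-page
 * monitoring, removing the spatial locality assumption at the cost of
 * per-page overhead.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	long nr_pages = sysconf(_SC_PHYS_PAGES);
	/* Path/format follow mainline DAMON debugfs; an assumption here. */
	FILE *f = fopen("/sys/kernel/debug/damon/attrs", "w");

	if (!f)
		return 1;
	/* 5ms sampling, 100ms aggregation, 1s region update, 10..nr_pages regions */
	fprintf(f, "5000 100000 1000000 10 %ld\n", nr_pages);
	return fclose(f) ? 1 : 0;
}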