Re: [PATCH v31 05/13] mm/damon: Implement primitives for the virtual memory address spaces

On Thu, Jun 24, 2021 at 8:21 AM SeongJae Park <sj38.park@xxxxxxxxx> wrote:
>
> From: SeongJae Park <sjpark@xxxxxxxxx>
>
> On Thu, 24 Jun 2021 07:42:44 -0700 Shakeel Butt <shakeelb@xxxxxxxxxx> wrote:
>
> > On Thu, Jun 24, 2021 at 3:26 AM SeongJae Park <sj38.park@xxxxxxxxx> wrote:
> > >
> > [...]
> > > > > +/*
> > > > > + * Get the three regions in the given target (task)
> > > > > + *
> > > > > + * Returns 0 on success, negative error code otherwise.
> > > > > + */
> > > > > +static int damon_va_three_regions(struct damon_target *t,
> > > > > +                               struct damon_addr_range regions[3])
> > > > > +{
> > > > > +       struct mm_struct *mm;
> > > > > +       int rc;
> > > > > +
> > > > > +       mm = damon_get_mm(t);
> > > > > +       if (!mm)
> > > > > +               return -EINVAL;
> > > > > +
> > > > > +       mmap_read_lock(mm);
> > > > > +       rc = __damon_va_three_regions(mm->mmap, regions);
> > > > > +       mmap_read_unlock(mm);
> > > >
> > > > This is being called for each target every second by default. Seems
> > > > too aggressive. Applications don't change their address space every
> > > > second. I would recommend defaulting ctx->primitive_update_interval
> > > > to a higher value.
> > >
> > > Good point.  If there are many targets and each target has a huge number
> > > of VMAs, the overhead could be high.  Nevertheless, I couldn't observe
> > > the overhead in my test setup.  Also, it seems some people have already
> > > started exploring the DAMON patchset with the default value and building
> > > usages on top of it.  Silently changing the default value could distract
> > > such people.  So, if you think it's ok, I'd like to change the default
> > > value only after someone finds the overhead in their usage and asks for
> > > a change.
> > >
> > > If you disagree, or if you find the overhead in your usage, please feel
> > > free to let me know.
> > >
> >
> > The mmap lock is a source of contention in real-world workloads. We
> > observe this in our fleet, and many others (like Facebook) complain
> > about this issue. This is the whole motivation behind SPF (speculative
> > page faults), the maple tree, and much other mmap lock scalability
> > work. I would be really careful about adding another source of
> > contention on the mmap lock. Yes, users can change this interval
> > themselves, but we should not burden them with internal knowledge like
> > "oh, if you observe high mmap contention you may want to increase this
> > specific interval". We should set a good default value to avoid such
> > situations (most of the time).
>
> Thank you for this nice clarification.  I can understand your concern
> because I also worked on an HTM-based solution to the scalability issue
> before.
>
> However, I have neither a strong preference for nor confidence in a new
> default value at the moment.  Could you please recommend one if you
> have one?
>

I would say go with a conservative value like 60 seconds. Though there
is no scientific reason behind this specific number, I think it would
be a good compromise. Applications usually don't change their address
space layout that often.
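
For concreteness, DAMON's intervals are specified in microseconds, so
60 seconds would be 60000000. Below is a minimal, untested sketch of
what I have in mind, assuming the damon_set_attrs() signature from this
patchset (sample, aggregation, and primitive update intervals, then the
region count bounds); please double-check the parameter order against
your tree:

/*
 * Sketch only: keep the current sampling/aggregation defaults but
 * rediscover the three regions once per minute instead of every
 * second. All intervals below are in microseconds, and the
 * damon_set_attrs() call reflects my reading of this series.
 */
static int damon_set_conservative_attrs(struct damon_ctx *ctx)
{
	unsigned long sample_interval = 5000;		/* 5 ms */
	unsigned long aggr_interval = 100000;		/* 100 ms */
	unsigned long primitive_upd_interval = 60000000; /* 60 s */

	return damon_set_attrs(ctx, sample_interval, aggr_interval,
			primitive_upd_interval, 10, 1000);
}

With that, the mmap_read_lock() in damon_va_three_regions() above would
be taken once per minute per target rather than once per second, while
sampling and aggregation behavior stays unchanged.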



