Re: [PATCH v2] workingset: ensure memcg is valid for recency check


On Fri, Aug 18, 2023 at 11:44:45AM -0700, Yosry Ahmed wrote:
> On Fri, Aug 18, 2023 at 11:35 AM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> >
> > On Fri, Aug 18, 2023 at 10:45:56AM -0700, Yosry Ahmed wrote:
> > > On Fri, Aug 18, 2023 at 10:35 AM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> > > > On Fri, Aug 18, 2023 at 07:56:37AM -0700, Yosry Ahmed wrote:
> > > > > If this happens, it seems possible for the following to occur:
> > > > >
> > > > > cpu #1                                  cpu#2
> > > > >                                              css_put()
> > > > >                                              /* css_free_rwork_fn is queued */
> > > > > rcu_read_lock()
> > > > > mem_cgroup_from_id()
> > > > >                                              mem_cgroup_id_remove()
> > > > > /* access memcg */
> > > >
> > > > I don't quite see how that'd be possible. IDR uses rcu_assign_pointer()
> > > > during deletion, which inserts the necessary barriering. My
> > > > understanding is that this should always be safe:
> > > >
> > > >   rcu_read_lock()                 (writer serialization, in this case ref count == 0)
> > > >   foo = idr_find(x)               idr_remove(x)
> > > >   if (foo)                        kfree_rcu(foo)
> > > >     LOAD(foo->bar)
> > > >   rcu_read_unlock()
> > >
> > > How does a barrier inside IDR removal protect against the memcg being
> > > freed here though?
> > >
> > > If css_put() is executed out-of-order before mem_cgroup_id_remove(),
> > > the memcg can be freed even before mem_cgroup_id_remove() is called,
> > > right?
> >
> > css_put() can start earlier, but the RCU callback that does the
> > freeing is not allowed to be reordered past the rcu_assign_pointer()
> > in idr_remove().
> >
> > This is what RCU and its access primitives guarantee. They ensure that
> > after "unpublishing" the pointer, all concurrent RCU-protected
> > accesses to the object have finished, and the memory can be freed.
> 
> I am not sure I understand; this is the scenario I mean:
> 
> cpu#1                      cpu#2                             cpu#3
> css_put()
> /* schedule free */
>                                 rcu_read_lock()
> idr_remove()
>                                mem_cgroup_from_id()
> 
> /* free memcg */
>                                /* use memcg */
> 
> If I understand correctly, you are saying that the scheduled free
> callback cannot run before idr_remove() due to the barrier in there,
> but it can run after the rcu_read_lock() on cpu #2 because it was
> scheduled before that RCU critical section started, right?
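
(For concreteness, here is the pattern being argued about above, written
out as a self-contained sketch. "struct foo", foo_idr, foo_read_bar() and
foo_destroy() are all made-up names, not the memcg code.)

#include <linux/idr.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical object published through an IDR and freed via RCU. */
struct foo {
	int bar;
	struct rcu_head rcu;
};

static DEFINE_IDR(foo_idr);

/* Reader side (analogous to mem_cgroup_from_id() on the refault path). */
int foo_read_bar(int id)
{
	struct foo *foo;
	int bar = -1;

	rcu_read_lock();
	foo = idr_find(&foo_idr, id);	   /* NULL once idr_remove() has run */
	if (foo)
		bar = READ_ONCE(foo->bar); /* free is deferred past this section */
	rcu_read_unlock();

	return bar;
}

/* Updater side, called once the last reference is gone. */
void foo_destroy(struct foo *foo, int id)
{
	idr_remove(&foo_idr, id);	/* unpublish; IDR updates are RCU-safe */
	kfree_rcu(foo, rcu);		/* actual free waits for a grace period */
}

As long as the reader only touches the object between rcu_read_lock() and
rcu_read_unlock() and the free goes through kfree_rcu()/call_rcu(), the
object cannot be freed under the reader, no matter how early the refcount
itself drops to zero.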

Isn't there a simpler explanation? The memcg whose id is stored in the
shadow entry has been freed, and a new memcg allocation is in flight
which, by chance, has acquired the same id but has not yet been fully
initialized. More specifically, the new memcg creation is between
css_alloc() and init_and_link_css() when a refault comes in for the
shadow entry holding that id.
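
To make the kind of validity check the subject line refers to concrete,
here is a sketch (only an illustration, not what the patch actually does;
recency_check_memcg_valid() and the memcgp out-parameter are made-up
names, while mem_cgroup_from_id(), mem_cgroup_online() and css_tryget()
are the existing primitives):

#include <linux/memcontrol.h>

/*
 * Sketch only: treat the shadow entry as stale unless the memcg behind
 * its id can be pinned and is fully online.
 */
static bool recency_check_memcg_valid(unsigned short memcg_id,
				      struct mem_cgroup **memcgp)
{
	struct mem_cgroup *memcg;

	rcu_read_lock();
	memcg = mem_cgroup_from_id(memcg_id);	/* may be NULL or half-built */
	if (!memcg || !mem_cgroup_online(memcg) ||
	    !css_tryget(&memcg->css)) {
		rcu_read_unlock();
		return false;
	}
	rcu_read_unlock();

	*memcgp = memcg;	/* caller drops the ref with css_put() */
	return true;
}

With something along those lines, a shadow entry whose id has been
recycled into a not-yet-initialized memcg would simply be treated as
stale instead of being dereferenced.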
