Re: [PATCH v16 08/11] secretmem: add memcg accounting

On Tue 26-01-21 10:56:54, Mike Rapoport wrote:
> On Tue, Jan 26, 2021 at 08:31:42AM +0100, Michal Hocko wrote:
> > On Mon 25-01-21 23:38:17, Mike Rapoport wrote:
> > > On Mon, Jan 25, 2021 at 05:54:51PM +0100, Michal Hocko wrote:
> > > > On Thu 21-01-21 14:27:20, Mike Rapoport wrote:
> > > > > From: Mike Rapoport <rppt@xxxxxxxxxxxxx>
> > > > > 
> > > > > Account memory consumed by secretmem to memcg. The accounting is updated
> > > > > when the memory is actually allocated and freed.
> > > > 
> > > > What does this mean?
> > > 
> > > That means that the accounting is updated when secretmem does cma_alloc()
> > > and cma_release().
> > > 
> > > > What are the lifetime rules?
> > > 
> > > Hmm, what do you mean by lifetime rules?
> > 
> > OK, so let's start with reservation time (mmap time, right?) and then the
> > instantiation time (faulting the memory in). What if the calling process of
> > the former has a different memcg context than the latter? E.g. when you
> > send your fd to another process, or an fd inherited over fork ends up in a
> > different memcg.
> > 
> > What about the freeing path? E.g. when you punch a hole in the middle of
> > a mapping?
> > 
> > Please make sure to document all this.
>  
> So, does something like this answer your question:
> 
> ---
> The memory cgroup is charged when secretmem allocates pages from CMA to
> increase the large page pool during ->fault() processing.

OK, so that is when the memory is faulted in. Good, that is a standard
model we have. The memcg context of the creator of the secret memory is
not really important, so whoever created the mapping is not charged.
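
For illustration, a minimal sketch of that charge-on-fault step, under the
assumption that a hypothetical secretmem_cma area backs the pool and that the
generic kmem charging helpers are used. secretmem_pool_increase(),
SECRETMEM_ORDER and secretmem_cma below are made-up names for the sketch, not
the actual patch:

#include <linux/cma.h>
#include <linux/memcontrol.h>
#include <linux/mm.h>

#define SECRETMEM_ORDER		(PMD_SHIFT - PAGE_SHIFT)	/* one 2M area */
#define SECRETMEM_NR_PAGES	(1UL << SECRETMEM_ORDER)

/* hypothetical CMA area backing the secretmem page pool */
static struct cma *secretmem_cma;

static struct page *secretmem_pool_increase(gfp_t gfp)
{
	struct page *page;

	/* take a 2M chunk from CMA while handling the fault */
	page = cma_alloc(secretmem_cma, SECRETMEM_NR_PAGES,
			 SECRETMEM_ORDER, gfp & __GFP_NOWARN);
	if (!page)
		return NULL;

	/* charge the whole chunk to the memcg of the faulting task */
	if (memcg_kmem_charge_page(page, gfp, SECRETMEM_ORDER)) {
		cma_release(secretmem_cma, page, SECRETMEM_NR_PAGES);
		return NULL;
	}

	return page;
}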

> The pages are uncharged from the memory cgroup when they are released back
> to CMA at the time the secretmem inode is evicted.
> ---

So effectively when they are unmapped, right? This is similar to
anonymous memory.

As I've said, it would be really great to have this life cycle documented
properly.
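
Under the same hypothetical names as the sketch above, the release side at
inode eviction would then simply undo the charge before handing the chunk
back to CMA:

/*
 * Sketch only: uncharge first, then return the (still owned) chunk
 * to the CMA area it came from.
 */
static void secretmem_pool_release(struct page *page)
{
	memcg_kmem_uncharge_page(page, SECRETMEM_ORDER);
	cma_release(secretmem_cma, page, SECRETMEM_NR_PAGES);
}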

> > Please note that this is all user visible stuff that will become a PITA
> > (if possible at all) to change later on. You should really have strong
> > arguments in your justification here.
> 
> I think that adding a dedicated counter for a few 2M areas per container is
> not worth the churn.

What kind of churn do you have in mind? What is the downside?

> When we get to the point that secretmem can be used to back the entire
> guest memory, we can add a new counter, and that does not seem like a PITA
> to me.

What really prevents a larger use with this implementation?

-- 
Michal Hocko
SUSE Labs


