Re: [PATCH V7 4/9] Add memcg kswapd thread pool

On Fri, Apr 22, 2011 at 12:46 AM, KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
On Thu, 21 Apr 2011 23:10:58 -0700
Ying Han <yinghan@xxxxxxxxxx> wrote:

> On Thu, Apr 21, 2011 at 10:59 PM, KAMEZAWA Hiroyuki <
> kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
>
> > On Thu, 21 Apr 2011 22:53:19 -0700
> > Ying Han <yinghan@xxxxxxxxxx> wrote:
> >
> > > On Thu, Apr 21, 2011 at 10:00 PM, KAMEZAWA Hiroyuki <
> > > kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> > >
> > > > On Thu, 21 Apr 2011 21:49:04 -0700
> > > > Ying Han <yinghan@xxxxxxxxxx> wrote:
> > > >
> > > > > On Thu, Apr 21, 2011 at 9:36 PM, KAMEZAWA Hiroyuki <
> > > > > kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:

> > We can add counters for kswapd-scan and kswapd-reclaim; a kswapd-pickup
> > counter will show you that information, and if necessary it's good to show
> > some latency stats as well. I think we can add enough information by adding
> > stats (or by debugging with perf tools). I'll consider this a bit more.
> >
>
> Something like "kswapd_pgscan" and "kswapd_steal" per memcg? If we are
> going with the thread pool, we definitely need to add more stats to give
> us enough visibility into per-memcg background reclaim activity. I am
> still not sure about the cpu cycles, though.
>

BTW, Kosaki requested that I not make a private thread pool implementation
and use a workqueue instead. I think he is right. So, I'd like to write a
patch that enhances the workqueue so it can be used for memcg. (Of course,
I'll make a private workqueue.)

Hmm. Can you give a bit more detail on the reasoning behind that, and on what
the private workqueue would look like? Also, how do we plan to solve the
debug-ability issue?
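
Just so I understand the direction, is it roughly something like the sketch
below? (This is only a sketch to make sure we are talking about the same
thing; memcg_kswapd_wq, memcg_bgreclaim_func() and the kswapd_work field are
made-up names, not existing code.)

static struct workqueue_struct *memcg_kswapd_wq;

struct mem_cgroup {
	/* ... existing fields ... */
	struct work_struct kswapd_work;		/* per-memcg reclaim work */
};

/* at memcg creation: INIT_WORK(&memcg->kswapd_work, memcg_bgreclaim_func); */
static void memcg_bgreclaim_func(struct work_struct *work)
{
	struct mem_cgroup *memcg =
		container_of(work, struct mem_cgroup, kswapd_work);

	/* reclaim until this memcg is back under its high watermark */
}

static int __init memcg_kswapd_init(void)
{
	/*
	 * WQ_UNBOUND: workers are not pinned to a CPU, the scheduler
	 * decides where they run.  WQ_MEM_RECLAIM: a rescuer thread is
	 * kept around so reclaim can make progress under memory pressure.
	 * max_active = 0 means the default, so several memcgs can be
	 * reclaimed concurrently instead of queueing behind one another.
	 */
	memcg_kswapd_wq = alloc_workqueue("memcg_kswapd",
					  WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
	return memcg_kswapd_wq ? 0 : -ENOMEM;
}

/* called when a memcg crosses its low watermark */
static void memcg_wakeup_kswapd(struct mem_cgroup *memcg)
{
	queue_work(memcg_kswapd_wq, &memcg->kswapd_work);
}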
  

==
2. Regarding the alternative workqueue: it is more complicated, and we need
to be very careful with the work items in the workqueue. We've seen cases
where one work item gets stuck and the rest of the work items can't proceed.
For example, in dirty page writeback, one heavy-writer cgroup could starve
the other cgroups from flushing dirty pages to the same disk. In the kswapd
case, I can imagine we might hit a similar scenario. How to prioritize the
work items is another problem: the order in which work items are added to the
queue dictates the order in which cgroups get reclaimed. We don't have that
restriction currently; we rely on the cpu scheduler to put kswapd on the
right cpu core to run. We "might" introduce priorities for reclaim later, and
how are we going to deal with that?
==

From this, I feel I need to use an unbound workqueue. BTW, with the patches
for the current thread pool model, I don't think the starvation problem
caused by dirty pages can be seen.
Anyway, I'll give it a try.

Then do you suggest that I wait for your patch before posting my next version?
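
Also, on the debug-ability side, what I had in mind is roughly the sketch
below (the enum and function names are made up, just to illustrate the kind
of per-memcg counters we could export through memory.stat):

enum {
	MEMCG_KSWAPD_PGSCAN,	/* pages scanned by per-memcg background reclaim */
	MEMCG_KSWAPD_STEAL,	/* pages actually reclaimed */
	MEMCG_KSWAPD_NSTATS,
};

/* would live inside struct mem_cgroup; percpu in a real patch */
struct memcg_kswapd_stat {
	atomic_long_t count[MEMCG_KSWAPD_NSTATS];
};

/* bump a counter from the background reclaim path */
static void memcg_kswapd_stat_add(struct memcg_kswapd_stat *stat,
				  int idx, unsigned long nr)
{
	atomic_long_add(nr, &stat->count[idx]);
}

Exporting these through memory.stat would at least give us scan/steal
visibility per memcg, even without cpu-cycle accounting.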

--Ying 

Thanks,
-Kame






