On 2013/7/12 0:22, Michal Hocko wrote:
> On Thu 11-07-13 08:44:08, Tejun Heo wrote:
>> Hello, Michal.
>>
>> On Thu, Jul 11, 2013 at 11:33:00AM +0200, Michal Hocko wrote:
>>> +static inline
>>> +struct mem_cgroup *vmpressure_to_mem_cgroup(struct vmpressure *vmpr)
>>> +{
>>> +	return container_of(vmpr, struct mem_cgroup, vmpressure);
>>> +}
>>> +
>>> +void vmpressure_pin_memcg(struct vmpressure *vmpr)
>>> +{
>>> +	struct mem_cgroup *memcg = vmpressure_to_mem_cgroup(vmpr);
>>> +
>>> +	css_get(&memcg->css);
>>> +}
>>> +
>>> +void vmpressure_unpin_memcg(struct vmpressure *vmpr)
>>> +{
>>> +	struct mem_cgroup *memcg = vmpressure_to_mem_cgroup(vmpr);
>>> +
>>> +	css_put(&memcg->css);
>>> +}
>>
>> So, while this *should* work, can't we just cancel/flush the work item
>> from offline?
>
> I would rather not put vmpressure clean up code into memcg offlining.
> We have reference counting for exactly this purpose, so it feels strange
> to overcome it like that.

I'd agree with Tejun here.  Asynchronous cleanup should be avoided when it
isn't necessary, and the resulting change would be simpler.  There is already
a vmpressure_init() call in mem_cgroup_css_alloc(), so doing the matching
vmpressure cleanup from the memcg side does not seem out of place.
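
For what it's worth, a minimal sketch of what the synchronous variant could
look like, assuming vmpressure keeps its deferred work in vmpressure->work as
it does today; the helper name and the exact call site are my guesses, not an
actual patch:

/* mm/vmpressure.c -- hypothetical helper, name is my invention */
#include <linux/workqueue.h>
#include <linux/vmpressure.h>

/*
 * Make sure no vmpressure work is still pending or running once the
 * memcg goes away; pairs with vmpressure_init() done at css_alloc time.
 */
void vmpressure_cleanup(struct vmpressure *vmpr)
{
	flush_work(&vmpr->work);
}

The memcg offline (or free) path would then just call
vmpressure_cleanup(&memcg->vmpressure), and the work item no longer needs to
hold a css reference at all, so the pin/unpin helpers above go away.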