Re: [HEADSUP] conflicts between cgroup/for-3.12 and memcg

On Thu 08-08-13 20:34:02, Tejun Heo wrote:
> Hello, Stephen, Andrew.
> 
> I just applied a rather invasive API update to cgroup/for-3.12, which
> led to conflicts in two files - include/net/netprio_cgroup.h and
> mm/memcontrol.c.  The former is a trivial context conflict and the two
> conflicting changes are independent.  The latter contains several
> conflicts and unfortunately isn't trivial, especially the iterator
> update, and the memcg patches should probably be rebased.
> 
> I can hold back pushing for-3.12 into for-next until the memcg patches
> are rebased.  Would that work?

I have just tried to merge cgroups/for-3.12 into my memcg tree and there
were indeed some conflicts. They are attached for reference. The
resolution is trivial: I've just picked HEAD, as all the conflicts are
for code that was added or removed in mmotm.
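
For reference, a minimal sketch of reproducing that resolution with git
(the local branch and remote names below are made up); with the default
merge strategy, -X ours resolves only the conflicting hunks in favour of
HEAD while still taking the non-conflicting changes from the other side:

	# hypothetical local branch and remote names
	git checkout memcg-devel
	git fetch tj-cgroups
	git merge -X ours tj-cgroups/for-3.12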

Andrew, let me know if you need any help with rebasing.

HTH
-- 
Michal Hocko
SUSE Labs
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b73988a..c208154 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -182,6 +182,29 @@ struct mem_cgroup_per_node {
 	struct mem_cgroup_per_zone zoneinfo[MAX_NR_ZONES];
 };
 
+<<<<<<< HEAD
+=======
+/*
+ * Cgroups above their limits are maintained in a RB-Tree, independent of
+ * their hierarchy representation
+ */
+
+struct mem_cgroup_tree_per_zone {
+	struct rb_root rb_root;
+	spinlock_t lock;
+};
+
+struct mem_cgroup_tree_per_node {
+	struct mem_cgroup_tree_per_zone rb_tree_per_zone[MAX_NR_ZONES];
+};
+
+struct mem_cgroup_tree {
+	struct mem_cgroup_tree_per_node *rb_tree_per_node[MAX_NUMNODES];
+};
+
+static struct mem_cgroup_tree soft_limit_tree __read_mostly;
+
+>>>>>>> tj-cgroups/for-3.12
 struct mem_cgroup_threshold {
 	struct eventfd_ctx *eventfd;
 	u64 threshold;
@@ -255,7 +278,10 @@ struct mem_cgroup {
 
 	bool		oom_lock;
 	atomic_t	under_oom;
+<<<<<<< HEAD
 	atomic_t	oom_wakeups;
+=======
+>>>>>>> tj-cgroups/for-3.12
 
 	int	swappiness;
 	/* OOM-Killer disable */
@@ -323,6 +349,7 @@ struct mem_cgroup {
 	 */
 	spinlock_t soft_lock;
 
+<<<<<<< HEAD
 	/*
 	 * If true then this group has increased parents' children_in_excess
 	 * when it got over the soft limit.
@@ -334,6 +361,8 @@ struct mem_cgroup {
 	/* Number of children that are in soft limit excess */
 	atomic_t children_in_excess;
 
+=======
+>>>>>>> tj-cgroups/for-3.12
 	struct mem_cgroup_per_node *nodeinfo[0];
 	/* WARNING: nodeinfo must be the last member here */
 };
@@ -3573,9 +3602,15 @@ __memcg_kmem_newpage_charge(gfp_t gfp, struct mem_cgroup **_memcg, int order)
 	 * the page allocator. Therefore, the following sequence when backed by
 	 * the SLUB allocator:
 	 *
+<<<<<<< HEAD
 	 *	memcg_stop_kmem_account();
 	 *	kmalloc(<large_number>)
 	 *	memcg_resume_kmem_account();
+=======
+	 * 	memcg_stop_kmem_account();
+	 * 	kmalloc(<large_number>)
+	 * 	memcg_resume_kmem_account();
+>>>>>>> tj-cgroups/for-3.12
 	 *
 	 * would effectively ignore the fact that we should skip accounting,
 	 * since it will drive us directly to this function without passing
