Patch "arch_topology: Avoid use-after-free for scale_freq_data" has been added to the 5.13-stable tree

This is a note to let you know that I've just added the patch titled

    arch_topology: Avoid use-after-free for scale_freq_data

to the 5.13-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     arch_topology-avoid-use-after-free-for-scale_freq_da.patch
and it can be found in the queue-5.13 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit d083cd4cdeaccc6a4ce181ee8db39cecb501a54e
Author: Viresh Kumar <viresh.kumar@xxxxxxxxxx>
Date:   Tue Jun 15 14:27:50 2021 +0530

    arch_topology: Avoid use-after-free for scale_freq_data
    
    [ Upstream commit 83150f5d05f065fb5c12c612f119015cabdcc124 ]
    
    Currently, topology_scale_freq_tick() (which gets called from
    scheduler_tick()) may end up using a pointer to "struct
    scale_freq_data" that was previously cleared by
    topology_clear_scale_freq_source(), as there is no protection in place
    here. The users of topology_clear_scale_freq_source(), though, need a
    guarantee that the previously cleared scale_freq_data isn't used
    anymore, so they can free the related resources.
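
    Reduced to its essentials, the race looks roughly like this (a
    simplified sketch; cpufreq_teardown() and its kfree()'d data are a
    hypothetical stand-in for a real user of the API, only the
    topology_*() helpers and sft_data come from the actual code):

        /* Tick path, before this patch: plain per-CPU pointer read. */
        void topology_scale_freq_tick(void)
        {
                struct scale_freq_data *sfd = *this_cpu_ptr(&sft_data);

                if (sfd)                        /* pointer loaded here ...   */
                        sfd->set_freq_scale();  /* ... dereferenced later on */
        }

        /* Hypothetical teardown path running concurrently on another CPU. */
        void cpufreq_teardown(struct scale_freq_data *sfd,
                              const struct cpumask *cpus)
        {
                topology_clear_scale_freq_source(sfd->source, cpus);
                kfree(sfd);     /* may free while the tick still uses sfd */
        }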
    
    Since topology_scale_freq_tick() is called from the scheduler tick, we
    don't want to add locking there. Use the RCU update mechanism instead
    (which is already used by the scheduler's utilization update path) to
    guarantee race-free updates here.
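
    Stripped of the arch_topology specifics, this is the standard RCU
    pointer-publish pattern; a self-contained sketch (my_data, my_ptr,
    my_reader and my_clear are illustrative names, not from the patch):

        #include <linux/rcupdate.h>
        #include <linux/slab.h>

        struct my_data {
                void (*cb)(void);
        };

        static struct my_data __rcu *my_ptr;

        /* Reader: runs with preemption disabled (e.g. from the tick). */
        static void my_reader(void)
        {
                struct my_data *d = rcu_dereference_sched(my_ptr);

                if (d)
                        d->cb();
        }

        /*
         * Updater: unpublish, wait for readers, then free.  Assumes the
         * caller serializes concurrent updaters.
         */
        static void my_clear(void)
        {
                struct my_data *d = rcu_dereference_protected(my_ptr, true);

                rcu_assign_pointer(my_ptr, NULL);
                synchronize_rcu();      /* no reader can still hold 'd' */
                kfree(d);
        }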
    
    synchronize_rcu() makes sure that all RCU read-side critical sections
    that started before it was called will have finished before it
    returns, so the callers of topology_clear_scale_freq_source() don't
    need to worry about their callback getting called anymore.
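
    With that guarantee, a user of this interface can tear down and free
    its data safely. A hypothetical caller sketch (my_sfd,
    my_set_freq_scale and the init/exit functions are made up; only the
    topology_*() calls, SCALE_FREQ_SOURCE_ARCH and struct scale_freq_data
    come from the real interface):

        #include <linux/arch_topology.h>
        #include <linux/cpumask.h>
        #include <linux/slab.h>

        static struct scale_freq_data *my_sfd;

        static void my_set_freq_scale(void)
        {
                /* Read the counters and update the frequency scale here. */
        }

        static int my_counters_init(const struct cpumask *cpus)
        {
                my_sfd = kzalloc(sizeof(*my_sfd), GFP_KERNEL);
                if (!my_sfd)
                        return -ENOMEM;

                my_sfd->source = SCALE_FREQ_SOURCE_ARCH;
                my_sfd->set_freq_scale = my_set_freq_scale;
                topology_set_scale_freq_source(my_sfd, cpus);
                return 0;
        }

        static void my_counters_exit(const struct cpumask *cpus)
        {
                /* Returns only after every in-flight tick is done with my_sfd. */
                topology_clear_scale_freq_source(SCALE_FREQ_SOURCE_ARCH, cpus);
                kfree(my_sfd);
        }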
    
    Cc: Paul E. McKenney <paulmck@xxxxxxxxxx>
    Fixes: 01e055c120a4 ("arch_topology: Allow multiple entities to provide sched_freq_tick() callback")
    Tested-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
    Reviewed-by: Ionela Voinescu <ionela.voinescu@xxxxxxx>
    Tested-by: Qian Cai <quic_qiancai@xxxxxxxxxxx>
    Signed-off-by: Viresh Kumar <viresh.kumar@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
index c1179edc0f3b..921312a8d957 100644
--- a/drivers/base/arch_topology.c
+++ b/drivers/base/arch_topology.c
@@ -18,10 +18,11 @@
 #include <linux/cpumask.h>
 #include <linux/init.h>
 #include <linux/percpu.h>
+#include <linux/rcupdate.h>
 #include <linux/sched.h>
 #include <linux/smp.h>
 
-static DEFINE_PER_CPU(struct scale_freq_data *, sft_data);
+static DEFINE_PER_CPU(struct scale_freq_data __rcu *, sft_data);
 static struct cpumask scale_freq_counters_mask;
 static bool scale_freq_invariant;
 
@@ -66,16 +67,20 @@ void topology_set_scale_freq_source(struct scale_freq_data *data,
 	if (cpumask_empty(&scale_freq_counters_mask))
 		scale_freq_invariant = topology_scale_freq_invariant();
 
+	rcu_read_lock();
+
 	for_each_cpu(cpu, cpus) {
-		sfd = per_cpu(sft_data, cpu);
+		sfd = rcu_dereference(*per_cpu_ptr(&sft_data, cpu));
 
 		/* Use ARCH provided counters whenever possible */
 		if (!sfd || sfd->source != SCALE_FREQ_SOURCE_ARCH) {
-			per_cpu(sft_data, cpu) = data;
+			rcu_assign_pointer(per_cpu(sft_data, cpu), data);
 			cpumask_set_cpu(cpu, &scale_freq_counters_mask);
 		}
 	}
 
+	rcu_read_unlock();
+
 	update_scale_freq_invariant(true);
 }
 EXPORT_SYMBOL_GPL(topology_set_scale_freq_source);
@@ -86,22 +91,32 @@ void topology_clear_scale_freq_source(enum scale_freq_source source,
 	struct scale_freq_data *sfd;
 	int cpu;
 
+	rcu_read_lock();
+
 	for_each_cpu(cpu, cpus) {
-		sfd = per_cpu(sft_data, cpu);
+		sfd = rcu_dereference(*per_cpu_ptr(&sft_data, cpu));
 
 		if (sfd && sfd->source == source) {
-			per_cpu(sft_data, cpu) = NULL;
+			rcu_assign_pointer(per_cpu(sft_data, cpu), NULL);
 			cpumask_clear_cpu(cpu, &scale_freq_counters_mask);
 		}
 	}
 
+	rcu_read_unlock();
+
+	/*
+	 * Make sure all references to previous sft_data are dropped to avoid
+	 * use-after-free races.
+	 */
+	synchronize_rcu();
+
 	update_scale_freq_invariant(false);
 }
 EXPORT_SYMBOL_GPL(topology_clear_scale_freq_source);
 
 void topology_scale_freq_tick(void)
 {
-	struct scale_freq_data *sfd = *this_cpu_ptr(&sft_data);
+	struct scale_freq_data *sfd = rcu_dereference_sched(*this_cpu_ptr(&sft_data));
 
 	if (sfd)
 		sfd->set_freq_scale();


