Hi,
I have a query about the calls to synchronize_rcu() and synchronize_sched() in the _cpu_down() path. Here is the snippet:
/*
* By now we've cleared cpu_active_mask, wait for all preempt-disabled
* and RCU users of this state to go away such that all new such users
* will observe it.
*
* For CONFIG_PREEMPT we have preemptible RCU and its sync_rcu() might
* not imply sync_sched(), so explicitly call both.
*
* Do sync before park smpboot threads to take care the rcu boost case.
*/
#ifdef CONFIG_PREEMPT
	synchronize_sched();
#endif
	synchronize_rcu();
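
As I understand it, synchronize_sched() waits for all in-flight preempt-disabled regions, while synchronize_rcu() waits for all rcu_read_lock() read-side critical sections, which is why the comment says both are needed under CONFIG_PREEMPT. A minimal sketch of the two kinds of readers this code waits out (hypothetical reader code, not from this path; use_cpu() is a made-up helper):

/* Reader type 1: preempt-disabled region -- synchronize_sched() waits for these */
preempt_disable();
if (cpu_active(cpu))	/* may still observe the CPU as active */
	use_cpu(cpu);	/* hypothetical helper */
preempt_enable();

/* Reader type 2: RCU read-side critical section -- synchronize_rcu() waits for these */
rcu_read_lock();
if (cpu_active(cpu))
	use_cpu(cpu);
rcu_read_unlock();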
My query is this:
During the cpu_down path, the tasks on this CPU are migrated to another CPU. As per the LWN article at http://lwn.net/Articles/253651/:
"For example, suppose that a task calls rcu_read_lock() on one CPU, is preempted, resumes on another CPU, and then calls rcu_read_unlock(). The first CPU's counter will then be +1 and the second CPU's counter will be -1, however, they will still sum to zero. Regardless of possible preemption, when the sum of the old counter elements does go to zero, it is safe to move to the next grace-period stage, as shown on the right-hand side of the above figure."
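
In other words, a sketch of the scenario the article describes (the comments are my paraphrase, not from the article):

rcu_read_lock();	/* running on CPU 0: CPU 0's counter goes to +1 */
/* ... the task is preempted here and resumes on CPU 1 ... */
rcu_read_unlock();	/* running on CPU 1: CPU 1's counter goes to -1 */
/* the per-CPU counters sum to zero, so the grace period can advance */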
From what I understand: since we migrate the tasks to another CPU, the read-side critical sections will complete on the other CPU where each task resumes.
So why call synchronize_rcu()/synchronize_sched() and wait in the cpu_down path, when there are no write operations happening in this code?
Thanks and regards,
Vignesh Radhakrishnan