On 04/16/2015 09:31 PM, Chai Wen wrote:
> On 04/15/2015 03:37 AM, Chris Metcalf wrote:
>> +/*
>> + * The cpumask is the mask of possible cpus that the watchdog can run
>> + * on, not the mask of cpus it is actually running on. This allows the
>> + * user to specify a mask that will include cpus that have not yet
>> + * been brought online, if desired.
>> + */
>> +int proc_watchdog_cpumask(struct ctl_table *table, int write,
>> +                          void __user *buffer, size_t *lenp, loff_t *ppos)
>> +{
>> +        int err;
>> +
>> +        mutex_lock(&watchdog_proc_mutex);
>> +        err = proc_do_large_bitmap(table, write, buffer, lenp, ppos);
>> +        if (!err && write) {
>> +                /* Remove impossible cpus to keep sysctl output cleaner. */
>> +                cpumask_and(watchdog_cpumask, watchdog_cpumask,
>> +                            cpu_possible_mask);
>> +
>> +                if (watchdog_enabled && watchdog_thresh)
>
> If the new mask is the same as the current one, then there is no need to go on?
> cpus_equal(watchdog_cpumask, watchdog_cpumask_for_smpboot), or something else?
It's a minor optimization, though, since the
smpboot_update_cpumask_percpu_thread() function will do some cpumask
calls, realize that nothing has changed, and return without doing
anything anyway.

In any case, with Frederic's recent suggestion, we won't have a
watchdog_cpumask_for_smpboot variable exposed anyway.
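
For what it's worth, against this version of the patch the early-out you
suggest would look roughly like the sketch below.  It's untested; I'm using
cpumask_equal() rather than the older cpus_equal() since these are cpumask
pointers, and with Frederic's change the watchdog_cpumask_for_smpboot
comparison target goes away entirely:

        if (!err && write) {
                /* Remove impossible cpus to keep sysctl output cleaner. */
                cpumask_and(watchdog_cpumask, watchdog_cpumask,
                            cpu_possible_mask);

                /*
                 * Hypothetical early-out: skip the smpboot update when the
                 * effective mask is unchanged from what smpboot already has.
                 */
                if (watchdog_enabled && watchdog_thresh &&
                    !cpumask_equal(watchdog_cpumask,
                                   watchdog_cpumask_for_smpboot))
                        smpboot_update_cpumask_percpu_thread(&watchdog_threads,
                                                             watchdog_cpumask);
        }
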
>> +                        smpboot_update_cpumask_percpu_thread(&watchdog_threads,
>> +                                                             watchdog_cpumask);
>> +        }
>> +        mutex_unlock(&watchdog_proc_mutex);
>> +        return err;
>> +}
>> +
>> #endif /* CONFIG_SYSCTL */
>>
>> void __init lockup_detector_init(void)
>> {
>>         set_sample_period();
>>
>> +        /* One cpumask is allocated for smpboot to own. */
>> +        alloc_cpumask_var(&watchdog_cpumask_for_smpboot, GFP_KERNEL);
>
> alloc_cpumask_var could fail?
Good catch; if I get a failure I'll just return early without trying to
start the watchdog, since clearly things are too memory-constrained
to enable that functionality anyway.
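
Concretely, something along these lines (just a sketch; the early return is
what I described above, but the pr_err() message is only a placeholder I'm
making up here, not necessarily what the next version will say):

void __init lockup_detector_init(void)
{
        set_sample_period();

        /* One cpumask is allocated for smpboot to own. */
        if (!alloc_cpumask_var(&watchdog_cpumask_for_smpboot, GFP_KERNEL)) {
                /* Too memory-constrained to run the watchdog; bail out. */
                pr_err("watchdog: failed to allocate cpumask, not starting\n");
                return;
        }

        /* ... continue starting up the watchdog as before ... */
}
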
Thanks!
--
Chris Metcalf, EZChip Semiconductor
http://www.ezchip.com