[PATCH] Documentation: Update CPU hotplug and move it to core-api

The current CPU hotplug documentation is outdated. While updating it to
what we currently have, I partly rewrote it and moved it to the sphinx
format.

Cc: Jonathan Corbet <corbet@xxxxxxx>
Cc: Mauro Carvalho Chehab <mchehab@xxxxxxxxxx>
Cc: Rusty Russell <rusty@xxxxxxxxxxxxxxx>
Cc: Srivatsa Vaddagiri <vatsa@xxxxxxxxxx>
Cc: Ashok Raj <ashok.raj@xxxxxxxxx>
Cc: Joel Schopp <jschopp@xxxxxxxxxxxxxx>
Cc: linux-doc@xxxxxxxxxxxxxxx
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
---
 Documentation/core-api/cpu_hotplug.rst | 372 +++++++++++++++++++++++++++
 Documentation/core-api/index.rst       |   1 +
 Documentation/cpu-hotplug.txt          | 452 ---------------------------------
 3 files changed, 373 insertions(+), 452 deletions(-)
 create mode 100644 Documentation/core-api/cpu_hotplug.rst
 delete mode 100644 Documentation/cpu-hotplug.txt

diff --git a/Documentation/core-api/cpu_hotplug.rst b/Documentation/core-api/cpu_hotplug.rst
new file mode 100644
index 000000000000..4a50ab7817f7
--- /dev/null
+++ b/Documentation/core-api/cpu_hotplug.rst
@@ -0,0 +1,372 @@
+=========================
+CPU hotplug in the Kernel
+=========================
+
+:Date: December, 2016
+:Author: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>,
+          Rusty Russell <rusty@xxxxxxxxxxxxxxx>,
+          Srivatsa Vaddagiri <vatsa@xxxxxxxxxx>,
+          Ashok Raj <ashok.raj@xxxxxxxxx>,
+          Joel Schopp <jschopp@xxxxxxxxxxxxxx>
+
+Introduction
+============
+
+Modern advances in system architectures have introduced advanced error
+reporting and correction capabilities in processors. There are a couple of
+OEMs that support NUMA hardware which is hot pluggable as well, where
+physical node insertion and removal require support for CPU hotplug.
+
+Such advances require CPUs available to a kernel to be removed either for
+provisioning reasons, or for RAS purposes to keep an offending CPU off the
+system execution path. Hence the need for CPU hotplug support in the
+Linux kernel.
+
+A more novel use of CPU-hotplug support is its use today in suspend resume
+support for SMP. Dual-core and HT support makes even a laptop run SMP
+kernels, which previously did not support these methods.
+
+
+Command Line Switches
+=====================
+``maxcpus=n``
+  Restrict boot time CPUs to *n*. Say if you have four CPUs, using
+  ``maxcpus=2`` will only boot two. You can choose to bring the
+  other CPUs online later.
+
+``nr_cpus=n``
+  Restrict the total amount of CPUs the kernel will support. If the number
+  supplied here is lower than the number of physically available CPUs, then
+  those CPUs can not be brought online later.
+
+``additional_cpus=n``
+  Use this to limit hotpluggable CPUs. This option sets
+  ``cpu_possible_mask = cpu_present_mask + additional_cpus``
+
+  This option is limited to the IA64 architecture.
+
+``possible_cpus=n``
+  This option sets ``possible_cpus`` bits in ``cpu_possible_mask``.
+
+  This option is limited to the X86 and S390 architectures.
+
+``cede_offline={"off","on"}``
+  Use this option to disable/enable putting offlined processors to an extended
+  ``H_CEDE`` state on supported pseries platforms. If nothing is specified,
+  ``cede_offline`` is set to "on".
+
+  This option is limited to the PowerPC architecture.
+
+``cpu0_hotplug``
+  Allows CPU0 to be shut down.
+
+  This option is limited to the X86 architecture.
+
+CPU maps
+========
+
+``cpu_possible_mask``
+  Bitmap of possible CPUs that can ever be available in the
+  system. This is used to allocate some boot time memory for per_cpu variables
+  that aren't designed to grow/shrink as CPUs are made available or removed.
+  Once set during the boot time discovery phase, the map is static, i.e. no bits
+  are added or removed anytime. Trimming it accurately for your system needs
+  upfront can save some boot time memory.
+
+``cpu_online_mask``
+  Bitmap of all CPUs currently online. It's set in ``__cpu_up()``
+  after a CPU is available for kernel scheduling and ready to receive
+  interrupts from devices. It's cleared when a CPU is brought down using
+  ``__cpu_disable()``, before which all OS services including interrupts are
+  migrated to another target CPU.
+
+``cpu_present_mask``
+  Bitmap of CPUs currently present in the system. Not all
+  of them may be online. When physical hotplug is processed by the relevant
+  subsystem (e.g. ACPI), this map can change: a bit is either set or cleared
+  depending on whether the event is a hot-add or a hot-remove. There are
+  currently no locking rules. Typical usage is to init topology during boot,
+  at which time hotplug is disabled.
+
+You really don't need to manipulate any of the system CPU maps. They should
+be read-only for most use. When setting up per-cpu resources almost always use
+``cpu_possible_mask`` or ``for_each_possible_cpu()`` to iterate. The macro
+``for_each_cpu()`` can be used to iterate over a custom CPU mask.
+
+Never use anything other than ``cpumask_t`` to represent a bitmap of CPUs.
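+
+For instance, per-CPU memory which has to cover every CPU that might ever
+show up can be initialized like this (``my_counter`` and ``init_counters()``
+are made-up names for illustration): ::
+
+  #include <linux/cpumask.h>
+  #include <linux/percpu.h>
+
+  static DEFINE_PER_CPU(int, my_counter);    /* hypothetical per-CPU data */
+
+  static void init_counters(void)
+  {
+          unsigned int cpu;
+
+          /* cover every CPU that may ever be brought online */
+          for_each_possible_cpu(cpu)
+                  per_cpu(my_counter, cpu) = 0;
+  }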
+
+
+Using CPU hotplug
+=================
+The kernel option *CONFIG_HOTPLUG_CPU* needs to be enabled. It is currently
+available on multiple architectures including ARM, MIPS, PowerPC and X86. The
+configuration is done via the sysfs interface: ::
+
+ $ ls -lh /sys/devices/system/cpu
+ total 0
+ drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu0
+ drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu1
+ drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu2
+ drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu3
+ drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu4
+ drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu5
+ drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu6
+ drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu7
+ drwxr-xr-x  2 root root    0 Dec 21 16:33 hotplug
+ -r--r--r--  1 root root 4.0K Dec 21 16:33 offline
+ -r--r--r--  1 root root 4.0K Dec 21 16:33 online
+ -r--r--r--  1 root root 4.0K Dec 21 16:33 possible
+ -r--r--r--  1 root root 4.0K Dec 21 16:33 present
+
+The files *offline*, *online*, *possible*, *present* represent the CPU masks.
+Each CPU folder contains an *online* file which controls the logical on (1) and
+off (0) state. To logically shut down CPU4: ::
+
+ $ echo 0 > /sys/devices/system/cpu/cpu4/online
+  smpboot: CPU 4 is now offline
+
+Once the CPU is shutdown, it will be removed from */proc/interrupts* and
+*/proc/cpuinfo*, and it should no longer be visible in the *top* command. To
+bring CPU4 back online: ::
+
+ $ echo 1 > /sys/devices/system/cpu/cpu4/online
+ smpboot: Booting Node 0 Processor 4 APIC 0x1
+
+The CPU is usable again. This should work on all CPUs. CPU0 is often special
+and excluded from CPU hotplug. On X86 the kernel option
+*CONFIG_BOOTPARAM_HOTPLUG_CPU0* has to be enabled in order to be able to
+shutdown CPU0. Alternatively the kernel command line option *cpu0_hotplug* can
+be used. Some known dependencies of CPU0:
+
+* Resume from hibernate/suspend. Hibernate/suspend will fail if CPU0 is offline.
+* PIC interrupts. CPU0 can't be removed if a PIC interrupt is detected.
+
+Please let Fenghua Yu <fenghua.yu@xxxxxxxxx> know if you find any dependencies
+on CPU0.
+
+The CPU hotplug coordination
+============================
+
+The offline case
+----------------
+Once a CPU has been logically shut down, the teardown callbacks of registered
+hotplug states will be invoked, starting with ``CPUHP_ONLINE`` and terminating
+at state ``CPUHP_OFFLINE``. This includes:
+
+* If tasks are frozen due to a suspend operation then *cpuhp_tasks_frozen*
+  will be set to true.
+* All processes are migrated away from this outgoing CPU to new CPUs.
+  The new CPU is chosen from each process' current cpuset, which may be
+  a subset of all online CPUs.
+* All interrupts targeted to this CPU are migrated to a new CPU.
+* Timers are also migrated to a new CPU.
+* Once all services are migrated, the kernel calls an arch specific routine
+  ``__cpu_disable()`` to perform arch specific cleanup.
+
+Using the hotplug API
+---------------------
+It is possible to receive notifications once a CPU is offlined or onlined. This
+might be important to certain drivers which need to perform some kind of setup
+or cleanup based on the number of available CPUs: ::
+
+  #include <linux/cpuhotplug.h>
+
+  ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "X/Y:online",
+                          Y_online, Y_prepare_down);
+
+*X* is the subsystem and *Y* the particular driver. The *Y_online* callback
+will be invoked during registration on all online CPUs. If an error
+occurs during the online callback the *Y_prepare_down* callback will be
+invoked on all CPUs on which the online callback was previously invoked.
+After the registration has completed, the *Y_online* callback will be invoked
+once a CPU is brought online and *Y_prepare_down* will be invoked when a
+CPU is shut down. All resources which were previously allocated in
+*Y_online* should be released in *Y_prepare_down*.
+The return value *ret* is negative if an error occurred during the
+registration process. Otherwise a positive value is returned which
+contains the allocated hotplug state for dynamically allocated states
+(*CPUHP_AP_ONLINE_DYN*). It will return zero for predefined states.
+
+The callback can be removed by invoking ``cpuhp_remove_state()``. In case of a
+dynamically allocated state (*CPUHP_AP_ONLINE_DYN*) use the returned state.
+During the removal of a hotplug state the teardown callback will be invoked.
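+
+A more complete sketch, using the placeholder names from above and with the
+error handling trimmed to the essentials: ::
+
+  #include <linux/cpuhotplug.h>
+  #include <linux/init.h>
+
+  static enum cpuhp_state Y_hp_state;
+
+  static int Y_online(unsigned int cpu)
+  {
+          /* set up the per-CPU resources of Y for this CPU */
+          return 0;
+  }
+
+  static int Y_prepare_down(unsigned int cpu)
+  {
+          /* release what Y_online() set up for this CPU */
+          return 0;
+  }
+
+  static int __init Y_init(void)
+  {
+          int ret;
+
+          ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "X/Y:online",
+                                  Y_online, Y_prepare_down);
+          if (ret < 0)
+                  return ret;
+          /* remember the dynamically allocated state for the removal */
+          Y_hp_state = ret;
+          return 0;
+  }
+
+  static void __exit Y_exit(void)
+  {
+          /* invokes Y_prepare_down() on all online CPUs */
+          cpuhp_remove_state(Y_hp_state);
+  }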
+
+Multiple instances
+~~~~~~~~~~~~~~~~~~
+If a driver has multiple instances and each instance needs to perform the
+callback independently then it is likely that a *multi-state* should be used.
+First a multi-state needs to be registered: ::
+
+  ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "X/Y:online",
+                                Y_online, Y_prepare_down);
+  Y_hp_online = ret;
+
+``cpuhp_setup_state_multi()`` behaves similarly to ``cpuhp_setup_state()``
+except it prepares the callbacks for a multi state and does not invoke
+the callbacks. This is a one time setup.
+Once a new instance is allocated, you need to register this new instance: ::
+
+  ret = cpuhp_state_add_instance(Y_hp_online, &d->node);
+
+This function will add this instance to your previously allocated
+*Y_hp_online* state and invoke the previously registered callback
+(*Y_online*) on all online CPUs. The *node* element is a ``struct
+hlist_node`` member of your per-instance data structure.
+
+On removal of the instance: ::
+
+  cpuhp_state_remove_instance(Y_hp_online, &d->node)
+
+should be invoked which will invoke the teardown callback on all online
+CPUs.
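+
+Putting this together, the per-instance data could look like this. Note that
+for a multi-state the callbacks take the ``hlist_node`` as an additional
+argument (the ``struct Y_dev`` layout is made up for illustration): ::
+
+  #include <linux/cpuhotplug.h>
+  #include <linux/list.h>
+
+  struct Y_dev {                      /* hypothetical per-instance data */
+          struct hlist_node node;     /* handed to the *_instance() calls */
+          /* ... further per-instance members ... */
+  };
+
+  static int Y_online(unsigned int cpu, struct hlist_node *node)
+  {
+          struct Y_dev *d = hlist_entry(node, struct Y_dev, node);
+
+          /* set up the instance *d for the CPU which came online */
+          return 0;
+  }
+
+  static int Y_prepare_down(unsigned int cpu, struct hlist_node *node)
+  {
+          struct Y_dev *d = hlist_entry(node, struct Y_dev, node);
+
+          /* undo for *d what Y_online() did on this CPU */
+          return 0;
+  }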
+
+Manual setup
+~~~~~~~~~~~~
+Usually it is handy to invoke setup and teardown callbacks on registration or
+removal of a state because usually the operation needs to be performed once a
+CPU goes online (offline) and during initial setup (shutdown) of the driver.
+However each registration and removal function is also available with a
+``_nocalls`` suffix which does not invoke the provided callbacks, for the cases
+in which invoking the callbacks is not desired. During the manual setup (or
+teardown) the functions ``get_online_cpus()`` and ``put_online_cpus()`` should
+be used to inhibit CPU hotplug operations.
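+
+A sketch of such a manual setup, with ``Y_init_cpu()`` as a hypothetical
+helper of the driver: ::
+
+  unsigned int cpu;
+  int ret;
+
+  get_online_cpus();
+  /* initialize the already online CPUs by hand ... */
+  for_each_online_cpu(cpu)
+          Y_init_cpu(cpu);
+  /* ... and register the callbacks without having them invoked */
+  ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "X/Y:online",
+                                  Y_online, Y_prepare_down);
+  put_online_cpus();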
+
+
+The ordering of the events
+--------------------------
+The hotplug states are defined in ``include/linux/cpuhotplug.h``:
+
+* The states *CPUHP_OFFLINE* … *CPUHP_AP_OFFLINE* are invoked before the
+  CPU is up.
+* The states *CPUHP_AP_OFFLINE* … *CPUHP_AP_ONLINE* are invoked
+  just after the CPU has been brought up. The interrupts are off and
+  the scheduler is not yet active on this CPU. Starting with *CPUHP_AP_OFFLINE*
+  the callbacks are invoked on the target CPU.
+* The states between *CPUHP_AP_ONLINE_DYN* and *CPUHP_AP_ONLINE_DYN_END* are
+  reserved for the dynamic allocation.
+* The states are invoked in the reverse order on CPU shutdown, starting with
+  *CPUHP_ONLINE* and stopping at *CPUHP_OFFLINE*. Here the callbacks are
+  invoked on the CPU that will be shut down until *CPUHP_AP_OFFLINE* is
+  reached.
+
+A dynamically allocated state via *CPUHP_AP_ONLINE_DYN* is often enough.
+However if an earlier invocation during the bring up or shutdown is required
+then an explicit state should be acquired. An explicit state might also be
+required if the hotplug event requires specific ordering with respect to
+another hotplug event.
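+
+Registering at such an explicit state could look like this (the state
+*CPUHP_X_Y_PREPARE* and the callbacks *Y_prepare*/*Y_dead* are made-up
+names; the enum entry would first have to be added to ``enum cpuhp_state``
+in ``include/linux/cpuhotplug.h`` at the desired position): ::
+
+  ret = cpuhp_setup_state(CPUHP_X_Y_PREPARE, "X/Y:prepare",
+                          Y_prepare, Y_dead);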
+
+Testing of hotplug states
+=========================
+One way to verify whether a custom state is working as expected or not is to
+shut down a CPU and then put it online again. It is also possible to put the
+CPU into a certain state (for instance *CPUHP_AP_ONLINE*) and then go back to
+*CPUHP_ONLINE*. This would simulate an error one state after *CPUHP_AP_ONLINE*
+which would lead to rollback to the online state.
+
+All registered states are enumerated in ``/sys/devices/system/cpu/hotplug/states``: ::
+
+ $ tail /sys/devices/system/cpu/hotplug/states
+ 138: mm/vmscan:online
+ 139: mm/vmstat:online
+ 140: lib/percpu_cnt:online
+ 141: acpi/cpu-drv:online
+ 142: base/cacheinfo:online
+ 143: virtio/net:online
+ 144: x86/mce:online
+ 145: printk:online
+ 168: sched:active
+ 169: online
+
+To roll back CPU4 to ``lib/percpu_cnt:online`` and back online just issue: ::
+
+  $ cat /sys/devices/system/cpu/cpu4/hotplug/state
+  169
+  $ echo 140 > /sys/devices/system/cpu/cpu4/hotplug/target
+  $ cat /sys/devices/system/cpu/cpu4/hotplug/state
+  140
+
+It is important to note that the teardown callback of state 140 has been
+invoked. And now get back online: ::
+
+  $ echo 169 > /sys/devices/system/cpu/cpu4/hotplug/target
+  $ cat /sys/devices/system/cpu/cpu4/hotplug/state
+  169
+
+With trace events enabled, the individual steps are visible, too: ::
+
+  #  TASK-PID   CPU#    TIMESTAMP  FUNCTION
+  #     | |       |        |         |
+      bash-394  [001]  22.976: cpuhp_enter: cpu: 0004 target: 140 step: 169 (cpuhp_kick_ap_work)
+   cpuhp/4-31   [004]  22.977: cpuhp_enter: cpu: 0004 target: 140 step: 168 (sched_cpu_deactivate)
+   cpuhp/4-31   [004]  22.990: cpuhp_exit:  cpu: 0004  state: 168 step: 168 ret: 0
+   cpuhp/4-31   [004]  22.991: cpuhp_enter: cpu: 0004 target: 140 step: 144 (mce_cpu_pre_down)
+   cpuhp/4-31   [004]  22.992: cpuhp_exit:  cpu: 0004  state: 144 step: 144 ret: 0
+   cpuhp/4-31   [004]  22.993: cpuhp_multi_enter: cpu: 0004 target: 140 step: 143 (virtnet_cpu_down_prep)
+   cpuhp/4-31   [004]  22.994: cpuhp_exit:  cpu: 0004  state: 143 step: 143 ret: 0
+   cpuhp/4-31   [004]  22.995: cpuhp_enter: cpu: 0004 target: 140 step: 142 (cacheinfo_cpu_pre_down)
+   cpuhp/4-31   [004]  22.996: cpuhp_exit:  cpu: 0004  state: 142 step: 142 ret: 0
+      bash-394  [001]  22.997: cpuhp_exit:  cpu: 0004  state: 140 step: 169 ret: 0
+      bash-394  [005]  95.540: cpuhp_enter: cpu: 0004 target: 169 step: 140 (cpuhp_kick_ap_work)
+   cpuhp/4-31   [004]  95.541: cpuhp_enter: cpu: 0004 target: 169 step: 141 (acpi_soft_cpu_online)
+   cpuhp/4-31   [004]  95.542: cpuhp_exit:  cpu: 0004  state: 141 step: 141 ret: 0
+   cpuhp/4-31   [004]  95.543: cpuhp_enter: cpu: 0004 target: 169 step: 142 (cacheinfo_cpu_online)
+   cpuhp/4-31   [004]  95.544: cpuhp_exit:  cpu: 0004  state: 142 step: 142 ret: 0
+   cpuhp/4-31   [004]  95.545: cpuhp_multi_enter: cpu: 0004 target: 169 step: 143 (virtnet_cpu_online)
+   cpuhp/4-31   [004]  95.546: cpuhp_exit:  cpu: 0004  state: 143 step: 143 ret: 0
+   cpuhp/4-31   [004]  95.547: cpuhp_enter: cpu: 0004 target: 169 step: 144 (mce_cpu_online)
+   cpuhp/4-31   [004]  95.548: cpuhp_exit:  cpu: 0004  state: 144 step: 144 ret: 0
+   cpuhp/4-31   [004]  95.549: cpuhp_enter: cpu: 0004 target: 169 step: 145 (console_cpu_notify)
+   cpuhp/4-31   [004]  95.550: cpuhp_exit:  cpu: 0004  state: 145 step: 145 ret: 0
+   cpuhp/4-31   [004]  95.551: cpuhp_enter: cpu: 0004 target: 169 step: 168 (sched_cpu_activate)
+   cpuhp/4-31   [004]  95.552: cpuhp_exit:  cpu: 0004  state: 168 step: 168 ret: 0
+      bash-394  [005]  95.553: cpuhp_exit:  cpu: 0004  state: 169 step: 140 ret: 0
+
+As can be seen, CPU4 went down until timestamp 22.996 and then back up until
+95.552. All invoked callbacks including their return codes are visible in the
+trace.
+
+Architecture's requirements
+===========================
+The following functions and configurations are required:
+
+``CONFIG_HOTPLUG_CPU``
+  This entry needs to be enabled in Kconfig
+
+``__cpu_up()``
+  Arch interface to bring up a CPU
+
+``__cpu_disable()``
+  Arch interface to shutdown a CPU; no more interrupts can be handled by the
+  kernel after the routine returns. This includes the shutdown of the timer.
+
+``__cpu_die()``
+  This actually is supposed to ensure the death of the CPU. Look at some
+  example code in other architectures that implement CPU hotplug. The
+  processor is taken down from the ``idle()`` loop for that specific
+  architecture. ``__cpu_die()`` typically waits for some per_cpu state to be
+  set, to ensure the processor dead routine has really been invoked.
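+
+As a rough sketch, the prototypes of these hooks as the architectures
+currently implement them look like this: ::
+
+  /* brings up a CPU; idle is the idle thread to run on the new CPU */
+  int __cpu_up(unsigned int cpu, struct task_struct *idle);
+
+  /* runs on the CPU going down; stops interrupt delivery to this CPU */
+  int __cpu_disable(void);
+
+  /* runs on a surviving CPU; waits until the dead CPU really stopped */
+  void __cpu_die(unsigned int cpu);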
+
+User Space Notification
+=======================
+After a CPU has been successfully onlined or offlined, udev events are sent. A
+udev rule like: ::
+
+  SUBSYSTEM=="cpu", DRIVERS=="processor", DEVPATH=="/devices/system/cpu/*", RUN+="the_hotplug_receiver.sh"
+
+will receive all events. A script like: ::
+
+  #!/bin/sh
+
+  if [ "${ACTION}" = "offline" ]
+  then
+      echo "CPU ${DEVPATH##*/} offline"
+
+  elif [ "${ACTION}" = "online" ]
+  then
+      echo "CPU ${DEVPATH##*/} online"
+
+  fi
+
+can process the event further.
+
+Kernel Inline Documentation Reference
+======================================
+
+.. kernel-doc:: include/linux/cpuhotplug.h
diff --git a/Documentation/core-api/index.rst b/Documentation/core-api/index.rst
index 2872ca1a52f1..0d93d8089136 100644
--- a/Documentation/core-api/index.rst
+++ b/Documentation/core-api/index.rst
@@ -13,6 +13,7 @@ Core utilities
 
    assoc_array
    atomic_ops
+   cpu_hotplug
    local_ops
    workqueue
 
diff --git a/Documentation/cpu-hotplug.txt b/Documentation/cpu-hotplug.txt
deleted file mode 100644
index d02e8a451872..000000000000
--- a/Documentation/cpu-hotplug.txt
+++ /dev/null
@@ -1,452 +0,0 @@
-		CPU hotplug Support in Linux(tm) Kernel
-
-		Maintainers:
-		CPU Hotplug Core:
-			Rusty Russell <rusty@xxxxxxxxxxxxxxx>
-			Srivatsa Vaddagiri <vatsa@xxxxxxxxxx>
-		i386:
-			Zwane Mwaikambo <zwanem@xxxxxxxxx>
-		ppc64:
-			Nathan Lynch <nathanl@xxxxxxxxxxxxxx>
-			Joel Schopp <jschopp@xxxxxxxxxxxxxx>
-		ia64/x86_64:
-			Ashok Raj <ashok.raj@xxxxxxxxx>
-		s390:
-			Heiko Carstens <heiko.carstens@xxxxxxxxxx>
-
-Authors: Ashok Raj <ashok.raj@xxxxxxxxx>
-Lots of feedback: Nathan Lynch <nathanl@xxxxxxxxxxxxxx>,
-	     Joel Schopp <jschopp@xxxxxxxxxxxxxx>
-
-Introduction
-
-Modern advances in system architectures have introduced advanced error
-reporting and correction capabilities in processors. CPU architectures permit
-partitioning support, where compute resources of a single CPU could be made
-available to virtual machine environments. There are couple OEMS that
-support NUMA hardware which are hot pluggable as well, where physical
-node insertion and removal require support for CPU hotplug.
-
-Such advances require CPUs available to a kernel to be removed either for
-provisioning reasons, or for RAS purposes to keep an offending CPU off
-system execution path. Hence the need for CPU hotplug support in the
-Linux kernel.
-
-A more novel use of CPU-hotplug support is its use today in suspend
-resume support for SMP. Dual-core and HT support makes even
-a laptop run SMP kernels which didn't support these methods. SMP support
-for suspend/resume is a work in progress.
-
-General Stuff about CPU Hotplug
---------------------------------
-
-Command Line Switches
----------------------
-maxcpus=n    Restrict boot time cpus to n. Say if you have 4 cpus, using
-             maxcpus=2 will only boot 2. You can choose to bring the
-             other cpus later online, read FAQ's for more info.
-
-additional_cpus=n (*)	Use this to limit hotpluggable cpus. This option sets
-  			cpu_possible_mask = cpu_present_mask + additional_cpus
-
-cede_offline={"off","on"}  Use this option to disable/enable putting offlined
-		            processors to an extended H_CEDE state on
-			    supported pseries platforms.
-			    If nothing is specified,
-			    cede_offline is set to "on".
-
-(*) Option valid only for following architectures
-- ia64
-
-ia64 uses the number of disabled local apics in ACPI tables MADT to
-determine the number of potentially hot-pluggable cpus. The implementation
-should only rely on this to count the # of cpus, but *MUST* not rely
-on the apicid values in those tables for disabled apics. In the event
-BIOS doesn't mark such hot-pluggable cpus as disabled entries, one could
-use this parameter "additional_cpus=x" to represent those cpus in the
-cpu_possible_mask.
-
-possible_cpus=n		[s390,x86_64] use this to set hotpluggable cpus.
-			This option sets possible_cpus bits in
-			cpu_possible_mask. Thus keeping the numbers of bits set
-			constant even if the machine gets rebooted.
-
-CPU maps and such
------------------
-[More on cpumaps and primitive to manipulate, please check
-include/linux/cpumask.h that has more descriptive text.]
-
-cpu_possible_mask: Bitmap of possible CPUs that can ever be available in the
-system. This is used to allocate some boot time memory for per_cpu variables
-that aren't designed to grow/shrink as CPUs are made available or removed.
-Once set during boot time discovery phase, the map is static, i.e no bits
-are added or removed anytime.  Trimming it accurately for your system needs
-upfront can save some boot time memory. See below for how we use heuristics
-in x86_64 case to keep this under check.
-
-cpu_online_mask: Bitmap of all CPUs currently online. It's set in __cpu_up()
-after a CPU is available for kernel scheduling and ready to receive
-interrupts from devices. It's cleared when a CPU is brought down using
-__cpu_disable(), before which all OS services including interrupts are
-migrated to another target CPU.
-
-cpu_present_mask: Bitmap of CPUs currently present in the system. Not all
-of them may be online. When physical hotplug is processed by the relevant
-subsystem (e.g ACPI) can change and new bit either be added or removed
-from the map depending on the event is hot-add/hot-remove. There are currently
-no locking rules as of now. Typical usage is to init topology during boot,
-at which time hotplug is disabled.
-
-You really dont need to manipulate any of the system cpu maps. They should
-be read-only for most use. When setting up per-cpu resources almost always use
-cpu_possible_mask/for_each_possible_cpu() to iterate.
-
-Never use anything other than cpumask_t to represent bitmap of CPUs.
-
-	#include <linux/cpumask.h>
-
-	for_each_possible_cpu     - Iterate over cpu_possible_mask
-	for_each_online_cpu       - Iterate over cpu_online_mask
-	for_each_present_cpu      - Iterate over cpu_present_mask
-	for_each_cpu(x,mask)      - Iterate over some random collection of cpu mask.
-
-	#include <linux/cpu.h>
-	get_online_cpus() and put_online_cpus():
-
-The above calls are used to inhibit cpu hotplug operations. While the
-cpu_hotplug.refcount is non zero, the cpu_online_mask will not change.
-If you merely need to avoid cpus going away, you could also use
-preempt_disable() and preempt_enable() for those sections.
-Just remember the critical section cannot call any
-function that can sleep or schedule this process away. The preempt_disable()
-will work as long as stop_machine_run() is used to take a cpu down.
-
-CPU Hotplug - Frequently Asked Questions.
-
-Q: How to enable my kernel to support CPU hotplug?
-A: When doing make defconfig, Enable CPU hotplug support
-
-   "Processor type and Features" -> Support for Hotpluggable CPUs
-
-Make sure that you have CONFIG_SMP turned on as well.
-
-You would need to enable CONFIG_HOTPLUG_CPU for SMP suspend/resume support
-as well.
-
-Q: What architectures support CPU hotplug?
-A: As of 2.6.14, the following architectures support CPU hotplug.
-
-i386 (Intel), ppc, ppc64, parisc, s390, ia64 and x86_64
-
-Q: How to test if hotplug is supported on the newly built kernel?
-A: You should now notice an entry in sysfs.
-
-Check if sysfs is mounted, using the "mount" command. You should notice
-an entry as shown below in the output.
-
-	....
-	none on /sys type sysfs (rw)
-	....
-
-If this is not mounted, do the following.
-
-	#mkdir /sys
-	#mount -t sysfs sys /sys
-
-Now you should see entries for all present cpu, the following is an example
-in a 8-way system.
-
-	#pwd
-	#/sys/devices/system/cpu
-	#ls -l
-	total 0
-	drwxr-xr-x  10 root root 0 Sep 19 07:44 .
-	drwxr-xr-x  13 root root 0 Sep 19 07:45 ..
-	drwxr-xr-x   3 root root 0 Sep 19 07:44 cpu0
-	drwxr-xr-x   3 root root 0 Sep 19 07:44 cpu1
-	drwxr-xr-x   3 root root 0 Sep 19 07:44 cpu2
-	drwxr-xr-x   3 root root 0 Sep 19 07:44 cpu3
-	drwxr-xr-x   3 root root 0 Sep 19 07:44 cpu4
-	drwxr-xr-x   3 root root 0 Sep 19 07:44 cpu5
-	drwxr-xr-x   3 root root 0 Sep 19 07:44 cpu6
-	drwxr-xr-x   3 root root 0 Sep 19 07:48 cpu7
-
-Under each directory you would find an "online" file which is the control
-file to logically online/offline a processor.
-
-Q: Does hot-add/hot-remove refer to physical add/remove of cpus?
-A: The usage of hot-add/remove may not be very consistently used in the code.
-CONFIG_HOTPLUG_CPU enables logical online/offline capability in the kernel.
-To support physical addition/removal, one would need some BIOS hooks and
-the platform should have something like an attention button in PCI hotplug.
-CONFIG_ACPI_HOTPLUG_CPU enables ACPI support for physical add/remove of CPUs.
-
-Q: How do I logically offline a CPU?
-A: Do the following.
-
-	#echo 0 > /sys/devices/system/cpu/cpuX/online
-
-Once the logical offline is successful, check
-
-	#cat /proc/interrupts
-
-You should now not see the CPU that you removed. Also online file will report
-the state as 0 when a CPU is offline and 1 when it's online.
-
-	#To display the current cpu state.
-	#cat /sys/devices/system/cpu/cpuX/online
-
-Q: Why can't I remove CPU0 on some systems?
-A: Some architectures may have some special dependency on a certain CPU.
-
-For e.g in IA64 platforms we have ability to send platform interrupts to the
-OS. a.k.a Corrected Platform Error Interrupts (CPEI). In current ACPI
-specifications, we didn't have a way to change the target CPU. Hence if the
-current ACPI version doesn't support such re-direction, we disable that CPU
-by making it not-removable.
-
-In such cases you will also notice that the online file is missing under cpu0.
-
-Q: Is CPU0 removable on X86?
-A: Yes. If kernel is compiled with CONFIG_BOOTPARAM_HOTPLUG_CPU0=y, CPU0 is
-removable by default. Otherwise, CPU0 is also removable by kernel option
-cpu0_hotplug.
-
-But some features depend on CPU0. Two known dependencies are:
-
-1. Resume from hibernate/suspend depends on CPU0. Hibernate/suspend will fail if
-CPU0 is offline and you need to online CPU0 before hibernate/suspend can
-continue.
-2. PIC interrupts also depend on CPU0. CPU0 can't be removed if a PIC interrupt
-is detected.
-
-It's said poweroff/reboot may depend on CPU0 on some machines although I haven't
-seen any poweroff/reboot failure so far after CPU0 is offline on a few tested
-machines.
-
-Please let me know if you know or see any other dependencies of CPU0.
-
-If the dependencies are under your control, you can turn on CPU0 hotplug feature
-either by CONFIG_BOOTPARAM_HOTPLUG_CPU0 or by kernel parameter cpu0_hotplug.
-
---Fenghua Yu <fenghua.yu@xxxxxxxxx>
-
-Q: How do I find out if a particular CPU is not removable?
-A: Depending on the implementation, some architectures may show this by the
-absence of the "online" file. This is done if it can be determined ahead of
-time that this CPU cannot be removed.
-
-In some situations, this can be a run time check, i.e if you try to remove the
-last CPU, this will not be permitted. You can find such failures by
-investigating the return value of the "echo" command.
-
-Q: What happens when a CPU is being logically offlined?
-A: The following happen, listed in no particular order :-)
-
-- A notification is sent to in-kernel registered modules by sending an event
-  CPU_DOWN_PREPARE or CPU_DOWN_PREPARE_FROZEN, depending on whether or not the
-  CPU is being offlined while tasks are frozen due to a suspend operation in
-  progress
-- All processes are migrated away from this outgoing CPU to new CPUs.
-  The new CPU is chosen from each process' current cpuset, which may be
-  a subset of all online CPUs.
-- All interrupts targeted to this CPU are migrated to a new CPU
-- timers/bottom half/task lets are also migrated to a new CPU
-- Once all services are migrated, kernel calls an arch specific routine
-  __cpu_disable() to perform arch specific cleanup.
-- Once this is successful, an event for successful cleanup is sent by an event
-  CPU_DEAD (or CPU_DEAD_FROZEN if tasks are frozen due to a suspend while the
-  CPU is being offlined).
-
-  "It is expected that each service cleans up when the CPU_DOWN_PREPARE
-  notifier is called, when CPU_DEAD is called it's expected there is nothing
-  running on behalf of this CPU that was offlined"
-
-Q: If I have some kernel code that needs to be aware of CPU arrival and
-   departure, how to i arrange for proper notification?
-A: This is what you would need in your kernel code to receive notifications.
-
-	#include <linux/cpu.h>
-	static int foobar_cpu_callback(struct notifier_block *nfb,
-				       unsigned long action, void *hcpu)
-	{
-		unsigned int cpu = (unsigned long)hcpu;
-
-		switch (action) {
-		case CPU_ONLINE:
-		case CPU_ONLINE_FROZEN:
-			foobar_online_action(cpu);
-			break;
-		case CPU_DEAD:
-		case CPU_DEAD_FROZEN:
-			foobar_dead_action(cpu);
-			break;
-		}
-		return NOTIFY_OK;
-	}
-
-	static struct notifier_block foobar_cpu_notifier =
-	{
-	   .notifier_call = foobar_cpu_callback,
-	};
-
-You need to call register_cpu_notifier() from your init function.
-Init functions could be of two types:
-1. early init (init function called when only the boot processor is online).
-2. late init (init function called _after_ all the CPUs are online).
-
-For the first case, you should add the following to your init function
-
-	register_cpu_notifier(&foobar_cpu_notifier);
-
-For the second case, you should add the following to your init function
-
-	register_hotcpu_notifier(&foobar_cpu_notifier);
-
-You can fail PREPARE notifiers if something doesn't work to prepare resources.
-This will stop the activity and send a following CANCELED event back.
-
-CPU_DEAD should not be failed, its just a goodness indication, but bad
-things will happen if a notifier in path sent a BAD notify code.
-
-Q: I don't see my action being called for all CPUs already up and running?
-A: Yes, CPU notifiers are called only when new CPUs are on-lined or offlined.
-   If you need to perform some action for each CPU already in the system, then
-   do this:
-
-	for_each_online_cpu(i) {
-		foobar_cpu_callback(&foobar_cpu_notifier, CPU_UP_PREPARE, i);
-		foobar_cpu_callback(&foobar_cpu_notifier, CPU_ONLINE, i);
-	}
-
-   However, if you want to register a hotplug callback, as well as perform
-   some initialization for CPUs that are already online, then do this:
-
-   Version 1: (Correct)
-   ---------
-
-   	cpu_notifier_register_begin();
-
-		for_each_online_cpu(i) {
-			foobar_cpu_callback(&foobar_cpu_notifier,
-					    CPU_UP_PREPARE, i);
-			foobar_cpu_callback(&foobar_cpu_notifier,
-					    CPU_ONLINE, i);
-		}
-
-	/* Note the use of the double underscored version of the API */
-	__register_cpu_notifier(&foobar_cpu_notifier);
-
-	cpu_notifier_register_done();
-
-   Note that the following code is *NOT* the right way to achieve this,
-   because it is prone to an ABBA deadlock between the cpu_add_remove_lock
-   and the cpu_hotplug.lock.
-
-   Version 2: (Wrong!)
-   ---------
-
-	get_online_cpus();
-
-		for_each_online_cpu(i) {
-			foobar_cpu_callback(&foobar_cpu_notifier,
-					    CPU_UP_PREPARE, i);
-			foobar_cpu_callback(&foobar_cpu_notifier,
-					    CPU_ONLINE, i);
-		}
-
-	register_cpu_notifier(&foobar_cpu_notifier);
-
-	put_online_cpus();
-
-    So always use the first version shown above when you want to register
-    callbacks as well as initialize the already online CPUs.
-
-
-Q: If I would like to develop CPU hotplug support for a new architecture,
-   what do I need at a minimum?
-A: The following are what is required for CPU hotplug infrastructure to work
-   correctly.
-
-    - Make sure you have an entry in Kconfig to enable CONFIG_HOTPLUG_CPU
-    - __cpu_up()        - Arch interface to bring up a CPU
-    - __cpu_disable()   - Arch interface to shutdown a CPU, no more interrupts
-                          can be handled by the kernel after the routine
-                          returns. Including local APIC timers etc are
-                          shutdown.
-     - __cpu_die()      - This actually supposed to ensure death of the CPU.
-                          Actually look at some example code in other arch
-                          that implement CPU hotplug. The processor is taken
-                          down from the idle() loop for that specific
-                          architecture. __cpu_die() typically waits for some
-                          per_cpu state to be set, to ensure the processor
-                          dead routine is called to be sure positively.
-
-Q: I need to ensure that a particular CPU is not removed when there is some
-   work specific to this CPU in progress.
-A: There are two ways.  If your code can be run in interrupt context, use
-   smp_call_function_single(), otherwise use work_on_cpu().  Note that
-   work_on_cpu() is slow, and can fail due to out of memory:
-
-	int my_func_on_cpu(int cpu)
-	{
-		int err;
-		get_online_cpus();
-		if (!cpu_online(cpu))
-			err = -EINVAL;
-		else
-#if NEEDS_BLOCKING
-			err = work_on_cpu(cpu, __my_func_on_cpu, NULL);
-#else
-			smp_call_function_single(cpu, __my_func_on_cpu, &err,
-						 true);
-#endif
-		put_online_cpus();
-		return err;
-	}
-
-Q: How do we determine how many CPUs are available for hotplug.
-A: There is no clear spec defined way from ACPI that can give us that
-   information today. Based on some input from Natalie of Unisys,
-   that the ACPI MADT (Multiple APIC Description Tables) marks those possible
-   CPUs in a system with disabled status.
-
-   Andi implemented some simple heuristics that count the number of disabled
-   CPUs in MADT as hotpluggable CPUS.  In the case there are no disabled CPUS
-   we assume 1/2 the number of CPUs currently present can be hotplugged.
-
-   Caveat: ACPI MADT can only provide 256 entries in systems with only ACPI 2.0c
-   or earlier ACPI version supported, because the apicid field in MADT is only
-   8 bits. From ACPI 3.0, this limitation was removed since the apicid field
-   was extended to 32 bits with x2APIC introduced.
-
-User Space Notification
-
-Hotplug support for devices is common in Linux today. Its being used today to
-support automatic configuration of network, usb and pci devices. A hotplug
-event can be used to invoke an agent script to perform the configuration task.
-
-You can add /etc/hotplug/cpu.agent to handle hotplug notification user space
-scripts.
-
-	#!/bin/bash
-	# $Id: cpu.agent
-	# Kernel hotplug params include:
-	#ACTION=%s [online or offline]
-	#DEVPATH=%s
-	#
-	cd /etc/hotplug
-	. ./hotplug.functions
-
-	case $ACTION in
-		online)
-			echo `date` ":cpu.agent" add cpu >> /tmp/hotplug.txt
-			;;
-		offline)
-			echo `date` ":cpu.agent" remove cpu >>/tmp/hotplug.txt
-			;;
-		*)
-			debug_mesg CPU $ACTION event not supported
-        exit 1
-        ;;
-	esac
-- 
2.11.0
