The cpu_opv system call executes a vector of operations on behalf of user-space on a specific CPU with preemption disabled. It is inspired by the readv() and writev() system calls, which take a "struct iovec" array as argument.

The operations available are: comparison, memcpy, add, or, and, xor, left shift, right shift, and memory barrier. The system call receives a CPU number from user-space as argument, which is the CPU on which those operations need to be performed. All pointers in the ops must have been set up to point to the per-CPU memory of the CPU on which the operations should be executed. The "comparison" operation can be used to check that the data used in the preparation step did not change between preparation of system call inputs and operation execution within the preempt-off critical section.

The reason why we require all pointer offsets to be calculated by user-space beforehand is that we need to use get_user_pages_fast() to first pin all pages touched by each operation. This takes care of faulting-in the pages. Then, preemption is disabled, and the operations are performed atomically with respect to other thread execution on that CPU, without generating any page fault.

An overall maximum of 4216 bytes is enforced on the sum of operation lengths within an operation vector, so user-space cannot generate an overly long preempt-off critical section (cache-cold critical section duration measured as 4.7µs on x86-64). Each operation is also limited to a length of 4096 bytes, meaning that an operation can touch a maximum of 4 pages (memcpy: 2 pages for source, 2 pages for destination if addresses are not aligned on page boundaries).

If the thread is not running on the requested CPU, it is migrated to it.

**** Justification for cpu_opv ****

Here are a few reasons justifying why the cpu_opv system call is needed in addition to rseq:

1) Handling single-stepping from tools

Tools like debuggers and simulators use single-stepping to run through existing programs.
If core libraries start to use restartable sequences for e.g. memory allocation, this means pre-existing programs cannot be single-stepped, simply because the underlying glibc or jemalloc has changed.

The rseq user-space does expose a __rseq_table section for the sake of debuggers, so they can skip over the rseq critical sections if they want. However, this requires upgrading tools, and still breaks single-stepping in the case where glibc or jemalloc is updated, but not the tooling.

Having a performance-related library improvement break tooling is likely to cause a big push-back against wide adoption of rseq.

2) Forward-progress guarantee

Having a piece of user-space code that stops progressing due to external conditions is pretty bad. Developers are used to thinking in terms of fast-path and slow-path (e.g. for locking), where the contended and uncontended cases have different performance characteristics, but each needs to provide some level of progress guarantee.

There are concerns about proposing just "rseq" without the associated slow-path (cpu_opv) that guarantees progress. It's just asking for trouble when real life happens: page faults, uprobes, and other unforeseen conditions could cause a rseq fast-path to never progress.

3) Handling page faults

It's pretty easy to come up with corner-case scenarios where rseq does not progress without help from cpu_opv. For instance, a system with swap enabled which is under high memory pressure could trigger page faults at pretty much every rseq attempt. Although this scenario is extremely unlikely, rseq becomes the weak link of the chain.

4) Comparison with LL/SC

The layman versed in the load-link/store-conditional instructions of RISC architectures will notice the similarity between rseq and LL/SC critical sections. The comparison can even be pushed further: since debuggers can handle those LL/SC critical sections, they should be able to handle rseq c.s. in the same way.

First, the way gdb recognises LL/SC c.s.
patterns is very fragile: it's limited to specific common patterns, and will miss the pattern in all other cases. But fear not, having the rseq c.s. expose a __rseq_table section to debuggers removes that guessing part.

The main difference between LL/SC and rseq is that debuggers had to support single-stepping through LL/SC critical sections from the get-go in order to support a given architecture. For rseq, we're adding critical sections into pre-existing applications/libraries, so the user expectation is that tools don't break due to a library optimization.

5) Perform maintenance operations on per-cpu data

rseq c.s. are quite limited feature-wise: they need to end with a *single* commit instruction that updates a memory location. On the other hand, the cpu_opv system call can combine a sequence of operations that need to be executed with preemption disabled. While slower than rseq, this allows for more complex maintenance operations to be performed on per-cpu data, concurrently with rseq fast-paths, in cases where it's not possible to map those sequences of ops to a rseq.

6) Use cpu_opv as a generic implementation for architectures not implementing rseq assembly code

rseq critical sections require architecture-specific user-space code to be crafted in order to port an algorithm to a given architecture. In addition, it requires that the kernel architecture implementation adds hooks into signal delivery and resume to user-space.

In order to facilitate integration of rseq into user-space, cpu_opv can provide a (relatively slower) architecture-agnostic implementation of rseq. This means that user-space code can be ported to all architectures through use of cpu_opv initially, and have the fast-path use rseq whenever the asm code is implemented.
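To make the "generic implementation" point concrete, here is a rough user-space sketch (not part of this patch) of how a generic compare-and-store, the typical rseq commit shape, could be expressed as a cpu_opv operation vector. The struct layout below is a simplified 64-bit-only approximation of the uapi header in this patch (the LINUX_FIELD_u32_u64 fields are flattened to plain uint64_t), the helper name is hypothetical, and the syscall invocation itself is elided:

```c
#include <stdint.h>
#include <assert.h>

/* Simplified 64-bit-only approximation of the uapi enum/struct layout. */
enum cpu_op_type {
	CPU_COMPARE_EQ_OP,
	CPU_COMPARE_NE_OP,
	CPU_MEMCPY_OP,
	CPU_ADD_OP,
	CPU_OR_OP,
	CPU_AND_OP,
	CPU_XOR_OP,
	CPU_LSHIFT_OP,
	CPU_RSHIFT_OP,
	CPU_MB_OP,
};

struct cpu_op {
	int32_t op;	/* enum cpu_op_type */
	uint32_t len;	/* data length, in bytes */
	union {
		struct {
			uint64_t a, b;
			uint8_t expect_fault_a, expect_fault_b;
		} compare_op;
		struct {
			uint64_t dst, src;
			uint8_t expect_fault_dst, expect_fault_src;
		} memcpy_op;
	} u;
};

/*
 * Build a generic compare-and-store vector: compare *word against
 * *expect, then store *newval into *word (a word-sized memcpy).
 * Returns the number of ops filled in.
 */
static int opv_cmpstore_build(struct cpu_op *ops, intptr_t *word,
			      intptr_t *expect, intptr_t *newval)
{
	ops[0].op = CPU_COMPARE_EQ_OP;
	ops[0].len = sizeof(intptr_t);
	ops[0].u.compare_op.a = (uint64_t)(uintptr_t)word;
	ops[0].u.compare_op.b = (uint64_t)(uintptr_t)expect;
	ops[0].u.compare_op.expect_fault_a = 0;
	ops[0].u.compare_op.expect_fault_b = 0;

	ops[1].op = CPU_MEMCPY_OP;	/* the word store acting as commit */
	ops[1].len = sizeof(intptr_t);
	ops[1].u.memcpy_op.dst = (uint64_t)(uintptr_t)word;
	ops[1].u.memcpy_op.src = (uint64_t)(uintptr_t)newval;
	ops[1].u.memcpy_op.expect_fault_dst = 0;
	ops[1].u.memcpy_op.expect_fault_src = 0;
	return 2;
}
```

A library's slow path would fill such a vector and pass it, along with the target CPU number, to the cpu_opv system call.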
7) Allow libraries with multi-part algorithms to work on the same per-cpu data without affecting the allowed cpu mask

The lttng-ust tracer presents an interesting use-case for per-cpu buffers: the algorithm needs to update a "reserve" counter, serialize data into the buffer, and then update a "commit" counter _on the same per-cpu buffer_. Using rseq for both reserve and commit can bring significant performance benefits.

Clearly, if the rseq reserve fails, the algorithm can retry on a different per-cpu buffer. However, it's not that easy for the commit: it needs to be performed on the same per-cpu buffer as the reserve.

The cpu_opv system call solves that problem by receiving the cpu number on which the operation needs to be performed as argument. It can push the task to the right CPU if needed, and perform the operations there with preemption disabled.

Changing the allowed cpu mask for the current thread is not an acceptable alternative for a tracing library, because the application being traced does not expect that mask to be changed by libraries.

8) Ensure that data structures don't need store-release/load-acquire semantics to handle fall-back

cpu_opv performs the fall-back on the requested CPU by migrating the task to that CPU. Executing the slow-path on the right CPU ensures that store-release/load-acquire semantics are not required on either the fast-path or the slow-path.

**** rseq and cpu_opv use-cases ****

1) per-cpu spinlock

A per-cpu spinlock can be implemented as a rseq consisting of a comparison operation (== 0) on a word, and a word store (1), followed by an acquire barrier after the control dependency. The unlock path can be performed with a simple store-release of 0 to the word, which does not require rseq.

The cpu_opv fallback requires a single-word comparison (== 0) and a single-word store (1).

2) per-cpu statistics counters

Per-cpu statistics counters can be implemented as a rseq consisting of a final "add" instruction on a word as commit.
The cpu_opv fallback can be implemented as an "ADD" operation.

Besides statistics tracking, these counters can be used to implement user-space RCU per-cpu grace period tracking, for both single- and multi-process user-space RCU.

3) per-cpu LIFO linked-list (unlimited size stack)

A per-cpu LIFO linked-list has a "push" and a "pop" operation, which respectively add an item to the list and remove an item from the list.

The "push" operation can be implemented as a rseq consisting of a word comparison instruction against head followed by a word store (commit) to head. Its cpu_opv fallback can be implemented as a word-compare followed by a word-store as well.

The "pop" operation can be implemented as a rseq consisting of loading head, comparing it against NULL, loading the next pointer at the right offset within the head item, and storing the next pointer as a new head, returning the old head on success.

The cpu_opv fallback for "pop" differs from its rseq algorithm: cpu_opv requires knowing all pointers at system call entry so it can pin all pages, so cpu_opv cannot simply load head and then load the head->next address within the preempt-off critical section. User-space needs to pass the head and head->next addresses to the kernel, and the kernel needs to check that the head address is unchanged since it has been loaded by user-space. However, when accessing head->next in an ABA situation, it's possible that head is unchanged, but loading head->next can result in a page fault due to a concurrently freed head object. This is why the "expect_fault" operation field is introduced: if a fault is triggered by this access, "-EAGAIN" will be returned by cpu_opv rather than -EFAULT, thus indicating that the operation vector should be attempted again. The "pop" operation can thus be implemented as a word comparison of head against the head loaded by user-space, followed by a load of the head->next pointer (which may fault), and a store of that pointer as a new head.
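The "pop" fallback just described can be sketched as a two-op vector: compare head against the value loaded in user-space, then copy the old head's next pointer into the head slot, with expect_fault set on the next-pointer access. This is an illustrative user-space fragment (not part of this patch), using a simplified 64-bit-only approximation of the uapi struct cpu_op layout and a hypothetical helper name; the syscall itself is elided:

```c
#include <stdint.h>
#include <assert.h>

enum cpu_op_type {
	CPU_COMPARE_EQ_OP,
	CPU_COMPARE_NE_OP,
	CPU_MEMCPY_OP,
};

struct cpu_op {
	int32_t op;
	uint32_t len;
	union {
		struct {
			uint64_t a, b;
			uint8_t expect_fault_a, expect_fault_b;
		} compare_op;
		struct {
			uint64_t dst, src;
			uint8_t expect_fault_dst, expect_fault_src;
		} memcpy_op;
	} u;
};

struct lifo_node {
	struct lifo_node *next;
};

/*
 * Build the "pop" vector: check that *phead still equals *pexpect
 * (the head previously loaded in user-space), then copy
 * (*pexpect)->next into the head slot. The next-pointer load may
 * fault if the old head was concurrently freed and unmapped, hence
 * expect_fault_src = 1 so the kernel returns -EAGAIN (retry) instead
 * of -EFAULT.
 */
static int opv_pop_build(struct cpu_op *ops, struct lifo_node **phead,
			 struct lifo_node **pexpect)
{
	ops[0].op = CPU_COMPARE_EQ_OP;
	ops[0].len = sizeof(struct lifo_node *);
	ops[0].u.compare_op.a = (uint64_t)(uintptr_t)phead;
	ops[0].u.compare_op.b = (uint64_t)(uintptr_t)pexpect;
	ops[0].u.compare_op.expect_fault_a = 0;
	ops[0].u.compare_op.expect_fault_b = 0;

	ops[1].op = CPU_MEMCPY_OP;
	ops[1].len = sizeof(struct lifo_node *);
	ops[1].u.memcpy_op.dst = (uint64_t)(uintptr_t)phead;
	ops[1].u.memcpy_op.src = (uint64_t)(uintptr_t)&(*pexpect)->next;
	ops[1].u.memcpy_op.expect_fault_dst = 0;
	ops[1].u.memcpy_op.expect_fault_src = 1;	/* ABA: may be unmapped */
	return 2;
}
```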
4) per-cpu LIFO ring buffer with pointers to objects (fixed-sized stack)

This structure is useful for passing around allocated objects by passing pointers through a per-cpu fixed-sized stack.

The "push" side can be implemented with a check of the current offset against the maximum buffer length, followed by a rseq consisting of a comparison of the previously loaded offset against the current offset, a word "try store" operation into the next ring buffer array index (it's OK to abort after a try-store, since it's not the commit, and its side-effect can be overwritten), followed by a word-store to increment the current offset (commit).

The "push" cpu_opv fallback can be done with the comparison, and two consecutive word stores, all within the preempt-off section.

The "pop" side can be implemented with a check that the offset is not 0 (checking whether the buffer is empty), a load of the "head" pointer before the offset array index, followed by a rseq consisting of a word comparison checking that the offset is unchanged since previously loaded, another check ensuring that the "head" pointer is unchanged, followed by a store decrementing the current offset.

The cpu_opv "pop" can be implemented with the same algorithm as the rseq fast-path (compare, compare, store).

5) per-cpu LIFO ring buffer with pointers to objects (fixed-sized stack) supporting "peek" from remote CPU

In order to implement work queues with work-stealing between CPUs, it is useful to ensure that the offset "commit" in scenario 4) "push" has store-release semantics, thus allowing a remote CPU to load the offset with acquire semantics, and load the top pointer, in order to check if work-stealing should be performed. The task (work queue item) existence should be protected by other means, e.g. RCU.
If the peek operation notices that work-stealing should indeed be performed, a thread can use cpu_opv to move the task between per-cpu workqueues, by first invoking cpu_opv with the remote work queue cpu number as argument to pop the task, and then again as a "push" with the target work queue CPU number.

6) per-cpu LIFO ring buffer with data copy (fixed-sized stack) (with and without acquire-release)

This structure is useful for passing around data without requiring memory allocation, by copying the data content into a per-cpu fixed-sized stack.

The "push" operation is performed with an offset comparison against the buffer size (figuring out if the buffer is full), followed by a rseq consisting of a comparison of the offset, a try-memcpy attempting to copy the data content into the buffer (which can be aborted and overwritten), and a final store incrementing the offset.

The cpu_opv fallback needs the same operations, except that the memcpy is guaranteed to complete, given that it is performed with preemption disabled. This requires a memcpy operation supporting lengths up to 4kB.

The "pop" operation is similar to the "push", except that the offset is first compared to 0 to ensure the buffer is not empty. The copy source is the ring buffer, and the destination is an output buffer.

7) per-cpu FIFO ring buffer (fixed-sized queue)

This structure is useful wherever a FIFO behavior (queue) is needed. One major use-case is the tracer ring buffer.

An implementation of this ring buffer has a "reserve", followed by serialization of multiple bytes into the buffer, ended by a "commit". The "reserve" can be implemented as a rseq consisting of a word comparison followed by a word store. The reserve operation moves the producer "head". The multi-byte serialization can be performed non-atomically. Finally, the "commit" update can be performed with a rseq "add" commit instruction with store-release semantics.
The ring buffer consumer reads the commit value with load-acquire semantics to know whether it is safe to read from the ring buffer.

This use-case requires that both the "reserve" and "commit" operations be performed on the same per-cpu ring buffer, even if a migration happens between those operations. In the typical case, both operations will happen on the same CPU and use rseq. In the unlikely event of a migration, the cpu_opv system call will ensure the commit can be performed on the right CPU by migrating the task to that CPU.

On the consumer side, an alternative to using store-release and load-acquire on the commit counter would be to use cpu_opv to ensure the commit counter load is performed on the right CPU. This effectively allows moving a consumer thread between CPUs to execute close to the ring buffer cache lines it will read.

Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
CC: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
CC: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
CC: Paul Turner <pjt@xxxxxxxxxx>
CC: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
CC: Andrew Hunter <ahh@xxxxxxxxxx>
CC: Andy Lutomirski <luto@xxxxxxxxxxxxxx>
CC: Andi Kleen <andi@xxxxxxxxxxxxxx>
CC: Dave Watson <davejwatson@xxxxxx>
CC: Chris Lameter <cl@xxxxxxxxx>
CC: Ingo Molnar <mingo@xxxxxxxxxx>
CC: "H. Peter Anvin" <hpa@xxxxxxxxx>
CC: Ben Maurer <bmaurer@xxxxxx>
CC: Steven Rostedt <rostedt@xxxxxxxxxxx>
CC: Josh Triplett <josh@xxxxxxxxxxxxxxxx>
CC: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
CC: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
CC: Russell King <linux@xxxxxxxxxxxxxxxx>
CC: Catalin Marinas <catalin.marinas@xxxxxxx>
CC: Will Deacon <will.deacon@xxxxxxx>
CC: Michael Kerrisk <mtk.manpages@xxxxxxxxx>
CC: Boqun Feng <boqun.feng@xxxxxxxxx>
CC: linux-api@xxxxxxxxxxxxxxx
---
Changes since v1:

- handle CPU hotplug,
- cleanup implementation using function pointers: We can use function pointers to implement the operations rather than duplicating all the user-access code.
- refuse device pages: Performing cpu_opv operations on io map'd pages with preemption disabled could generate long preempt-off critical sections, which leads to unwanted scheduler latency. Return EFAULT if a device page is received as parameter.
- restrict op vector to 4216 bytes length sum: Restrict the operation vector to a length sum of:
  - 4096 bytes (typical page size on most architectures, should be enough for a string or structures),
  - 15 * 8 bytes (typical operations on integers or pointers).
  The goal here is to keep the duration of the preempt-off critical section short, so we don't add significant scheduler latency.
- Add INIT_ONSTACK macro: Introduce the CPU_OP_FIELD_u32_u64_INIT_ONSTACK() macro to ensure that users correctly initialize the upper bits of CPU_OP_FIELD_u32_u64() on their stack to 0 on 32-bit architectures.
- Add CPU_MB_OP operation: Use-cases with:
  - two consecutive stores,
  - a memcpy followed by a store,
  require a memory barrier before the final store operation. A typical use-case is a store-release on the final store. Given that this is a slow path, just providing an explicit full barrier instruction should be sufficient.
- Add expect fault field: The use-case of list_pop brings interesting challenges. With rseq, we can use rseq_cmpnev_storeoffp_load(), and therefore load a pointer, compare it against NULL, add an offset, and load the target "next" pointer from the object, all within a single rseq critical section.

  Life is not so easy for cpu_opv in this use-case, mainly because we need to pin all pages we are going to touch in the preempt-off critical section beforehand. So we need to know the target object (in which we apply an offset to fetch the next pointer) when we pin pages, before disabling preemption.

  So the approach is to load the head pointer and compare it against NULL in user-space, before doing the cpu_opv syscall. User-space can then compute the address of the head->next field, *without loading it*.
  The cpu_opv system call will first need to pin all pages associated with input data. This includes the page backing the head->next object, which may have been concurrently deallocated and unmapped. Therefore, in this case, getting -EFAULT when trying to pin those pages may happen: it just means they have been concurrently unmapped. This is an expected situation, and should just return -EAGAIN to user-space, so user-space can distinguish between "should retry" situations and actual errors that should be handled with extreme prejudice to the program (e.g. abort()).

  Therefore, add "expect_fault" fields along with op input address pointers, so user-space can identify whether a fault when getting a field should return EAGAIN rather than EFAULT.
- Add compiler barrier between operations: Adding a compiler barrier between store operations in a cpu_opv sequence can be useful when paired with the membarrier system call. An algorithm with paired slow and fast paths can use sys_membarrier on the slow path to replace fast-path memory barriers by compiler barriers. Adding an explicit compiler barrier between operations allows cpu_opv to be used as a fallback for operations meant to match the membarrier system call.

Changes since v2:

- Fix memory leak by introducing struct cpu_opv_pinned_pages. Suggested by Boqun Feng.
- Cast argument 1 passed to access_ok from integer to void __user *, fixing sparse warning.

Changes since v3:

- Fix !SMP by adding push_task_to_cpu() empty static inline.
- Add missing sys_cpu_opv() asmlinkage declaration to include/linux/syscalls.h.

Changes since v4:

- Cleanup based on Thomas Gleixner's feedback.
- Fault-in pages which are not faulted in yet (e.g. zero pages).
- Retry within the syscall when the scheduler migrates the thread away from the target CPU after migration, rather than returning EAGAIN to user-space.
- Move push_task_to_cpu() to its own patch.
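The resulting user-space contract is: -EAGAIN means "re-prepare the inputs and try again", while any other error (e.g. EFAULT) is a hard failure. A hedged sketch of that retry policy, with a hypothetical stub standing in for the real syscall wrapper (the stub and its failure injection are not part of this patch):

```c
#include <errno.h>

/*
 * Hypothetical stand-in for the cpu_opv syscall wrapper: fails with
 * EAGAIN a fixed number of times before succeeding, emulating e.g. an
 * old head page being unmapped under an expect_fault access.
 */
static int fail_count;

static int cpu_opv_stub(void)
{
	if (fail_count > 0) {
		fail_count--;
		errno = EAGAIN;
		return -1;
	}
	return 0;
}

/*
 * Retry policy implied by the changelog: EAGAIN means re-prepare
 * inputs and retry; any other errno is a real bug. Returns the number
 * of attempts taken on success, or -1 on hard error / exhaustion.
 */
static int cpu_opv_retry(int max_attempts)
{
	int attempts;

	for (attempts = 1; attempts <= max_attempts; attempts++) {
		/* A real caller would re-load head and head->next here. */
		if (cpu_opv_stub() == 0)
			return attempts;
		if (errno != EAGAIN)
			return -1;
	}
	return -1;
}
```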
---
Man page associated:

CPU_OPV(2)             Linux Programmer's Manual             CPU_OPV(2)

NAME
       cpu_opv - CPU preempt-off operation vector system call

SYNOPSIS
       #include <linux/cpu_opv.h>

       int cpu_opv(struct cpu_op * cpu_opv, int cpuopcnt, int cpu, int flags);

DESCRIPTION
       The cpu_opv system call executes a vector of operations on behalf of user-space on a specific CPU with preemption disabled.

       The operations available are: comparison, memcpy, add, or, and, xor, left shift, right shift, and memory barrier. The system call receives a CPU number from user-space as argument, which is the CPU on which those operations need to be performed. All pointers in the ops must have been set up to point to the per CPU memory of the CPU on which the operations should be executed. The "comparison" operation can be used to check that the data used in the preparation step did not change between preparation of system call inputs and operation execution within the preempt-off critical section.

       An overall maximum of 4216 bytes is enforced on the sum of operation lengths within an operation vector, so user-space cannot generate an overly long preempt-off critical section. Each operation is also limited to a length of 4096 bytes. A maximum limit of 16 operations per cpu_opv syscall invocation is enforced.

       If the thread is not running on the requested CPU, it is migrated to it.

       The layout of struct cpu_op is as follows:

       Fields

       op     Operation of type enum cpu_op_type to perform. This operation type selects the associated "u" union field.

       len    Length (in bytes) of data to consider for this operation.

       u.compare_op
              For CPU_COMPARE_EQ_OP and CPU_COMPARE_NE_OP, contains the a and b pointers to compare. The expect_fault_a and expect_fault_b fields indicate whether a page fault should be expected for each of those pointers. If expect_fault_a or expect_fault_b is set, EAGAIN is returned on fault, else EFAULT is returned. The len field is allowed to take values from 0 to 4096 for comparison operations.
       u.memcpy_op
              For CPU_MEMCPY_OP, contains the dst and src pointers, expressing a copy of src into dst. The expect_fault_dst and expect_fault_src fields indicate whether a page fault should be expected for each of those pointers. If expect_fault_dst or expect_fault_src is set, EAGAIN is returned on fault, else EFAULT is returned. The len field is allowed to take values from 0 to 4096 for memcpy operations.

       u.arithmetic_op
              For CPU_ADD_OP, contains the p, count, and expect_fault_p fields, which are respectively a pointer to the memory location to increment, the 64-bit signed integer value to add, and whether a page fault should be expected for p. If expect_fault_p is set, EAGAIN is returned on fault, else EFAULT is returned. The len field is allowed to take values of 1, 2, 4, or 8 bytes for arithmetic operations.

       u.bitwise_op
              For CPU_OR_OP, CPU_AND_OP, and CPU_XOR_OP, contains the p, mask, and expect_fault_p fields, which are respectively a pointer to the memory location to target, the mask to apply, and whether a page fault should be expected for p. If expect_fault_p is set, EAGAIN is returned on fault, else EFAULT is returned. The len field is allowed to take values of 1, 2, 4, or 8 bytes for bitwise operations.

       u.shift_op
              For CPU_LSHIFT_OP and CPU_RSHIFT_OP, contains the p, bits, and expect_fault_p fields, which are respectively a pointer to the memory location to target, the number of bits to shift either left or right, and whether a page fault should be expected for p. If expect_fault_p is set, EAGAIN is returned on fault, else EFAULT is returned. The len field is allowed to take values of 1, 2, 4, or 8 bytes for shift operations. The bits field is allowed to take values between 0 and 63.
       The enum cpu_op_type contains the following operations:

       · CPU_COMPARE_EQ_OP: Compare whether two memory locations are equal,
       · CPU_COMPARE_NE_OP: Compare whether two memory locations differ,
       · CPU_MEMCPY_OP: Copy a source memory location into a destination,
       · CPU_ADD_OP: Increment a target memory location by a given count,
       · CPU_OR_OP: Apply an "or" mask to a memory location,
       · CPU_AND_OP: Apply an "and" mask to a memory location,
       · CPU_XOR_OP: Apply a "xor" mask to a memory location,
       · CPU_LSHIFT_OP: Shift a memory location left by a given number of bits,
       · CPU_RSHIFT_OP: Shift a memory location right by a given number of bits,
       · CPU_MB_OP: Issue a memory barrier.

       All of the operations above provide single-copy atomicity guarantees for word-sized, word-aligned target pointers, for both loads and stores.

       The cpuopcnt argument is the number of elements in the cpu_opv array. It can take values from 0 to 16.

       The cpu argument is the CPU number on which the operation sequence needs to be executed.

       The flags argument is expected to be 0.

RETURN VALUE
       A return value of 0 indicates success. On error, -1 is returned, and errno is set appropriately. If a comparison operation fails, execution of the operation vector is stopped, and the return value is the index after the comparison operation (values between 1 and 16).

ERRORS
       EAGAIN The cpu_opv() system call should be attempted again.

       EINVAL Either flags contains an invalid value, or cpu contains an invalid value or a value not allowed by the current thread's allowed cpu mask, or cpuopcnt contains an invalid value, or the cpu_opv operation vector contains an invalid op value, or the cpu_opv operation vector contains an invalid len value, or the cpu_opv operation vector sum of len values is too large.

       ENOSYS The cpu_opv() system call is not implemented by this kernel.

       EFAULT cpu_opv is an invalid address, or a pointer contained within an operation is invalid (and a fault is not expected for that pointer).
VERSIONS The cpu_opv() system call was added in Linux 4.X (TODO). CONFORMING TO cpu_opv() is Linux-specific. SEE ALSO membarrier(2), rseq(2) Linux 2017-11-10 CPU_OPV(2) --- MAINTAINERS | 7 + include/linux/syscalls.h | 3 + include/uapi/linux/cpu_opv.h | 114 +++++ init/Kconfig | 14 + kernel/Makefile | 1 + kernel/cpu_opv.c | 1060 ++++++++++++++++++++++++++++++++++++++++++ kernel/sys_ni.c | 1 + 7 files changed, 1200 insertions(+) create mode 100644 include/uapi/linux/cpu_opv.h create mode 100644 kernel/cpu_opv.c diff --git a/MAINTAINERS b/MAINTAINERS index b8f6a99005b4..0b4e504f5003 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -3728,6 +3728,13 @@ B: https://bugzilla.kernel.org F: drivers/cpuidle/* F: include/linux/cpuidle.h +CPU NON-PREEMPTIBLE OPERATION VECTOR SUPPORT +M: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx> +L: linux-kernel@xxxxxxxxxxxxxxx +S: Supported +F: kernel/cpu_opv.c +F: include/uapi/linux/cpu_opv.h + CRAMFS FILESYSTEM M: Nicolas Pitre <nico@xxxxxxxxxx> S: Maintained diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h index 340650b4ec54..32d289f41f62 100644 --- a/include/linux/syscalls.h +++ b/include/linux/syscalls.h @@ -67,6 +67,7 @@ struct perf_event_attr; struct file_handle; struct sigaltstack; struct rseq; +struct cpu_op; union bpf_attr; #include <linux/types.h> @@ -943,5 +944,7 @@ asmlinkage long sys_statx(int dfd, const char __user *path, unsigned flags, unsigned mask, struct statx __user *buffer); asmlinkage long sys_rseq(struct rseq __user *rseq, uint32_t rseq_len, int flags, uint32_t sig); +asmlinkage long sys_cpu_opv(struct cpu_op __user *ucpuopv, int cpuopcnt, + int cpu, int flags); #endif diff --git a/include/uapi/linux/cpu_opv.h b/include/uapi/linux/cpu_opv.h new file mode 100644 index 000000000000..ccd8167fc189 --- /dev/null +++ b/include/uapi/linux/cpu_opv.h @@ -0,0 +1,114 @@ +#ifndef _UAPI_LINUX_CPU_OPV_H +#define _UAPI_LINUX_CPU_OPV_H + +/* + * linux/cpu_opv.h + * + * CPU preempt-off operation vector system call 
API + * + * Copyright (c) 2017 Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx> + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + * SOFTWARE. + */ + +#ifdef __KERNEL__ +# include <linux/types.h> +#else +# include <stdint.h> +#endif + +#include <linux/types_32_64.h> + +#define CPU_OP_VEC_LEN_MAX 16 +#define CPU_OP_ARG_LEN_MAX 24 +/* Maximum data len per operation. */ +#define CPU_OP_DATA_LEN_MAX 4096 +/* + * Maximum data len for overall vector. Restrict the amount of user-space + * data touched by the kernel in non-preemptible context, so it does not + * introduce long scheduler latencies. + * This allows one copy of up to 4096 bytes, and 15 operations touching 8 + * bytes each. + * This limit is applied to the sum of length specified for all operations + * in a vector. 
+ */ +#define CPU_OP_MEMCPY_EXPECT_LEN 4096 +#define CPU_OP_EXPECT_LEN 8 +#define CPU_OP_VEC_DATA_LEN_MAX \ + (CPU_OP_MEMCPY_EXPECT_LEN + \ + (CPU_OP_VEC_LEN_MAX - 1) * CPU_OP_EXPECT_LEN) + +enum cpu_op_type { + /* compare */ + CPU_COMPARE_EQ_OP, + CPU_COMPARE_NE_OP, + /* memcpy */ + CPU_MEMCPY_OP, + /* arithmetic */ + CPU_ADD_OP, + /* bitwise */ + CPU_OR_OP, + CPU_AND_OP, + CPU_XOR_OP, + /* shift */ + CPU_LSHIFT_OP, + CPU_RSHIFT_OP, + /* memory barrier */ + CPU_MB_OP, +}; + +/* Vector of operations to perform. Limited to 16. */ +struct cpu_op { + /* enum cpu_op_type. */ + int32_t op; + /* data length, in bytes. */ + uint32_t len; + union { + struct { + LINUX_FIELD_u32_u64(a); + LINUX_FIELD_u32_u64(b); + uint8_t expect_fault_a; + uint8_t expect_fault_b; + } compare_op; + struct { + LINUX_FIELD_u32_u64(dst); + LINUX_FIELD_u32_u64(src); + uint8_t expect_fault_dst; + uint8_t expect_fault_src; + } memcpy_op; + struct { + LINUX_FIELD_u32_u64(p); + int64_t count; + uint8_t expect_fault_p; + } arithmetic_op; + struct { + LINUX_FIELD_u32_u64(p); + uint64_t mask; + uint8_t expect_fault_p; + } bitwise_op; + struct { + LINUX_FIELD_u32_u64(p); + uint32_t bits; + uint8_t expect_fault_p; + } shift_op; + char __padding[CPU_OP_ARG_LEN_MAX]; + } u; +}; + +#endif /* _UAPI_LINUX_CPU_OPV_H */ diff --git a/init/Kconfig b/init/Kconfig index 88e36395390f..acf678e2363c 100644 --- a/init/Kconfig +++ b/init/Kconfig @@ -1404,6 +1404,7 @@ config RSEQ bool "Enable rseq() system call" if EXPERT default y depends on HAVE_RSEQ + select CPU_OPV select MEMBARRIER help Enable the restartable sequences system call. It provides a @@ -1414,6 +1415,19 @@ config RSEQ If unsure, say Y. +config CPU_OPV + bool "Enable cpu_opv() system call" if EXPERT + default y + help + Enable the CPU preempt-off operation vector system call. + It allows user-space to perform a sequence of operations on + per-cpu data with preemption disabled. 
Useful as + single-stepping fall-back for restartable sequences, and for + performing more complex operations on per-cpu data that would + not be otherwise possible to do with restartable sequences. + + If unsure, say Y. + config EMBEDDED bool "Embedded system" option allnoconfig_y diff --git a/kernel/Makefile b/kernel/Makefile index 3574669dafd9..cac8855196ff 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -113,6 +113,7 @@ obj-$(CONFIG_TORTURE_TEST) += torture.o obj-$(CONFIG_HAS_IOMEM) += memremap.o obj-$(CONFIG_RSEQ) += rseq.o +obj-$(CONFIG_CPU_OPV) += cpu_opv.o $(obj)/configs.o: $(obj)/config_data.h diff --git a/kernel/cpu_opv.c b/kernel/cpu_opv.c new file mode 100644 index 000000000000..1b921ae35088 --- /dev/null +++ b/kernel/cpu_opv.c @@ -0,0 +1,1060 @@ +/* + * CPU preempt-off operation vector system call + * + * It allows user-space to perform a sequence of operations on per-cpu + * data with preemption disabled. Useful as single-stepping fall-back + * for restartable sequences, and for performing more complex operations + * on per-cpu data that would not be otherwise possible to do with + * restartable sequences. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ *
+ * Copyright (C) 2017, EfficiOS Inc.,
+ * Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
+ */
+
+#include <linux/sched.h>
+#include <linux/uaccess.h>
+#include <linux/syscalls.h>
+#include <linux/cpu_opv.h>
+#include <linux/types.h>
+#include <linux/mutex.h>
+#include <linux/pagemap.h>
+#include <linux/mm.h>
+#include <asm/ptrace.h>
+#include <asm/byteorder.h>
+
+#include "sched/sched.h"
+
+/*
+ * A typical invocation of cpu_opv needs only a few pages. Keep struct page
+ * pointers in an array on the stack of the cpu_opv system call up to
+ * this limit, beyond which the array is dynamically allocated.
+ */
+#define NR_PAGE_PTRS_ON_STACK	8
+
+/* Maximum pages per op. */
+#define CPU_OP_MAX_PAGES	4
+
+/* Temporary on-stack buffer size for memcpy and compare operations. */
+#define TMP_BUFLEN		64
+
+union op_fn_data {
+	uint8_t _u8;
+	uint16_t _u16;
+	uint32_t _u32;
+	uint64_t _u64;
+#if (BITS_PER_LONG < 64)
+	uint32_t _u64_split[2];
+#endif
+};
+
+struct cpu_opv_page_ptrs {
+	struct page **pages;
+	size_t nr;
+	bool is_kmalloc;
+};
+
+typedef int (*op_fn_t)(union op_fn_data *data, uint64_t v, uint32_t len);
+
+/*
+ * Provide mutual exclusion for threads executing a cpu_opv against an
+ * offline CPU.
+ */
+static DEFINE_MUTEX(cpu_opv_offline_lock);
+
+/*
+ * The cpu_opv system call executes a vector of operations on behalf of
+ * user-space on a specific CPU with preemption disabled. It is inspired
+ * by the readv() and writev() system calls, which take a "struct iovec"
+ * array as argument.
+ *
+ * The operations available are: comparison, memcpy, add, or, and, xor,
+ * left shift, right shift, and memory barrier. The system call receives
+ * a CPU number from user-space as argument, which is the CPU on which
+ * those operations need to be performed. All pointers in the ops must
+ * have been set up to point to the per-CPU memory of the CPU on which
+ * the operations should be executed.
The "comparison" operation can be
+ * used to check that the data used in the preparation step did not
+ * change between preparation of system call inputs and operation
+ * execution within the preempt-off critical section.
+ *
+ * We require all pointer offsets to be calculated by user-space
+ * beforehand because we need to use get_user_pages_fast() to first pin
+ * all pages touched by each operation. This takes care of faulting-in
+ * the pages. Then, preemption is disabled, and the operations are
+ * performed atomically with respect to other thread execution on that
+ * CPU, without generating any page fault.
+ *
+ * An overall maximum of 4216 bytes is enforced on the sum of operation
+ * lengths within an operation vector, so user-space cannot generate an
+ * overly long preempt-off critical section (cache-cold critical section
+ * duration measured as 4.7µs on x86-64). Each operation is also limited
+ * to a length of 4096 bytes, meaning that an operation can touch a
+ * maximum of 4 pages (memcpy: 2 pages for source, 2 pages for
+ * destination if addresses are not aligned on page boundaries).
+ *
+ * If the thread is not running on the requested CPU, it is migrated to
+ * it.
+ */
+
+static unsigned long cpu_op_range_nr_pages(unsigned long addr,
+					   unsigned long len)
+{
+	return ((addr + len - 1) >> PAGE_SHIFT) - (addr >> PAGE_SHIFT) + 1;
+}
+
+static int cpu_op_count_pages(unsigned long addr, unsigned long len)
+{
+	unsigned long nr_pages;
+
+	if (!len)
+		return 0;
+	nr_pages = cpu_op_range_nr_pages(addr, len);
+	if (nr_pages > 2) {
+		WARN_ON(1);
+		return -EINVAL;
+	}
+	return nr_pages;
+}
+
+static struct page **cpu_op_alloc_pages_vector(int nr_pages)
+{
+	return kzalloc(nr_pages * sizeof(struct page *), GFP_KERNEL);
+}
+
+/*
+ * Check operation types and length parameters. Count number of pages.
+ */ +static int cpu_opv_check_op(struct cpu_op *op, int *nr_pages, uint32_t *sum) +{ + int ret; + + switch (op->op) { + case CPU_MB_OP: + break; + default: + *sum += op->len; + } + + /* Validate inputs. */ + switch (op->op) { + case CPU_COMPARE_EQ_OP: + case CPU_COMPARE_NE_OP: + case CPU_MEMCPY_OP: + if (op->len > CPU_OP_DATA_LEN_MAX) + return -EINVAL; + break; + case CPU_ADD_OP: + case CPU_OR_OP: + case CPU_AND_OP: + case CPU_XOR_OP: + switch (op->len) { + case 1: + case 2: + case 4: + case 8: + break; + default: + return -EINVAL; + } + break; + case CPU_LSHIFT_OP: + case CPU_RSHIFT_OP: + switch (op->len) { + case 1: + if (op->u.shift_op.bits > 7) + return -EINVAL; + break; + case 2: + if (op->u.shift_op.bits > 15) + return -EINVAL; + break; + case 4: + if (op->u.shift_op.bits > 31) + return -EINVAL; + break; + case 8: + if (op->u.shift_op.bits > 63) + return -EINVAL; + break; + default: + return -EINVAL; + } + break; + case CPU_MB_OP: + break; + default: + return -EINVAL; + } + + /* Count pages. 
*/ + switch (op->op) { + case CPU_COMPARE_EQ_OP: + case CPU_COMPARE_NE_OP: + ret = cpu_op_count_pages(op->u.compare_op.a, op->len); + if (ret < 0) + return ret; + *nr_pages += ret; + ret = cpu_op_count_pages(op->u.compare_op.b, op->len); + if (ret < 0) + return ret; + *nr_pages += ret; + break; + case CPU_MEMCPY_OP: + ret = cpu_op_count_pages(op->u.memcpy_op.dst, op->len); + if (ret < 0) + return ret; + *nr_pages += ret; + ret = cpu_op_count_pages(op->u.memcpy_op.src, op->len); + if (ret < 0) + return ret; + *nr_pages += ret; + break; + case CPU_ADD_OP: + ret = cpu_op_count_pages(op->u.arithmetic_op.p, op->len); + if (ret < 0) + return ret; + *nr_pages += ret; + break; + case CPU_OR_OP: + case CPU_AND_OP: + case CPU_XOR_OP: + ret = cpu_op_count_pages(op->u.bitwise_op.p, op->len); + if (ret < 0) + return ret; + *nr_pages += ret; + break; + case CPU_LSHIFT_OP: + case CPU_RSHIFT_OP: + ret = cpu_op_count_pages(op->u.shift_op.p, op->len); + if (ret < 0) + return ret; + *nr_pages += ret; + break; + case CPU_MB_OP: + break; + default: + return -EINVAL; + } + return 0; +} + +/* + * Check operation types and length parameters. Count number of pages. + */ +static int cpu_opv_check(struct cpu_op *cpuopv, int cpuopcnt, int *nr_pages) +{ + uint32_t sum = 0; + int i, ret; + + for (i = 0; i < cpuopcnt; i++) { + ret = cpu_opv_check_op(&cpuopv[i], nr_pages, &sum); + if (ret) + return ret; + } + if (sum > CPU_OP_VEC_DATA_LEN_MAX) + return -EINVAL; + return 0; +} + +/** + * fault_in_user_writeable() - Fault in user address and verify RW access + * @uaddr: pointer to faulting user space address + */ +static int fault_in_user_writeable(unsigned long uaddr) +{ + struct mm_struct *mm = current->mm; + int ret; + + down_read(&mm->mmap_sem); + ret = fixup_user_fault(current, mm, uaddr, + FAULT_FLAG_WRITE, NULL); + up_read(&mm->mmap_sem); + + return ret < 0 ? ret : 0; +} + +/* + * Refusing device pages, the zero page, pages in the gate area, and + * special mappings. 
Handle page swapping through retry. Fault in the page if
+ * needed.
+ */
+static int cpu_op_check_page(struct page *page, unsigned long addr)
+{
+	struct address_space *mapping;
+
+	if (is_zone_device_page(page))
+		return -EFAULT;
+
+	/*
+	 * The page lock protects many things, but in this context the page
+	 * lock stabilizes mapping, prevents inode freeing in the shared
+	 * file-backed region case, and guards against movement to swap
+	 * cache.
+	 *
+	 * Strictly speaking, the page lock is not needed in all cases being
+	 * considered here, and the page lock forces unnecessary
+	 * serialization. From this point on, mapping will be re-verified
+	 * if necessary, and the page lock will be acquired only if it is
+	 * unavoidable.
+	 *
+	 * Mapping checks require the head page for any compound page, so
+	 * the head page and mapping are looked up now.
+	 */
+	page = compound_head(page);
+	mapping = READ_ONCE(page->mapping);
+
+	/*
+	 * If page->mapping is NULL, then it cannot be a PageAnon
+	 * page; but it might be the ZERO_PAGE or in the gate area or
+	 * in a special mapping (all cases which we are happy to fail);
+	 * or it may have been a good file page when get_user_pages_fast
+	 * found it, but truncated or holepunched or subjected to
+	 * invalidate_complete_page2 before we got the page lock (also
+	 * cases which we are happy to fail). And we hold a reference,
+	 * so refcount care in invalidate_complete_page's remove_mapping
+	 * prevents drop_caches from setting mapping to NULL beneath us.
+	 *
+	 * The case we do have to guard against is when memory pressure made
+	 * shmem_writepage move it from filecache to swapcache beneath us:
+	 * an unlikely race, but we do need to retry for page->mapping.
+	 */
+	if (!mapping) {
+		int shmem_swizzled, ret;
+
+		/*
+		 * Check again with page lock held to guard against
+		 * memory pressure making shmem_writepage move the page
+		 * from filecache to swapcache.
+ */ + lock_page(page); + shmem_swizzled = PageSwapCache(page) || page->mapping; + unlock_page(page); + if (shmem_swizzled) + return -EAGAIN; + /* + * Page needs to be faulted-in. If it succeeds, return + * -EAGAIN to retry. + */ + ret = fault_in_user_writeable(addr); + if (!ret) + return -EAGAIN; + return ret; + } + return 0; +} + +static int cpu_op_check_pages(struct page **pages, + unsigned long nr_pages, + unsigned long addr) +{ + unsigned long i; + + for (i = 0; i < nr_pages; i++) { + int ret; + + ret = cpu_op_check_page(pages[i], addr); + if (ret) + return ret; + addr += PAGE_SIZE; + } + return 0; +} + +static int cpu_op_pin_pages(unsigned long addr, unsigned long len, + struct cpu_opv_page_ptrs *page_ptrs, + int write) +{ + struct page *pages[2]; + int ret, nr_pages, nr_put_pages, n; + + nr_pages = cpu_op_count_pages(addr, len); + if (!nr_pages) + return 0; +again: + ret = get_user_pages_fast(addr, nr_pages, write, pages); + if (ret < nr_pages) { + if (ret >= 0) { + nr_put_pages = ret; + ret = -EFAULT; + } else { + nr_put_pages = 0; + } + goto error; + } + ret = cpu_op_check_pages(pages, nr_pages, addr); + if (ret) { + nr_put_pages = nr_pages; + goto error; + } + for (n = 0; n < nr_pages; n++) + page_ptrs->pages[page_ptrs->nr++] = pages[n]; + return 0; + +error: + for (n = 0; n < nr_put_pages; n++) + put_page(pages[n]); + /* + * Retry if a page has been faulted in, or is being swapped in. 
+ */ + if (ret == -EAGAIN) + goto again; + return ret; +} + +static int cpu_opv_pin_pages_op(struct cpu_op *op, + struct cpu_opv_page_ptrs *page_ptrs, + bool *expect_fault) +{ + int ret; + + switch (op->op) { + case CPU_COMPARE_EQ_OP: + case CPU_COMPARE_NE_OP: + ret = -EFAULT; + *expect_fault = op->u.compare_op.expect_fault_a; + if (!access_ok(VERIFY_READ, + (void __user *)op->u.compare_op.a, + op->len)) + return ret; + ret = cpu_op_pin_pages(op->u.compare_op.a, op->len, + page_ptrs, 0); + if (ret) + return ret; + ret = -EFAULT; + *expect_fault = op->u.compare_op.expect_fault_b; + if (!access_ok(VERIFY_READ, + (void __user *)op->u.compare_op.b, + op->len)) + return ret; + ret = cpu_op_pin_pages(op->u.compare_op.b, op->len, + page_ptrs, 0); + if (ret) + return ret; + break; + case CPU_MEMCPY_OP: + ret = -EFAULT; + *expect_fault = op->u.memcpy_op.expect_fault_dst; + if (!access_ok(VERIFY_WRITE, + (void __user *)op->u.memcpy_op.dst, + op->len)) + return ret; + ret = cpu_op_pin_pages(op->u.memcpy_op.dst, op->len, + page_ptrs, 1); + if (ret) + return ret; + ret = -EFAULT; + *expect_fault = op->u.memcpy_op.expect_fault_src; + if (!access_ok(VERIFY_READ, + (void __user *)op->u.memcpy_op.src, + op->len)) + return ret; + ret = cpu_op_pin_pages(op->u.memcpy_op.src, op->len, + page_ptrs, 0); + if (ret) + return ret; + break; + case CPU_ADD_OP: + ret = -EFAULT; + *expect_fault = op->u.arithmetic_op.expect_fault_p; + if (!access_ok(VERIFY_WRITE, + (void __user *)op->u.arithmetic_op.p, + op->len)) + return ret; + ret = cpu_op_pin_pages(op->u.arithmetic_op.p, op->len, + page_ptrs, 1); + if (ret) + return ret; + break; + case CPU_OR_OP: + case CPU_AND_OP: + case CPU_XOR_OP: + ret = -EFAULT; + *expect_fault = op->u.bitwise_op.expect_fault_p; + if (!access_ok(VERIFY_WRITE, + (void __user *)op->u.bitwise_op.p, + op->len)) + return ret; + ret = cpu_op_pin_pages(op->u.bitwise_op.p, op->len, + page_ptrs, 1); + if (ret) + return ret; + break; + case CPU_LSHIFT_OP: + case CPU_RSHIFT_OP: + 
ret = -EFAULT;
+		*expect_fault = op->u.shift_op.expect_fault_p;
+		if (!access_ok(VERIFY_WRITE,
+			       (void __user *)op->u.shift_op.p,
+			       op->len))
+			return ret;
+		ret = cpu_op_pin_pages(op->u.shift_op.p, op->len,
+				       page_ptrs, 1);
+		if (ret)
+			return ret;
+		break;
+	case CPU_MB_OP:
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+static int cpu_opv_pin_pages(struct cpu_op *cpuop, int cpuopcnt,
+			     struct cpu_opv_page_ptrs *page_ptrs)
+{
+	int ret, i;
+	bool expect_fault = false;
+
+	/* Check access, pin pages. */
+	for (i = 0; i < cpuopcnt; i++) {
+		ret = cpu_opv_pin_pages_op(&cpuop[i], page_ptrs,
+					   &expect_fault);
+		if (ret)
+			goto error;
+	}
+	return 0;
+
+error:
+	/*
+	 * If a faulting access is expected, return EAGAIN to user-space.
+	 * It allows user-space to distinguish a fault caused by an access
+	 * which is expected to fault (e.g. due to concurrent unmapping of
+	 * underlying memory) from an unexpected fault from which a retry
+	 * would not recover.
+	 */
+	if (ret == -EFAULT && expect_fault)
+		return -EAGAIN;
+	return ret;
+}
+
+static int __op_get_user(union op_fn_data *data, void __user *p, size_t len)
+{
+	switch (len) {
+	case 1:	return __get_user(data->_u8, (uint8_t __user *)p);
+	case 2:	return __get_user(data->_u16, (uint16_t __user *)p);
+	case 4:	return __get_user(data->_u32, (uint32_t __user *)p);
+	case 8:
+#if (BITS_PER_LONG == 64)
+		return __get_user(data->_u64, (uint64_t __user *)p);
+#else
+	{
+		int ret;
+
+		ret = __get_user(data->_u64_split[0],
+				 (uint32_t __user *)p);
+		if (ret)
+			return ret;
+		return __get_user(data->_u64_split[1],
+				 (uint32_t __user *)p + 1);
+	}
+#endif
+	default:
+		return -EINVAL;
+	}
+}
+
+static int __op_put_user(union op_fn_data *data, void __user *p, size_t len)
+{
+	switch (len) {
+	case 1:	return __put_user(data->_u8, (uint8_t __user *)p);
+	case 2:	return __put_user(data->_u16, (uint16_t __user *)p);
+	case 4:	return __put_user(data->_u32, (uint32_t __user *)p);
+	case 8:
+#if (BITS_PER_LONG == 64)
return __put_user(data->_u64, (uint64_t __user *)p); +#else + { + int ret; + + ret = __put_user(data->_u64_split[0], + (uint32_t __user *)p); + if (ret) + return ret; + return __put_user(data->_u64_split[1], + (uint32_t __user *)p + 1); + } +#endif + default: + return -EINVAL; + } +} + +/* Return 0 if same, > 0 if different, < 0 on error. */ +static int do_cpu_op_compare_iter(void __user *a, void __user *b, uint32_t len) +{ + char bufa[TMP_BUFLEN], bufb[TMP_BUFLEN]; + uint32_t compared = 0; + + while (compared != len) { + unsigned long to_compare; + + to_compare = min_t(uint32_t, TMP_BUFLEN, len - compared); + if (__copy_from_user_inatomic(bufa, a + compared, to_compare)) + return -EFAULT; + if (__copy_from_user_inatomic(bufb, b + compared, to_compare)) + return -EFAULT; + if (memcmp(bufa, bufb, to_compare)) + return 1; + compared += to_compare; + } + return 0; +} + +/* Return 0 if same, > 0 if different, < 0 on error. */ +static int do_cpu_op_compare(unsigned long _a, unsigned long _b, uint32_t len) +{ + void __user *a = (void __user *)_a; + void __user *b = (void __user *)_b; + int ret = -EFAULT; + union op_fn_data tmp[2]; + + switch (len) { + case 1: + case 2: + case 4: + case 8: + break; + default: + return do_cpu_op_compare_iter(a, b, len); + } + + pagefault_disable(); + + if (__op_get_user(&tmp[0], a, len)) + goto end; + if (__op_get_user(&tmp[1], b, len)) + goto end; + + switch (len) { + case 1: + ret = !!(tmp[0]._u8 != tmp[1]._u8); + break; + case 2: + ret = !!(tmp[0]._u16 != tmp[1]._u16); + break; + case 4: + ret = !!(tmp[0]._u32 != tmp[1]._u32); + break; + case 8: + ret = !!(tmp[0]._u64 != tmp[1]._u64); + break; + default: + break; + } +end: + pagefault_enable(); + return ret; +} + +/* Return 0 on success, < 0 on error. 
*/ +static int do_cpu_op_memcpy_iter(void __user *dst, void __user *src, + uint32_t len) +{ + char buf[TMP_BUFLEN]; + uint32_t copied = 0; + + while (copied != len) { + unsigned long to_copy; + + to_copy = min_t(uint32_t, TMP_BUFLEN, len - copied); + if (__copy_from_user_inatomic(buf, src + copied, to_copy)) + return -EFAULT; + if (__copy_to_user_inatomic(dst + copied, buf, to_copy)) + return -EFAULT; + copied += to_copy; + } + return 0; +} + +/* Return 0 on success, < 0 on error. */ +static int do_cpu_op_memcpy(unsigned long _dst, unsigned long _src, + uint32_t len) +{ + void __user *dst = (void __user *)_dst; + void __user *src = (void __user *)_src; + int ret = -EFAULT; + union op_fn_data tmp; + + switch (len) { + case 1: + case 2: + case 4: + case 8: + break; + default: + return do_cpu_op_memcpy_iter(dst, src, len); + } + + pagefault_disable(); + + if (__op_get_user(&tmp, src, len)) + goto end; + if (__op_put_user(&tmp, dst, len)) + goto end; + ret = 0; +end: + pagefault_enable(); + return ret; +} + +static int op_add_fn(union op_fn_data *data, uint64_t count, uint32_t len) +{ + int ret = 0; + + switch (len) { + case 1: + data->_u8 += (uint8_t)count; + break; + case 2: + data->_u16 += (uint16_t)count; + break; + case 4: + data->_u32 += (uint32_t)count; + break; + case 8: + data->_u64 += (uint64_t)count; + break; + default: + ret = -EINVAL; + break; + } + return ret; +} + +static int op_or_fn(union op_fn_data *data, uint64_t mask, uint32_t len) +{ + int ret = 0; + + switch (len) { + case 1: + data->_u8 |= (uint8_t)mask; + break; + case 2: + data->_u16 |= (uint16_t)mask; + break; + case 4: + data->_u32 |= (uint32_t)mask; + break; + case 8: + data->_u64 |= (uint64_t)mask; + break; + default: + ret = -EINVAL; + break; + } + return ret; +} + +static int op_and_fn(union op_fn_data *data, uint64_t mask, uint32_t len) +{ + int ret = 0; + + switch (len) { + case 1: + data->_u8 &= (uint8_t)mask; + break; + case 2: + data->_u16 &= (uint16_t)mask; + break; + case 4: + 
data->_u32 &= (uint32_t)mask; + break; + case 8: + data->_u64 &= (uint64_t)mask; + break; + default: + ret = -EINVAL; + break; + } + return ret; +} + +static int op_xor_fn(union op_fn_data *data, uint64_t mask, uint32_t len) +{ + int ret = 0; + + switch (len) { + case 1: + data->_u8 ^= (uint8_t)mask; + break; + case 2: + data->_u16 ^= (uint16_t)mask; + break; + case 4: + data->_u32 ^= (uint32_t)mask; + break; + case 8: + data->_u64 ^= (uint64_t)mask; + break; + default: + ret = -EINVAL; + break; + } + return ret; +} + +static int op_lshift_fn(union op_fn_data *data, uint64_t bits, uint32_t len) +{ + int ret = 0; + + switch (len) { + case 1: + data->_u8 <<= (uint8_t)bits; + break; + case 2: + data->_u16 <<= (uint16_t)bits; + break; + case 4: + data->_u32 <<= (uint32_t)bits; + break; + case 8: + data->_u64 <<= (uint64_t)bits; + break; + default: + ret = -EINVAL; + break; + } + return ret; +} + +static int op_rshift_fn(union op_fn_data *data, uint64_t bits, uint32_t len) +{ + int ret = 0; + + switch (len) { + case 1: + data->_u8 >>= (uint8_t)bits; + break; + case 2: + data->_u16 >>= (uint16_t)bits; + break; + case 4: + data->_u32 >>= (uint32_t)bits; + break; + case 8: + data->_u64 >>= (uint64_t)bits; + break; + default: + ret = -EINVAL; + break; + } + return ret; +} + +/* Return 0 on success, < 0 on error. */ +static int do_cpu_op_fn(op_fn_t op_fn, unsigned long _p, uint64_t v, + uint32_t len) +{ + union op_fn_data tmp; + void __user *p = (void __user *)_p; + int ret = -EFAULT; + + pagefault_disable(); + if (__op_get_user(&tmp, p, len)) + goto end; + if (op_fn(&tmp, v, len)) + goto end; + if (__op_put_user(&tmp, p, len)) + goto end; + ret = 0; +end: + pagefault_enable(); + return ret; +} + +/* + * Return negative value on error, positive value if comparison + * fails, 0 on success. + */ +static int __do_cpu_opv_op(struct cpu_op *op) +{ + int ret; + + /* Guarantee a compiler barrier between each operation. 
*/ + barrier(); + + switch (op->op) { + case CPU_COMPARE_EQ_OP: + ret = do_cpu_op_compare(op->u.compare_op.a, + op->u.compare_op.b, + op->len); + if (ret) + return ret; + break; + case CPU_COMPARE_NE_OP: + ret = do_cpu_op_compare(op->u.compare_op.a, + op->u.compare_op.b, + op->len); + if (ret < 0) + return ret; + /* + * Stop execution, return positive value if comparison + * is identical. + */ + if (ret == 0) + return 1; + break; + case CPU_MEMCPY_OP: + ret = do_cpu_op_memcpy(op->u.memcpy_op.dst, + op->u.memcpy_op.src, + op->len); + if (ret) + return ret; + break; + case CPU_ADD_OP: + ret = do_cpu_op_fn(op_add_fn, op->u.arithmetic_op.p, + op->u.arithmetic_op.count, op->len); + if (ret) + return ret; + break; + case CPU_OR_OP: + ret = do_cpu_op_fn(op_or_fn, op->u.bitwise_op.p, + op->u.bitwise_op.mask, op->len); + if (ret) + return ret; + break; + case CPU_AND_OP: + ret = do_cpu_op_fn(op_and_fn, op->u.bitwise_op.p, + op->u.bitwise_op.mask, op->len); + if (ret) + return ret; + break; + case CPU_XOR_OP: + ret = do_cpu_op_fn(op_xor_fn, op->u.bitwise_op.p, + op->u.bitwise_op.mask, op->len); + if (ret) + return ret; + break; + case CPU_LSHIFT_OP: + ret = do_cpu_op_fn(op_lshift_fn, op->u.shift_op.p, + op->u.shift_op.bits, op->len); + if (ret) + return ret; + break; + case CPU_RSHIFT_OP: + ret = do_cpu_op_fn(op_rshift_fn, op->u.shift_op.p, + op->u.shift_op.bits, op->len); + if (ret) + return ret; + break; + case CPU_MB_OP: + /* Memory barrier provided by this operation. */ + smp_mb(); + break; + default: + return -EINVAL; + } + return 0; +} + +static int __do_cpu_opv(struct cpu_op *cpuop, int cpuopcnt) +{ + int i, ret; + + for (i = 0; i < cpuopcnt; i++) { + ret = __do_cpu_opv_op(&cpuop[i]); + /* If comparison fails, stop execution and return index + 1. */ + if (ret > 0) + return i + 1; + /* On error, stop execution. 
*/ + if (ret < 0) + return ret; + } + return 0; +} + +static int do_cpu_opv(struct cpu_op *cpuop, int cpuopcnt, int cpu) +{ + int ret; + +retry: + if (cpu != raw_smp_processor_id()) { + ret = push_task_to_cpu(current, cpu); + if (ret) + goto check_online; + } + preempt_disable(); + if (cpu != smp_processor_id()) { + preempt_enable(); + goto retry; + } + ret = __do_cpu_opv(cpuop, cpuopcnt); + preempt_enable(); + return ret; + +check_online: + if (!cpu_possible(cpu)) + return -EINVAL; + get_online_cpus(); + if (cpu_online(cpu)) { + put_online_cpus(); + goto retry; + } + /* + * CPU is offline. Perform operation from the current CPU with + * cpu_online read lock held, preventing that CPU from coming online, + * and with mutex held, providing mutual exclusion against other + * CPUs also finding out about an offline CPU. + */ + mutex_lock(&cpu_opv_offline_lock); + ret = __do_cpu_opv(cpuop, cpuopcnt); + mutex_unlock(&cpu_opv_offline_lock); + put_online_cpus(); + return ret; +} + +/* + * cpu_opv - execute operation vector on a given CPU with preempt off. + * + * Userspace should pass current CPU number as parameter. 
+ */
+SYSCALL_DEFINE4(cpu_opv, struct cpu_op __user *, ucpuopv, int, cpuopcnt,
+		int, cpu, int, flags)
+{
+	struct cpu_op cpuopv[CPU_OP_VEC_LEN_MAX];
+	struct page *page_ptrs_on_stack[NR_PAGE_PTRS_ON_STACK];
+	struct cpu_opv_page_ptrs page_ptrs = {
+		.pages = page_ptrs_on_stack,
+		.nr = 0,
+		.is_kmalloc = false,
+	};
+	int ret, i, nr_pages = 0;
+
+	if (unlikely(flags))
+		return -EINVAL;
+	if (unlikely(cpu < 0))
+		return -EINVAL;
+	if (cpuopcnt < 0 || cpuopcnt > CPU_OP_VEC_LEN_MAX)
+		return -EINVAL;
+	if (copy_from_user(cpuopv, ucpuopv, cpuopcnt * sizeof(struct cpu_op)))
+		return -EFAULT;
+	ret = cpu_opv_check(cpuopv, cpuopcnt, &nr_pages);
+	if (ret)
+		return ret;
+	if (nr_pages > NR_PAGE_PTRS_ON_STACK) {
+		page_ptrs.pages = cpu_op_alloc_pages_vector(nr_pages);
+		if (!page_ptrs.pages)
+			return -ENOMEM;
+		page_ptrs.is_kmalloc = true;
+	}
+	ret = cpu_opv_pin_pages(cpuopv, cpuopcnt, &page_ptrs);
+	if (ret)
+		goto end;
+	ret = do_cpu_opv(cpuopv, cpuopcnt, cpu);
+end:
+	for (i = 0; i < page_ptrs.nr; i++)
+		put_page(page_ptrs.pages[i]);
+	if (page_ptrs.is_kmalloc)
+		kfree(page_ptrs.pages);
+	return ret;
+}
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index bfa1ee1bf669..59e622296dc3 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -262,3 +262,4 @@ cond_syscall(sys_pkey_free);
 
 /* restartable sequence */
 cond_syscall(sys_rseq);
+cond_syscall(sys_cpu_opv);
-- 
2.11.0