On 04.04.24 14:31, Alice Ryhl wrote:
> Adds a new struct called `Page` that wraps a pointer to `struct page`.
> This struct is assumed to hold ownership over the page, so that Rust
> code can allocate and manage pages directly.
>
> The page type has various methods for reading and writing into the page.
> These methods will temporarily map the page to allow the operation. All
> of these methods use a helper that takes an offset and length, performs
> bounds checks, and returns a pointer to the given offset in the page.
>
> This patch only adds support for pages of order zero, as that is all
> Rust Binder needs. However, it is written to make it easy to add support
> for higher-order pages in the future. To do that, you would add a const
> generic parameter to `Page` that specifies the order. Most of the
> methods do not need to be adjusted, as the logic for dealing with
> mapping multiple pages at once can be isolated to just the
> `with_pointer_into_page` method. Finally, the struct can be renamed to
> `Pages<ORDER>`, and the type alias `Page = Pages<0>` can be introduced.

This part seems outdated; I think we would probably make `ORDER` default
to 0 instead.

>
> Rust Binder needs to manage pages directly as that is how transactions
> are delivered: Each process has an mmap'd region for incoming
> transactions. When an incoming transaction arrives, the Binder driver
> will choose a region in the mmap, allocate and map the relevant pages
> manually, and copy the incoming transaction directly into the page. This
> architecture allows the driver to copy transactions directly from the
> address space of one process to another, without an intermediate copy
> to a kernel buffer.

[...]

> diff --git a/rust/kernel/page.rs b/rust/kernel/page.rs
> new file mode 100644
> index 000000000000..5aba0261242d
> --- /dev/null
> +++ b/rust/kernel/page.rs
> @@ -0,0 +1,259 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Kernel page allocation and management.
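To make the `ORDER` default concrete, roughly what I have in mind, as a
standalone user-space sketch (illustrative names only, not the real
`bindings` types or the kernel `Page` API):

```rust
// Standalone sketch, not kernel code: with a default on `ORDER`,
// plain `Pages` already denotes the order-zero case, so the
// `Page = Pages<0>` alias from the commit message becomes optional.
pub struct Pages<const ORDER: u32 = 0>;

impl<const ORDER: u32> Pages<ORDER> {
    /// Bytes covered by an allocation of this order: PAGE_SIZE << ORDER.
    pub const fn size(page_size: usize) -> usize {
        page_size << ORDER
    }
}

/// Equivalent to `Pages<0>` via the default parameter.
pub type Page = Pages;
```

With the default in place, existing order-zero users would not need to
spell out any generic argument when the rename happens.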
> +
> +use crate::{bindings, error::code::*, error::Result, uaccess::UserSliceReader};
> +use core::{
> +    alloc::AllocError,
> +    ptr::{self, NonNull},
> +};
> +
> +/// A bitwise shift for the page size.
> +#[allow(clippy::unnecessary_cast)]

Why can't you remove the cast?

> +pub const PAGE_SHIFT: usize = bindings::PAGE_SHIFT as usize;
> +
> +/// The number of bytes in a page.
> +#[allow(clippy::unnecessary_cast)]
> +pub const PAGE_SIZE: usize = bindings::PAGE_SIZE as usize;
> +
> +/// A bitmask that gives the page containing a given address.
> +pub const PAGE_MASK: usize = !(PAGE_SIZE-1);

This line doesn't seem to be formatted correctly; `rustfmt` would put
spaces around the `-` (`PAGE_SIZE - 1`).

> +
> +/// Flags for the "get free page" function that underlies all memory allocations.
> +pub mod flags {
> +    /// gfp flags.
> +    #[allow(non_camel_case_types)]
> +    pub type gfp_t = bindings::gfp_t;
> +
> +    /// `GFP_KERNEL` is typical for kernel-internal allocations. The caller requires `ZONE_NORMAL`
> +    /// or a lower zone for direct access but can direct reclaim.
> +    pub const GFP_KERNEL: gfp_t = bindings::GFP_KERNEL;
> +    /// `GFP_ZERO` returns a zeroed page on success.
> +    pub const __GFP_ZERO: gfp_t = bindings::__GFP_ZERO;
> +    /// `GFP_HIGHMEM` indicates that the allocated memory may be located in high memory.
> +    pub const __GFP_HIGHMEM: gfp_t = bindings::__GFP_HIGHMEM;
> +}
> +
> +/// A pointer to a page that owns the page allocation.
> +///
> +/// # Invariants
> +///
> +/// The pointer is valid, and has ownership over the page.
> +pub struct Page {
> +    page: NonNull<bindings::page>,
> +}
> +
> +// SAFETY: Pages have no logic that relies on them staying on a given thread, so
> +// moving them across threads is safe.
> +unsafe impl Send for Page {}
> +
> +// SAFETY: Pages have no logic that relies on them not being accessed
> +// concurrently, so accessing them concurrently is safe.
> +unsafe impl Sync for Page {}
> +
> +impl Page {
> +    /// Allocates a new page.
> +    pub fn alloc_page(gfp_flags: flags::gfp_t) -> Result<Self, AllocError> {
> +        // SAFETY: Depending on the value of `gfp_flags`, this call may sleep.
> +        // Other than that, it is always safe to call this method.
> +        let page = unsafe { bindings::alloc_pages(gfp_flags, 0) };
> +        let page = NonNull::new(page).ok_or(AllocError)?;
> +        // INVARIANT: We just successfully allocated a page, so we now have
> +        // ownership of the newly allocated page. We transfer that ownership to
> +        // the new `Page` object.
> +        Ok(Self { page })
> +    }
> +
> +    /// Returns a raw pointer to the page.
> +    pub fn as_ptr(&self) -> *mut bindings::page {
> +        self.page.as_ptr()
> +    }
> +
> +    /// Runs a piece of code with this page mapped to an address.
> +    ///
> +    /// The page is unmapped when this call returns.
> +    ///
> +    /// # Using the raw pointer
> +    ///
> +    /// It is up to the caller to use the provided raw pointer correctly. The
> +    /// pointer is valid for `PAGE_SIZE` bytes and for the duration in which the
> +    /// closure is called. The pointer might only be mapped on the current
> +    /// thread, and when that is the case, dereferencing it on other threads is
> +    /// UB. Other than that, the usual rules for dereferencing a raw pointer
> +    /// apply: don't cause data races, the memory may be uninitialized, and so
> +    /// on.
> +    ///
> +    /// If multiple threads map the same page at the same time, then they may
> +    /// reference with different addresses. However, even if the addresses are
> +    /// different, the underlying memory is still the same for these purposes
> +    /// (e.g., it's still a data race if they both write to the same underlying
> +    /// byte at the same time).

This is nice.

--
Cheers,
Benno

> +    fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
> +        // SAFETY: `page` is valid due to the type invariants on `Page`.
> +        let mapped_addr = unsafe { bindings::kmap_local_page(self.as_ptr()) };
> +
> +        let res = f(mapped_addr.cast());
> +
> +        // This unmaps the page mapped above.
> +        //
> +        // SAFETY: Since this API takes the user code as a closure, it can only
> +        // be used in a manner where the pages are unmapped in reverse order.
> +        // This is as required by `kunmap_local`.
> +        //
> +        // In other words, if this call to `kunmap_local` happens when a
> +        // different page should be unmapped first, then there must necessarily
> +        // be a call to `kmap_local_page` other than the call just above in
> +        // `with_page_mapped` that made that possible. In this case, it is the
> +        // unsafe block that wraps that other call that is incorrect.
> +        unsafe { bindings::kunmap_local(mapped_addr) };
> +
> +        res
> +    }
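One more note on the `PAGE_MASK` constant above: for anyone reading
along, here is a quick standalone illustration of what the mask computes,
assuming 4 KiB pages (the real constants come from `bindings`):

```rust
// Standalone illustration, not kernel code: assuming PAGE_SHIFT = 12
// (4 KiB pages). The mask clears the low bits of an address, yielding
// the base address of the page that contains it.
const PAGE_SHIFT: usize = 12;
const PAGE_SIZE: usize = 1 << PAGE_SHIFT;
const PAGE_MASK: usize = !(PAGE_SIZE - 1); // with `rustfmt` spacing

const fn page_base(addr: usize) -> usize {
    addr & PAGE_MASK
}
```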
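And to spell out why the closure-based shape of `with_page_mapped` works
so well: the ordering argument in the safety comment can be modeled in
plain user-space Rust (this is only a model, not kernel code):

```rust
use std::cell::RefCell;

// Standalone model of the `kmap_local_page`/`kunmap_local` discipline:
// because the "unmap" step runs only after the closure returns, nested
// mappings are always released in reverse order of acquisition, which
// is exactly the LIFO ordering `kunmap_local` requires.
fn with_mapped<T>(log: &RefCell<Vec<String>>, name: &str, f: impl FnOnce() -> T) -> T {
    log.borrow_mut().push(format!("map {name}"));
    let res = f(); // user code runs while the "page" is mapped
    log.borrow_mut().push(format!("unmap {name}"));
    res
}
```

Nesting two calls records `map a`, `map b`, `unmap b`, `unmap a`: the
structure of the API makes it impossible to interleave the unmaps out of
order.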