On Wed, Aug 14, 2024 at 04:32:34PM +0000, Benno Lossin wrote:
> On 12.08.24 20:22, Danilo Krummrich wrote:
> > Implement `Allocator` for `Vmalloc`, the kernel's virtually contiguous
> > allocator, typically used for larger objects, (much) larger than page
> > size.
> > 
> > All memory allocations made with `Vmalloc` end up in `vrealloc()`.
> > 
> > Reviewed-by: Alice Ryhl <aliceryhl@xxxxxxxxxx>
> > Signed-off-by: Danilo Krummrich <dakr@xxxxxxxxxx>
> > ---
> >  rust/helpers.c                      |  7 +++++++
> >  rust/kernel/alloc/allocator.rs      | 28 ++++++++++++++++++++++++++++
> >  rust/kernel/alloc/allocator_test.rs |  1 +
> >  3 files changed, 36 insertions(+)
> > 
> > diff --git a/rust/helpers.c b/rust/helpers.c
> > index 9f7275493365..7406943f887d 100644
> > --- a/rust/helpers.c
> > +++ b/rust/helpers.c
> > @@ -33,6 +33,7 @@
> >  #include <linux/sched/signal.h>
> >  #include <linux/slab.h>
> >  #include <linux/spinlock.h>
> > +#include <linux/vmalloc.h>
> >  #include <linux/wait.h>
> >  #include <linux/workqueue.h>
> >  
> > @@ -199,6 +200,12 @@ void *rust_helper_krealloc(const void *objp, size_t new_size, gfp_t flags)
> >  }
> >  EXPORT_SYMBOL_GPL(rust_helper_krealloc);
> >  
> > +void *rust_helper_vrealloc(const void *p, size_t size, gfp_t flags)
> > +{
> > +	return vrealloc(p, size, flags);
> > +}
> > +EXPORT_SYMBOL_GPL(rust_helper_vrealloc);
> > +
> >  /*
> >   * `bindgen` binds the C `size_t` type as the Rust `usize` type, so we can
> >   * use it in contexts where Rust expects a `usize` like slice (array) indices.
> > diff --git a/rust/kernel/alloc/allocator.rs b/rust/kernel/alloc/allocator.rs
> > index b46883d87715..fdda22c6983f 100644
> > --- a/rust/kernel/alloc/allocator.rs
> > +++ b/rust/kernel/alloc/allocator.rs
> > @@ -9,6 +9,7 @@
> >  
> >  use crate::alloc::{AllocError, Allocator};
> >  use crate::bindings;
> > +use crate::pr_warn;
> >  
> >  /// The contiguous kernel allocator.
> >  ///
> > @@ -16,6 +17,12 @@
> >  /// `bindings::krealloc`.
> >  pub struct Kmalloc;
> >  
> > +/// The virtually contiguous kernel allocator.
> > +///
> > +/// The vmalloc allocator allocates pages from the page level allocator and maps them into the
> > +/// contiguous kernel virtual space.
> > +pub struct Vmalloc;
> > +
> >  /// Returns a proper size to alloc a new object aligned to `new_layout`'s alignment.
> >  fn aligned_size(new_layout: Layout) -> usize {
> >      // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first.
> > @@ -55,6 +62,9 @@ impl ReallocFunc {
> >      // INVARIANT: `krealloc` satisfies the type invariants.
> >      const KREALLOC: Self = Self(bindings::krealloc);
> >  
> > +    // INVARIANT: `vrealloc` satisfies the type invariants.
> > +    const VREALLOC: Self = Self(bindings::vrealloc);
> > +
> >      /// # Safety
> >      ///
> >      /// This method has the same safety requirements as `Allocator::realloc`.
> > @@ -132,6 +142,24 @@ unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut u8 {
> >      }
> >  }
> >  
> > +unsafe impl Allocator for Vmalloc {
> 
> Missing SAFETY comment.
> 
> > +    unsafe fn realloc(
> 
> Does this need `#[inline]`?

Given that this function does little more than call `ReallocFunc::VREALLOC.call`,
inlining it seems reasonable.

> 
> > +        ptr: Option<NonNull<u8>>,
> > +        layout: Layout,
> > +        flags: Flags,
> > +    ) -> Result<NonNull<[u8]>, AllocError> {
> > +        // TODO: Support alignments larger than PAGE_SIZE.
> > +        if layout.align() > bindings::PAGE_SIZE {
> > +            pr_warn!("Vmalloc does not support alignments larger than PAGE_SIZE yet.\n");
> > +            return Err(AllocError);
> 
> I think here we should first try to use `build_error!`, most often the
> alignment will be specified statically, so it should get optimized away.

Sure, we can try that first. (A rough sketch of what that could look like is
appended at the end of this mail.)

> 
> How difficult will it be to support this? (it is a weird requirement,
> but I dislike just returning an error...)

It's not difficult to support at all. But it requires a C API taking an
alignment argument (same for `KVmalloc`).

Coming up with a vrealloc_aligned() is rather trivial. kvrealloc_aligned()
would be a bit weird though, because the alignment argument could only
really be honored if we run into the vrealloc() case. For the krealloc()
case it'd still depend on the bucket size that is selected for the
requested size.

If we add the C API, I'm also pretty sure someone's going to ask what we
need an alignment larger than PAGE_SIZE for and whether we have a real use
case for that. I'm not entirely sure we have a reasonable answer to that.

I've got some hacked-up patches for that, but I'd rather polish and send
them once we actually need it.

> 
> ---
> Cheers,
> Benno
> 
> > +        }
> > +
> > +        // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously
> > +        // allocated with this `Allocator`.
> > +        unsafe { ReallocFunc::VREALLOC.call(ptr, layout, flags) }
> > +    }
> > +}
> > +
> >  #[global_allocator]
> >  static ALLOCATOR: Kmalloc = Kmalloc;
> >  
> > diff --git a/rust/kernel/alloc/allocator_test.rs b/rust/kernel/alloc/allocator_test.rs
> > index 4785efc474a7..e7bf2982f68f 100644
> > --- a/rust/kernel/alloc/allocator_test.rs
> > +++ b/rust/kernel/alloc/allocator_test.rs
> > @@ -7,6 +7,7 @@
> >  use core::ptr::NonNull;
> >  
> >  pub struct Kmalloc;
> > +pub type Vmalloc = Kmalloc;
> >  
> >  unsafe impl Allocator for Kmalloc {
> >      unsafe fn realloc(
> > -- 
> > 2.45.2
> > 
> 
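
As mentioned above, here's a rough, untested sketch of the `build_error!`
variant of `Vmalloc::realloc`, also picking up the `#[inline]` and SAFETY
comment remarks. Imports / macro paths are glossed over, and it relies on
the compiler being able to prove the branch dead whenever the alignment is
a compile-time constant not larger than PAGE_SIZE; whether that holds for
all current users is something we'd still have to verify.

// SAFETY: `realloc` delegates to `ReallocFunc::call`, which is expected to
// uphold the guarantees required by `Allocator` (same argument as for
// `Kmalloc`).
unsafe impl Allocator for Vmalloc {
    #[inline]
    unsafe fn realloc(
        ptr: Option<NonNull<u8>>,
        layout: Layout,
        flags: Flags,
    ) -> Result<NonNull<[u8]>, AllocError> {
        // TODO: Support alignments larger than PAGE_SIZE.
        //
        // `build_error!` turns into a build failure if this branch can't be
        // proven dead, i.e. if the alignment isn't statically known to be at
        // most PAGE_SIZE.
        if layout.align() > bindings::PAGE_SIZE {
            build_error!("Vmalloc does not support alignments larger than PAGE_SIZE yet.");
        }

        // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory,
        // which was previously allocated with this `Allocator`.
        unsafe { ReallocFunc::VREALLOC.call(ptr, layout, flags) }
    }
}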