On Tue, Jul 30, 2024 at 02:15:34PM +0200, Vlastimil Babka wrote:
> On 7/30/24 3:35 AM, Danilo Krummrich wrote:
> > On Mon, Jul 29, 2024 at 09:08:16PM +0200, Danilo Krummrich wrote:
> >> On Fri, Jul 26, 2024 at 10:05:47PM +0200, Danilo Krummrich wrote:
> >>> On Fri, Jul 26, 2024 at 04:37:43PM +0200, Vlastimil Babka wrote:
> >>>> On 7/22/24 6:29 PM, Danilo Krummrich wrote:
> >>>>> Implement vrealloc() analogous to krealloc().
> >>>>>
> >>>>> Currently, krealloc() requires the caller to pass the size of the
> >>>>> previous memory allocation, which, instead, should be self-contained.
> >>>>>
> >>>>> We attempt to fix this in a subsequent patch which, in order to do so,
> >>>>> requires vrealloc().
> >>>>>
> >>>>> Besides that, we need realloc() functions for kernel allocators in Rust
> >>>>> too. With `Vec` or `KVec` respectively, potentially growing (and
> >>>>> shrinking) data structures are rather common.
> >>>>>
> >>>>> Signed-off-by: Danilo Krummrich <dakr@xxxxxxxxxx>
> >>>>
> >>>> Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
> >>>>
> >>>>> --- a/mm/vmalloc.c
> >>>>> +++ b/mm/vmalloc.c
> >>>>> @@ -4037,6 +4037,65 @@ void *vzalloc_node_noprof(unsigned long size, int node)
> >>>>>  }
> >>>>>  EXPORT_SYMBOL(vzalloc_node_noprof);
> >>>>>
> >>>>> +/**
> >>>>> + * vrealloc - reallocate virtually contiguous memory; contents remain unchanged
> >>>>> + * @p: object to reallocate memory for
> >>>>> + * @size: the size to reallocate
> >>>>> + * @flags: the flags for the page level allocator
> >>>>> + *
> >>>>> + * The contents of the object pointed to are preserved up to the lesser of the
> >>>>> + * new and old size (__GFP_ZERO flag is effectively ignored).
> >>>>
> >>>> Well, technically not correct as we don't shrink. Get 8 pages, kvrealloc to
> >>>> 4 pages, kvrealloc back to 8 and the last 4 are not zeroed. But it's not
> >>>> new, kvrealloc() did the same before patch 2/2.
> >>>
> >>> Taking it (too) literally, it's not wrong. The contents of the object pointed to
> >>> are indeed preserved up to the lesser of the new and old size. It's just that
> >>> the rest may be "preserved" as well.
> >>>
> >>> I'm working on implementing shrink and grow for vrealloc(). In the meantime I
> >>> think we could probably just memset() spare memory to zero.
> >>
> >> Probably, this was a bad idea. Even with shrinking implemented we'd need to
> >> memset() potential spare memory of the last page to zero, when new_size <
> >> old_size.
> >>
> >> Analogously, the same would be true for krealloc() buckets. That's probably not
> >> worth it.
>
> I think it could remove unexpected bad surprises with the API so why not
> do it.

We'd either need to do it *every* time we shrink an allocation on spec, or we
only do it when shrinking with the __GFP_ZERO flag set, which might be a bit
counter-intuitive. If we do it, I'd probably vote for the latter semantics.
While it sounds more error-prone, it's less wasteful and enough to cover the
most common case, where the actual *realloc() calls are always made with the
same flags, but a changing size.
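
Just to illustrate what I mean with the latter semantics: a rough, untested
sketch of what the shrink branch of vrealloc() (quoted further down) could do;
whether the same is worth it for the krealloc() buckets is a separate question:

	if (size <= old_size) {
		/*
		 * Shrinking the vm_area is still a TODO, but if the caller
		 * asked for __GFP_ZERO, zero the now unused tail so that a
		 * later grow with __GFP_ZERO sees zeroed memory again.
		 */
		if (flags & __GFP_ZERO)
			memset((void *)p + size, 0, old_size - size);

		return (void *)p;
	}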

>
> >> I think we should indeed just document that __GFP_ZERO doesn't work for
> >> re-allocating memory and start to warn about it. As already mentioned, I think
> >> we should at least guarantee that *realloc(NULL, size, flags | __GFP_ZERO) is
> >> valid, i.e. WARN_ON(p && flags & __GFP_ZERO).
> >
> > Maybe I spoke a bit too soon with this last paragraph. I think continuously
> > growing something with __GFP_ZERO is a legitimate use case. I just did a quick
> > grep for users of krealloc() with __GFP_ZERO and found 18 matches.
> >
> > So, I think, at least for now, we should instead document that __GFP_ZERO is
> > only fully honored when the buffer is grown continuously (without intermediate
> > shrinking) and __GFP_ZERO is supplied in every iteration.
> >
> > In case I'm missing something here, and not even this case is safe, it looks like
> > we have 18 broken users of krealloc().
>
> +CC Feng Tang
>
> Let's say we kmalloc(56, __GFP_ZERO), we get an object from kmalloc-64
> cache. Since commit 946fa0dbf2d89 ("mm/slub: extend redzone check to
> extra allocated kmalloc space than requested") and preceding commits, if
> slub_debug is enabled (red zoning or user tracking), only the 56 bytes
> will be zeroed. The rest will be either unknown garbage, or redzone.
>
> Then we might e.g. krealloc(120) and get a kmalloc-128 object and 64
> bytes (result of ksize()) will be copied, including the garbage/redzone.
> I think it's fixable because when we do this in slub_debug, we also
> store the original size in the metadata, so we could read it back and
> adjust how many bytes are copied.
>
> Then we could guarantee that if __GFP_ZERO is used consistently on
> initial kmalloc() and on krealloc() and the user doesn't corrupt the
> extra space themselves (which is a bug anyway that the redzoning is
> supposed to catch) all will be fine.

Ok, so those 18 users are indeed currently broken, but only when slub_debug is
enabled (assuming that all of those are consistently growing with __GFP_ZERO).
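
For clarity, the pattern those users rely on is roughly the following
(hypothetical example, names made up):

	/* Grow a table and rely on krealloc() zeroing the new tail. */
	static int grow_table(u32 **table, size_t old_count, size_t new_count)
	{
		u32 *new;

		new = krealloc(*table, new_count * sizeof(**table),
			       GFP_KERNEL | __GFP_ZERO);
		if (!new)
			return -ENOMEM;

		/* Callers expect new[old_count..new_count - 1] to be zeroed. */
		*table = new;
		return 0;
	}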

>
> There might also be a KASAN side to this, I see poison_kmalloc_redzone()
> is also redzoning the area between requested size and cache's object_size?
>
> >>
> >>>
> >>> nommu would still use krealloc() though...
> >>>
> >>>>
> >>>> But it's also fundamentally not true for krealloc(), or kvrealloc()
> >>>> switching from a kmalloc to vmalloc. ksize() returns the size of the kmalloc
> >>>> bucket, we don't know what was the exact prior allocation size.
> >>>
> >>> Probably a stupid question, but can't we just zero the full bucket initially and
> >>> make sure to memset() spare memory in the bucket to zero when krealloc() is
> >>> called with new_size < ksize()?
> >>>
> >>>> Worse, we
> >>>> started poisoning the padding in debug configurations, so even a
> >>>> kmalloc(__GFP_ZERO) followed by krealloc(__GFP_ZERO) can give you unexpected
> >>>> poison now...
> >>>
> >>> As in writing magics directly to the spare memory in the bucket? Which would
> >>> then also be copied over to a new buffer in __do_krealloc()?
> >>>
> >>>>
> >>>> I guess we should just document __GFP_ZERO is not honored at all for
> >>>> realloc, and maybe even start warning :/ Hopefully nobody relies on that.
> >>>
> >>> I think it'd be great to make __GFP_ZERO work in all cases. However, if that's
> >>> really not possible, I'd prefer if we could at least guarantee that
> >>> *realloc(NULL, size, flags | __GFP_ZERO) is a valid call, i.e.
> >>> WARN_ON(p && flags & __GFP_ZERO).
> >>>
> >>>>
> >>>>> + *
> >>>>> + * If @p is %NULL, vrealloc() behaves exactly like vmalloc(). If @size is 0 and
> >>>>> + * @p is not a %NULL pointer, the object pointed to is freed.
> >>>>> + *
> >>>>> + * Return: pointer to the allocated memory; %NULL if @size is zero or in case of
> >>>>> + * failure
> >>>>> + */
> >>>>> +void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
> >>>>> +{
> >>>>> +	size_t old_size = 0;
> >>>>> +	void *n;
> >>>>> +
> >>>>> +	if (!size) {
> >>>>> +		vfree(p);
> >>>>> +		return NULL;
> >>>>> +	}
> >>>>> +
> >>>>> +	if (p) {
> >>>>> +		struct vm_struct *vm;
> >>>>> +
> >>>>> +		vm = find_vm_area(p);
> >>>>> +		if (unlikely(!vm)) {
> >>>>> +			WARN(1, "Trying to vrealloc() nonexistent vm area (%p)\n", p);
> >>>>> +			return NULL;
> >>>>> +		}
> >>>>> +
> >>>>> +		old_size = get_vm_area_size(vm);
> >>>>> +	}
> >>>>> +
> >>>>> +	if (size <= old_size) {
> >>>>> +		/*
> >>>>> +		 * TODO: Shrink the vm_area, i.e. unmap and free unused pages.
> >>>>> +		 * What would be a good heuristic for when to shrink the
> >>>>> +		 * vm_area?
> >>>>> +		 */
> >>>>> +		return (void *)p;
> >>>>> +	}
> >>>>> +
> >>>>> +	/* TODO: Grow the vm_area, i.e. allocate and map additional pages. */
> >>>>> +	n = __vmalloc_noprof(size, flags);
> >>>>> +	if (!n)
> >>>>> +		return NULL;
> >>>>> +
> >>>>> +	if (p) {
> >>>>> +		memcpy(n, p, old_size);
> >>>>> +		vfree(p);
> >>>>> +	}
> >>>>> +
> >>>>> +	return n;
> >>>>> +}
> >>>>> +
> >>>>>  #if defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA32)
> >>>>>  #define GFP_VMALLOC32	(GFP_DMA32 | GFP_KERNEL)
> >>>>>  #elif defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA)
> >>>>
>
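
As a side note, my mental model of the documented semantics above is roughly
the following (purely illustrative, not part of the patch):

	static int example(void)
	{
		void *buf, *tmp;

		/* @p == NULL: behaves exactly like vmalloc(). */
		buf = vrealloc(NULL, 8 * PAGE_SIZE, GFP_KERNEL);
		if (!buf)
			return -ENOMEM;

		/* Grow: the first 8 pages are preserved; buf stays valid if this fails. */
		tmp = vrealloc(buf, 16 * PAGE_SIZE, GFP_KERNEL);
		if (!tmp) {
			vfree(buf);
			return -ENOMEM;
		}
		buf = tmp;

		/* @size == 0: frees the object and returns NULL. */
		buf = vrealloc(buf, 0, GFP_KERNEL);

		return 0;
	}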