Sorry for the late reply. I just came back from my paternity leave :)

On 2023/04/05 20:26, Hyeonggon Yoo wrote:
> On 3/15/2023 6:54 PM, GONG, Ruiqi wrote:
>> When exploiting memory vulnerabilities, "heap spraying" is a common
>> technique targeting those related to dynamic memory allocation (i.e. the
>> "heap"), and it plays an important role in a successful exploitation.
>> Basically, it overwrites the memory area of a vulnerable object by
>> triggering allocations in other subsystems or modules and thereby
>> getting a reference to the targeted memory location. It is usable
>> against various types of vulnerability, including use-after-free (UAF),
>> heap out-of-bounds write, etc.
>>
>> There are (at least) two reasons why the heap can be sprayed: 1) generic
>> slab caches are shared among different subsystems and modules, and
>> 2) dedicated slab caches could be merged with the generic ones.
>> Currently these two factors cannot be prevented at a low cost: the first
>> one is a widely used memory allocation mechanism, and shutting down slab
>> merging completely via `slub_nomerge` would be overkill.
>>
>> To efficiently prevent heap spraying, we propose the following approach:
>> create multiple copies of generic slab caches that will never be
>> merged, and use a random one of them at allocation time. The random
>> selection is based on the location of the code that calls `kmalloc()`,
>> which means it is static at runtime (rather than determined dynamically
>> on each allocation, which could be bypassed by repeated spraying in
>> brute force). In this way, the vulnerable object and memory allocated
>> in other subsystems and modules will (most probably) be on different
>> slab caches, which prevents the object from being sprayed.
>>
>> Signed-off-by: GONG, Ruiqi <gongruiqi1@xxxxxxxxxx>
>> ---
>
> I'm not yet sure if this feature is appropriate for the mainline kernel.
>
> I have a few questions:
>
> 1) What is the cost of this configuration, in terms of memory overhead or
> execution time?

I haven't done a thorough test on the runtime overhead yet, but in theory
it won't be large, because in essence what it does is create some
additional `struct kmem_cache` instances and spread the management of slab
objects from the original single cache across all of these caches. But
indeed the test is necessary. I will do it based on the v2 patch.

> 2) The actual cache depends on the caller, which is static at build time,
> not runtime.
>
> What about using (caller ^ (some subsystem-wide random sequence)),
> which is static at runtime?

Yes, that could be better. As I said in my reply to Alexander, I will add
the per-boot random seed in v2, and I think it's basically the "(some
subsystem-wide random sequence)" you mentioned here.
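
To illustrate what I mean by combining the caller address with a per-boot
seed, here is a rough sketch only; the names (COPY_COUNT, kmalloc_rnd_seed,
kmalloc_copy_index) are made up for illustration and are not the actual v2
code:

#include <linux/hash.h>
#include <linux/log2.h>
#include <linux/random.h>

#define COPY_COUNT 16				/* unmergeable copies per size class */

/* Seeded once during early slab init, e.g. kmalloc_rnd_seed = get_random_u64(); */
static u64 kmalloc_rnd_seed __ro_after_init;

/*
 * Map a kmalloc() call site (e.g. _RET_IP_) to one of the cache copies.
 * The caller address is fixed at build time; XORing it with the per-boot
 * seed keeps the mapping static within one boot but different across boots.
 */
static inline unsigned int kmalloc_copy_index(unsigned long caller)
{
	return hash_64((u64)caller ^ kmalloc_rnd_seed, ilog2(COPY_COUNT));
}

The allocation path would then index the array of unmergeable cache copies
with kmalloc_copy_index(_RET_IP_) instead of always using the single
generic cache for that size class.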