On Wed, Nov 13, 2019 at 12:38 PM Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
>
> On 11/13/19 4:15 AM, Andrii Nakryiko wrote:
> > Add ability to memory-map contents of BPF array map. This is extremely useful
> > for working with BPF global data from userspace programs. It allows avoiding
> > typical bpf_map_{lookup,update}_elem operations, improving both performance
> > and usability.
> >
> > There had to be special considerations for map freezing, to avoid having a
> > writable memory view into a frozen map. To solve this issue, map freezing and
> > mmap-ing now happen under a mutex:
> > - if map is already frozen, no writable mapping is allowed;
> > - if map has writable memory mappings active (accounted in map->writecnt),
> >   map freezing will keep failing with -EBUSY;
> > - once number of writable memory mappings drops to zero, map freezing can be
> >   performed again.
> >
> > Only non-per-CPU plain arrays are supported right now. Maps with spinlocks
> > can't be memory-mapped either.
> >
> > For a BPF_F_MMAPABLE array, memory allocation has to be done through vmalloc()
> > to be mmap()'able. We also need to make sure that array data memory is
> > page-sized and page-aligned, so we over-allocate memory in such a way that
> > struct bpf_array is at the end of a single page of memory with array->value
> > being aligned with the start of the second page. On deallocation we need to
> > accommodate this memory arrangement to free vmalloc()'ed memory correctly.
> >
> > Cc: Rik van Riel <riel@xxxxxxxxxxx>
> > Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> > Acked-by: Song Liu <songliubraving@xxxxxx>
> > Signed-off-by: Andrii Nakryiko <andriin@xxxxxx>
>
> Overall set looks good to me! One comment below:
>
> [...]
> > @@ -117,7 +131,20 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
> >  		return ERR_PTR(ret);
> >
> >  	/* allocate all map elements and zero-initialize them */
> > -	array = bpf_map_area_alloc(array_size, numa_node);
> > +	if (attr->map_flags & BPF_F_MMAPABLE) {
> > +		void *data;
> > +
> > +		/* kmalloc'ed memory can't be mmap'ed, use explicit vmalloc */
> > +		data = vzalloc_node(array_size, numa_node);
> > +		if (!data) {
> > +			bpf_map_charge_finish(&mem);
> > +			return ERR_PTR(-ENOMEM);
> > +		}
> > +		array = data + round_up(sizeof(struct bpf_array), PAGE_SIZE)
> > +			- offsetof(struct bpf_array, value);
> > +	} else {
> > +		array = bpf_map_area_alloc(array_size, numa_node);
> > +	}
>
> Can't we place/extend all this logic inside the bpf_map_area_alloc() and
> bpf_map_area_free() APIs instead of hard-coding it here?
>
> Given this is a generic feature of which global data is just one consumer,
> my concern is that this reintroduces the issues that the mentioned API was
> already trying to solve, namely failing early instead of trying hard and
> triggering the OOM killer if the array is large.
>
> Consolidating this into bpf_map_area_alloc()/bpf_map_area_free() would
> make sure all the rest has the same semantics.

So a bunch of this (e.g., the array pointer adjustment in the mmapable case)
depends on the specific layout of struct bpf_array, while bpf_map_area_alloc()
is called for a multitude of different maps. What we can generalize, though,
is the use of vmalloc() for the mmapable case: enforcing that the size is a
multiple of PAGE_SIZE, bypassing kmalloc(), etc. I can do that part easily; I
refrained because it would require an extra bool mmapable flag on
bpf_map_area_alloc() and a (trivial) update to 13 call sites to pass false,
and I wasn't sure people would like the code churn.
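
To make that concrete, here is roughly what I have in mind for the alloc
side. Just a sketch, not the final patch: it assumes the extra bool mmapable
flag and a new vmalloc_user_node_flags() helper (more on that below), and
keeps the existing fail-early GFP semantics:

void *bpf_map_area_alloc(u64 size, int numa_node, bool mmapable)
{
	const gfp_t flags = __GFP_NOWARN | __GFP_ZERO;
	void *area;

	if (size >= SIZE_MAX)
		return NULL;

	/* kmalloc()'ed memory can't be mmap()'ed, go straight to
	 * vmalloc and insist on a page-aligned size
	 */
	if (mmapable) {
		BUG_ON(!PAGE_ALIGNED(size));
		return vmalloc_user_node_flags(size, numa_node, GFP_KERNEL |
					       __GFP_RETRY_MAYFAIL | flags);
	}
	/* existing behavior: try kmalloc for small sizes, but fail
	 * early instead of retrying hard and triggering the OOM killer
	 */
	if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) {
		area = kmalloc_node(size, GFP_USER | __GFP_NORETRY | flags,
				    numa_node);
		if (area)
			return area;
	}
	return __vmalloc_node_flags_caller(size, numa_node, GFP_KERNEL |
					   __GFP_RETRY_MAYFAIL | flags,
					   __builtin_return_address(0));
}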
As for bpf_map_area_free(), again, the adjustment is specific to struct
bpf_array and its memory layout w.r.t. data placement, so I don't think we
can generalize it that much.

After talking with Johannes, I'm also adding a new vmalloc_user_node_flags()
API and will specify the same RETRY_MAYFAIL and NOWARN flags, so the behavior
will stay the same.

Let me know if you want `bool mmapable` added to bpf_map_area_alloc(). Also,
if I'm missing how you wanted to generalize other parts, please explain in
more detail.

>
> >  	if (!array) {
> >  		bpf_map_charge_finish(&mem);
> >  		return ERR_PTR(-ENOMEM);
> > @@ -365,7 +392,10 @@ static void array_map_free(struct bpf_map *map)
> >  	if (array->map.map_type == BPF_MAP_TYPE_PERCPU_ARRAY)
> >  		bpf_array_free_percpu(array);
> >
> > -	bpf_map_area_free(array);
> > +	if (array->map.map_flags & BPF_F_MMAPABLE)
> > +		bpf_map_area_free((void *)round_down((long)array, PAGE_SIZE));
> > +	else
> > +		bpf_map_area_free(array);
> >  }
> >
> >  static void array_map_seq_show_elem(struct bpf_map *map, void *key,
> [...]
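
To spell out why this free-side adjustment resists generalization: the alloc
and free paths have to agree on the bpf_array-specific layout. Expressed as
helpers (hypothetical names, purely illustrative; the patch open-codes both
sides in arraymap.c):

/* alloc side: place struct bpf_array at the end of the first page so
 * that array->value lands exactly at the second, page-aligned, page
 */
static struct bpf_array *data_to_array(void *data)
{
	return data + round_up(sizeof(struct bpf_array), PAGE_SIZE)
		    - offsetof(struct bpf_array, value);
}

/* free side: struct bpf_array lives entirely within the first page,
 * so rounding down recovers the pointer returned by vzalloc_node()
 */
static void *array_to_data(struct bpf_array *array)
{
	return (void *)round_down((unsigned long)array, PAGE_SIZE);
}

A generic bpf_map_area_free() only ever sees the adjusted pointer, so it
would have to know that this particular map type hides the real vmalloc()
base a round_down() away, which is exactly the map-specific knowledge I'd
rather keep inside arraymap.c.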