Patch "devmap: Use bpf_map_area_alloc() for allocating hash buckets" has been added to the 5.4-stable tree

This is a note to let you know that I've just added the patch titled

    devmap: Use bpf_map_area_alloc() for allocating hash buckets

to the 5.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     devmap-use-bpf_map_area_alloc-for-allocating-hash-bu.patch
and it can be found in the queue-5.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit abffbb10a41e1b6608341a913e9f5f0ea6ae5974
Author: Toke Høiland-Jørgensen <toke@xxxxxxxxxx>
Date:   Tue Jun 16 16:28:29 2020 +0200

    devmap: Use bpf_map_area_alloc() for allocating hash buckets
    
    [ Upstream commit 99c51064fb06146b3d494b745c947e438a10aaa7 ]
    
    Syzkaller discovered that creating a hash of type devmap_hash with a large
    number of entries can hit the memory allocator limit for allocating
    contiguous memory regions. There's really no reason to use kmalloc_array()
    directly in the devmap code, so just switch it to the existing
    bpf_map_area_alloc() function that is used elsewhere.
    
    Fixes: 6f9d451ab1a3 ("xdp: Add devmap_hash map type for looking up devices by hashed index")
    Reported-by: Xiumei Mu <xmu@xxxxxxxxxx>
    Signed-off-by: Toke Høiland-Jørgensen <toke@xxxxxxxxxx>
    Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
    Acked-by: John Fastabend <john.fastabend@xxxxxxxxx>
    Link: https://lore.kernel.org/bpf/20200616142829.114173-1-toke@xxxxxxxxxx
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
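
For context: the bucket array whose allocation syzkaller pushed past the allocator's limit is sized purely from the map's max_entries attribute at map-creation time. A minimal userspace sketch of that creation path (not part of the patch; it assumes a libbpf recent enough to provide bpf_map_create(), and the map name and entry count are arbitrary illustration values) could look like this:

/* Illustration only -- not part of the patch.
 * Assumes libbpf >= 0.7 (bpf_map_create()); build with -lbpf and run
 * with CAP_BPF/CAP_NET_ADMIN (or as root).
 */
#include <stdio.h>
#include <bpf/bpf.h>

int main(void)
{
	/* DEVMAP_HASH keys and values are both 32-bit (index / ifindex).
	 * A large max_entries turns into a correspondingly large
	 * hash-bucket array in the kernel; before this patch that array
	 * came from kmalloc_array() and therefore had to be physically
	 * contiguous.
	 */
	int fd = bpf_map_create(BPF_MAP_TYPE_DEVMAP_HASH, "devmap_demo",
				sizeof(__u32), sizeof(__u32),
				1 << 20 /* deliberately large */, NULL);
	if (fd < 0) {
		fprintf(stderr, "bpf_map_create failed: %d\n", fd);
		return 1;
	}
	printf("created devmap_hash map, fd=%d\n", fd);
	return 0;
}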

diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index b4b6b77f309c6..6684696fa4571 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -88,12 +88,13 @@ struct bpf_dtab {
 static DEFINE_SPINLOCK(dev_map_lock);
 static LIST_HEAD(dev_map_list);
 
-static struct hlist_head *dev_map_create_hash(unsigned int entries)
+static struct hlist_head *dev_map_create_hash(unsigned int entries,
+					      int numa_node)
 {
 	int i;
 	struct hlist_head *hash;
 
-	hash = kmalloc_array(entries, sizeof(*hash), GFP_KERNEL);
+	hash = bpf_map_area_alloc(entries * sizeof(*hash), numa_node);
 	if (hash != NULL)
 		for (i = 0; i < entries; i++)
 			INIT_HLIST_HEAD(&hash[i]);
@@ -151,7 +152,8 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
 		INIT_LIST_HEAD(per_cpu_ptr(dtab->flush_list, cpu));
 
 	if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
-		dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets);
+		dtab->dev_index_head = dev_map_create_hash(dtab->n_buckets,
+							   dtab->map.numa_node);
 		if (!dtab->dev_index_head)
 			goto free_percpu;
 
@@ -249,7 +251,7 @@ static void dev_map_free(struct bpf_map *map)
 			}
 		}
 
-		kfree(dtab->dev_index_head);
+		bpf_map_area_free(dtab->dev_index_head);
 	} else {
 		for (i = 0; i < dtab->map.max_entries; i++) {
 			struct bpf_dtab_netdev *dev;
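
As background on why the switch resolves the problem: bpf_map_area_alloc() is the BPF core's allocator for potentially large, user-sized map areas. Conceptually it attempts a regular kmalloc for modest sizes and falls back to vmalloc for bigger ones, so the bucket array no longer has to be physically contiguous; bpf_map_area_free() then frees either kind via kvfree(). A simplified sketch of that fallback pattern (for illustration only; the real helpers live in kernel/bpf/syscall.c and additionally handle flags and accounting) is:

/* Simplified sketch of the fallback pattern behind bpf_map_area_alloc();
 * not the exact upstream implementation.
 */
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <linux/mm.h>

static void *map_area_alloc_sketch(size_t size, int numa_node)
{
	/* Small areas: a physically contiguous kmalloc is fine and cheap.
	 * __GFP_NOWARN keeps a failed oversized attempt quiet.
	 */
	if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) {
		void *area = kmalloc_node(size,
					  GFP_USER | __GFP_NOWARN | __GFP_ZERO,
					  numa_node);
		if (area)
			return area;
	}

	/* Large areas: vmalloc only needs virtually contiguous memory,
	 * so it is not bounded by the page allocator's contiguous-order
	 * limit that syzkaller hit.
	 */
	return vzalloc_node(size, numa_node);
}

static void map_area_free_sketch(void *area)
{
	/* kvfree() frees both kmalloc'ed and vmalloc'ed memory, which is
	 * why a single free helper covers either path above.
	 */
	kvfree(area);
}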


