On Sun, Mar 10, 2024 at 7:43 PM Alexei Starovoitov
<alexei.starovoitov@xxxxxxxxx> wrote:
>
> On Sun, Mar 10, 2024 at 3:05 PM Puranjay Mohan <puranjay12@xxxxxxxxx> wrote:
> >
> > Hi Alexei,
> >
> > On Sat, Mar 9, 2024 at 5:38 PM Alexei Starovoitov
> > <alexei.starovoitov@xxxxxxxxx> wrote:
> > >
> > > Puranjay,
> > >
> > > Looks like I have to drop this patch for now,
> > > since PMD_SIZE is not a constant on all archs
> > > that can be evaluated by a preprocessor.
> > >
> > > We need to find a different way.
> > >
> >
> > How about we define it like:
> >
> > -#define BPF_PROG_PACK_SIZE (PMD_SIZE * num_possible_nodes())
> > +/*
> > + * PMD_SIZE is really big for some archs. It doesn't make sense to
> > + * reserve too much memory in one allocation. Cap BPF_PROG_PACK_SIZE to
> > + * 2MiB * num_possible_nodes().
> > + */
> > +#define BPF_PROG_PACK_SIZE ((PMD_SIZE <= (1 << 21)) ? (PMD_SIZE * num_possible_nodes()) \
> > +                                                    : ((1 << 21) * num_possible_nodes()))
> >
> > So, it will be computed at runtime. This adds more performance
> > overhead, but this is called very infrequently, so it shouldn't matter.
> >
> > Or we can hardcode it as `2MiB * num_possible_nodes()`, because
> > the PMD size on most architectures will be 2MiB or larger, but never smaller.
>
> I think hard coding is cleaner and less surprising.
> I'd like to hear what Song thinks about it.

Agreed. Let's just hard code it to 2MiB * num_possible_nodes().

Thanks,
Song
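
[Editor's note: a minimal sketch of the hard-coded variant agreed on above,
assuming SZ_2M from <linux/sizes.h> and num_possible_nodes() from
<linux/nodemask.h>; an illustration of the idea, not necessarily the patch
as ultimately applied.]

    /*
     * Sketch only: hard-code the pack size to 2MiB per possible NUMA node
     * instead of deriving it from PMD_SIZE, since PMD_SIZE is not a
     * preprocessor-evaluable constant on every architecture and can be
     * much larger than 2MiB on some of them.
     */
    #include <linux/sizes.h>
    #include <linux/nodemask.h>

    #define BPF_PROG_PACK_SIZE (SZ_2M * num_possible_nodes())

Because num_possible_nodes() is evaluated at runtime, the size is still
computed when a pack is allocated, but that path is hit rarely enough that
the cost is negligible, as noted in the thread.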