On 02/27, Will Drewry wrote:
>
> On Mon, Feb 27, 2012 at 11:09 AM, Oleg Nesterov <oleg@xxxxxxxxxx> wrote:
>
> >> +static long seccomp_attach_filter(struct sock_fprog *fprog)
> >> +{
> >> +        struct seccomp_filter *filter;
> >> +        unsigned long fp_size = fprog->len * sizeof(struct sock_filter);
> >> +        long ret;
> >> +
> >> +        if (fprog->len == 0 || fprog->len > BPF_MAXINSNS)
> >> +                return -EINVAL;
> >
> > OK, this limits the memory PR_SET_SECCOMP can use.
> >
> > But,
> >
> >> +        /*
> >> +         * If there is an existing filter, make it the prev and don't drop its
> >> +         * task reference.
> >> +         */
> >> +        filter->prev = current->seccomp.filter;
> >> +        current->seccomp.filter = filter;
> >> +        return 0;
> >
> > this doesn't limit the number of filters, looks like a DoS.
> >
> > What if the application simply does prctl(PR_SET_SECCOMP, dummy_filter)
> > in an endless loop?
>
> It consumes a massive amount of kernel memory and, maybe, the OOM
> killer gives it a boot :)

Maybe ;) but most probably the oom-killer kills another innocent task,
since this memory is not accounted.

> I wasn't sure what the normal convention was for avoiding memory
> consumption by user processes. Should I just add a sysctl

Perhaps we can add a sysctl later, but personally I think we can start
with some "arbitrary" #define BPF_MAXFILTERS.

> and a
> per-task counter for the max number of filters?

Do we really need the counter? attach_filter is not the fast path;
perhaps seccomp_attach_filter() could simply iterate the chain and
count the filters.

In any case, if this hurts performance-wise, then seccomp_run_filters()
has even more problems.

> I'm fine doing whatever makes sense here.

I am fine either way too.

Oleg.
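
For illustration, here is a minimal sketch of the approach Oleg suggests
above: have seccomp_attach_filter() walk the existing ->prev chain and
reject the attach once an arbitrary cap is hit. The helper name, the
BPF_MAXFILTERS value, and the choice of error code are illustrative
assumptions, not part of the posted patch.

/* Arbitrary cap on stacked filters; the value is a placeholder. */
#define BPF_MAXFILTERS  32

/*
 * Hypothetical helper that would sit next to seccomp_attach_filter()
 * in kernel/seccomp.c. Attaching is not a fast path, so simply walking
 * the chain avoids the need for a per-task counter.
 */
static long seccomp_check_filter_count(void)
{
        struct seccomp_filter *walker;
        int count = 0;

        /* Count the filters already attached to this task. */
        for (walker = current->seccomp.filter; walker; walker = walker->prev)
                count++;

        /* -EINVAL is a guess; -ENOMEM or -E2BIG would also be arguable. */
        return count < BPF_MAXFILTERS ? 0 : -EINVAL;
}

seccomp_attach_filter() would call this before allocating the new filter,
so a process looping on prctl(PR_SET_SECCOMP, ...) fails once it reaches
the cap instead of consuming unaccounted kernel memory.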