On Thu, Sep 24, 2020 at 07:44:19AM -0500, YiFei Zhu wrote:
> From: YiFei Zhu <yifeifz2@xxxxxxxxxxxx>
>
> The fast (common) path for seccomp should be that the filter permits
> the syscall to pass through, and failing seccomp is expected to be
> an exceptional case; it is not expected for userspace to call a
> denylisted syscall over and over.
>
> This first finds the current allow bitmask by iterating through the
> syscall_arches[] array and comparing it to the one in struct
> seccomp_data; this loop is expected to be unrolled. It then
> does a test_bit against the bitmask. If the bit is set, then
> there is no need to run the full filter; it returns
> SECCOMP_RET_ALLOW immediately.
>
> Co-developed-by: Dimitrios Skarlatos <dskarlat@xxxxxxxxxx>
> Signed-off-by: Dimitrios Skarlatos <dskarlat@xxxxxxxxxx>
> Signed-off-by: YiFei Zhu <yifeifz2@xxxxxxxxxxxx>
> ---
>  kernel/seccomp.c | 37 +++++++++++++++++++++++++++++++++++++
>  1 file changed, 37 insertions(+)
>
> diff --git a/kernel/seccomp.c b/kernel/seccomp.c
> index 20d33378a092..ac0266b6d18a 100644
> --- a/kernel/seccomp.c
> +++ b/kernel/seccomp.c
> @@ -167,6 +167,12 @@ static inline void seccomp_cache_inherit(struct seccomp_filter *sfilter,
>  						  const struct seccomp_filter *prev)
>  {
>  }
> +
> +static inline bool seccomp_cache_check(const struct seccomp_filter *sfilter,
> +				       const struct seccomp_data *sd)
> +{
> +	return false;
> +}
>  #endif /* CONFIG_SECCOMP_CACHE_NR_ONLY */
>
>  /**
> @@ -321,6 +327,34 @@ static int seccomp_check_filter(struct sock_filter *filter, unsigned int flen)
>  	return 0;
>  }
>
> +#ifdef CONFIG_SECCOMP_CACHE_NR_ONLY
> +/**
> + * seccomp_cache_check - lookup seccomp cache
> + * @sfilter: The seccomp filter
> + * @sd: The seccomp data to lookup the cache with
> + *
> + * Returns true if the seccomp_data is cached and allowed.
> + */
> +static bool seccomp_cache_check(const struct seccomp_filter *sfilter,
> +				const struct seccomp_data *sd)
> +{
> +	int syscall_nr = sd->nr;
> +	int arch;
> +
> +	if (unlikely(syscall_nr < 0 || syscall_nr >= NR_syscalls))
> +		return false;

This protects us from x32 (i.e. syscall_nr will have the 0x40000000
bit set), but given the effort needed to support compat, I think
supporting x32 isn't much more work. (Though again, I note that
NR_syscalls differs per arch, so this test needs to be per-arch, and
obviously happen after arch discovery.)

That said, if it really does turn out that x32 is literally the only
architecture doing these shenanigans (and I suspect not, given the
MIPS case), okay, fine, I'll give in. :) You and Jann both seem to
think this isn't worth it.

> +
> +	for (arch = 0; arch < ARRAY_SIZE(syscall_arches); arch++) {
> +		if (likely(syscall_arches[arch] == sd->arch))

I think this linear search for the matching arch can be made O(1)
(this is what I was trying to do in v1): we can map all possible
combos to a distinct bitmap, so there is just math and a lookup rather
than a linear compare search. In the one-arch case, it can also easily
collapse into a no-op (though my v1 didn't do this correctly).
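Something like this untested sketch, maybe (the SECCOMP_ARCH_NATIVE
value, the seccomp_cache_arch_index() helper, and the
seccomp_cache_nr_max[] bounds table are names I'm inventing purely for
illustration, not anything this series defines):

static inline int seccomp_cache_arch_index(u32 arch)
{
#ifdef CONFIG_COMPAT
	/* Two known arch values map to slots 0 and 1: math, no loop. */
	return arch == SECCOMP_ARCH_NATIVE ? 0 : 1;
#else
	/* One-arch case: constant-folds to 0, i.e. collapses to a no-op. */
	return 0;
#endif
}

static bool seccomp_cache_check(const struct seccomp_filter *sfilter,
				const struct seccomp_data *sd)
{
	int idx = seccomp_cache_arch_index(sd->arch);

	/* Bound the syscall number per arch, after arch discovery. */
	if (unlikely(sd->nr < 0 || sd->nr >= seccomp_cache_nr_max[idx]))
		return false;

	return test_bit(sd->nr, sfilter->cache.syscall_ok[idx]);
}

That also folds the per-arch NR_syscalls bound in naturally, and the
WARN_ON_ONCE() fallthrough goes away since every sd->arch value the
kernel can hand us gets a slot.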
> +			return test_bit(syscall_nr,
> +					sfilter->cache.syscall_ok[arch]);
> +	}
> +
> +	WARN_ON_ONCE(true);
> +	return false;
> +}
> +#endif /* CONFIG_SECCOMP_CACHE_NR_ONLY */
> +
>  /**
>   * seccomp_run_filters - evaluates all seccomp filters against @sd
>   * @sd: optional seccomp data to be passed to filters
> @@ -343,6 +377,9 @@ static u32 seccomp_run_filters(const struct seccomp_data *sd,
>  	if (WARN_ON(f == NULL))
>  		return SECCOMP_RET_KILL_PROCESS;
>
> +	if (seccomp_cache_check(f, sd))
> +		return SECCOMP_RET_ALLOW;
> +
>  	/*
>  	 * All filters in the list are evaluated and the lowest BPF return
>  	 * value always takes priority (ignoring the DATA).
> --
> 2.28.0
>

-- 
Kees Cook