On Thu, Oct 22, 2020 at 06:40:08PM -0500, YiFei Zhu wrote:
> On Thu, Oct 22, 2020 at 5:32 PM Kees Cook <keescook@xxxxxxxxxxxx> wrote:
> > I've been going back and forth on this, and I think what I've settled
> > on is I'd like to avoid new CONFIG dependencies just for this feature.
> > Instead, how about we just fill in SECCOMP_NATIVE and SECCOMP_COMPAT
> > for all the HAVE_ARCH_SECCOMP_FILTER architectures, and then the
> > cache reporting can be cleanly tied to CONFIG_SECCOMP_FILTER? It
> > should be relatively simple to extract those details and make
> > SECCOMP_ARCH_{NATIVE,COMPAT}_NAME part of the per-arch enabling
> > patches?
>
> Hmm. So I could enable the cache logic for every architecture (one
> patch per arch) that does not have sparse syscall numbers, and then
> have the proc reporting after the arch patches? I could do that.
> I don't have test machines to run anything other than x86_64 or ia32,
> so those patches will need a closer look by people more familiar with
> the other arches.

Cool, yes please. It looks like MIPS will need to be skipped for now.
I would then have the debug cache reporting patch depend on
!CONFIG_HAVE_SPARSE_SYSCALL_NR.

> > I'd still like to get more specific workload performance numbers too.
> > The microbenchmark is nice, but getting things like build times under
> > docker's default seccomp filter, etc. would be lovely. I've almost
> > gotten there, but my benchmarks are still really noisy and CPU
> > isolation continues to frustrate me. :)
>
> Ok, let me know if I can help.

Do you have a test environment where you can compare the before/after
of repeated kernel build times (or some other sufficiently
complex/interesting workload) under these conditions:

  bare metal

  docker w/ seccomp policy disabled

  docker w/ default seccomp policy

This is what I've been trying to construct, but it's really noisy;
I've been trying to pin CPUs and NUMA memory nodes, but it's not
really helping yet. :P

--
Kees Cook
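
For illustration, a minimal sketch of what the per-arch defines under
discussion might look like for x86 (e.g., in
arch/x86/include/asm/seccomp.h). Only SECCOMP_ARCH_{NATIVE,COMPAT}_NAME
is named in the thread; the SECCOMP_ARCH_{NATIVE,COMPAT} and *_NR
companions, and the use of AUDIT_ARCH_* / NR_syscalls values here, are
assumptions about how each ABI and its syscall-table size would be
identified, not the actual patch.

/*
 * Sketch only: per-arch seccomp cache defines for x86. The *_NAME
 * macros come from the discussion above; the rest is assumed.
 */
#include <linux/audit.h>	/* AUDIT_ARCH_X86_64, AUDIT_ARCH_I386 */
#include <asm/unistd.h>		/* NR_syscalls, IA32_NR_syscalls */

#ifdef CONFIG_X86_64
# define SECCOMP_ARCH_NATIVE		AUDIT_ARCH_X86_64
# define SECCOMP_ARCH_NATIVE_NR		NR_syscalls
# define SECCOMP_ARCH_NATIVE_NAME	"x86_64"
# ifdef CONFIG_COMPAT
#  define SECCOMP_ARCH_COMPAT		AUDIT_ARCH_I386
#  define SECCOMP_ARCH_COMPAT_NR	IA32_NR_syscalls
#  define SECCOMP_ARCH_COMPAT_NAME	"ia32"
# endif
#else /* 32-bit x86: only one ABI to describe */
# define SECCOMP_ARCH_NATIVE		AUDIT_ARCH_I386
# define SECCOMP_ARCH_NATIVE_NR		NR_syscalls
# define SECCOMP_ARCH_NATIVE_NAME	"ia32"
#endif

Filling these in per architecture is what lets the cache logic (and its
CONFIG_SECCOMP_FILTER-gated reporting) stay generic in kernel/seccomp.c.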
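
And a rough sketch of the three benchmark conditions as shell commands,
under stated assumptions: the "kernel-build" image name and the CPU/node
IDs and -j16 are placeholders for a real setup, while
--security-opt seccomp=unconfined and the --cpuset-* / numactl switches
are the actual knobs for disabling docker's default seccomp policy and
pinning CPUs and memory.

# bare metal, pinned to one NUMA node to reduce noise
# (node and CPU ids are machine-specific)
numactl --cpunodebind=0 --membind=0 time make -j16

# docker with seccomp filtering disabled, same pinning
docker run --rm --cpuset-cpus=0-15 --cpuset-mems=0 \
	--security-opt seccomp=unconfined kernel-build time make -j16

# docker with the default seccomp policy
docker run --rm --cpuset-cpus=0-15 --cpuset-mems=0 \
	kernel-build time make -j16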