On Thu, Feb 4, 2021 at 10:59 AM KP Singh <kpsingh@xxxxxxxxxx> wrote:
>
> On Thu, Feb 4, 2021 at 5:52 AM Andrii Nakryiko
> <andrii.nakryiko@xxxxxxxxx> wrote:
> >
> > On Tue, Feb 2, 2021 at 2:16 PM KP Singh <kpsingh@xxxxxxxxxx> wrote:
> > >
> > > The script runs the BPF selftests locally on the same kernel image
> > > as they would run post-submit in the BPF continuous integration
> > > framework.
> > >
> > > The goal of the script is to allow contributors to run the selftests
> > > locally in the same environment, to check whether their changes would
> > > end up breaking the BPF CI, and to reduce the back-and-forth between
> > > the maintainers and the developers.
> > >
> > > Tested-by: Jiri Olsa <jolsa@xxxxxxxxxx>
> > > Signed-off-by: KP Singh <kpsingh@xxxxxxxxxx>
> > > ---
> >
> > I almost applied it :) But I found two remaining problems which ruin
> > the experience in my environment, see below.
> >
> > Also, do you mind renaming the script (and updating the doc in patch
> > #2) to vmtest.sh, for a shorter name without underscores?
>
> Sure, I like vmtest.sh better too.
>
> >
> > The first problem is that it still doesn't propagate exit codes
> > properly. Try ./run_in_vm.sh -- false, followed by echo $?: it should
> > print 1, but currently it prints zero.
>
> So propagating the error from the script that ran in the VM would, I
> think, be a little tricky. What you see is just the error from the
> wrapper script.
>
> I can take a stab at it in a later patch (hope that's okay for now), as
> it's not trivial [at least in my head]: we might have to save the
> status in a file, copy the file back to the host and then use that
> status code instead, or do something with sockets / SSH.

Yeah, a follow-up is ok. Storing the status in a file and returning
that seems fine, similar to what you do with the logs.
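Something along these lines might work, FWIW (an untested sketch; the
/exit_status path and the ${mount_dir} variable are made-up names for
illustration, not part of the current script):

  # Guest side (in the init script that runs the command): record the
  # command's exit code on the VM's root filesystem before shutdown.
  ${command}
  echo $? > /exit_status

  # Host side, after the VM has shut down: mount the rootfs image,
  # read the recorded code back, and exit the wrapper with it.
  # Default to 1 if the file is missing (e.g. the VM died early).
  exit_code="$(cat "${mount_dir}/exit_status" 2>/dev/null)"
  exit "${exit_code:-1}"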
> > >  tools/testing/selftests/bpf/run_in_vm.sh | 368 +++++++++++++++++++++++
> > >  1 file changed, 368 insertions(+)
> > >  create mode 100755 tools/testing/selftests/bpf/run_in_vm.sh
> > >
> >
> > [...]
> >
> > > +
> > > +update_kconfig()
> > > +{
> > > +        local kconfig_file="$1"
> > > +        local update_command="curl -sLf ${KCONFIG_URL} -o ${kconfig_file}"
> > > +        # GitHub does not return the "last-modified" header when retrieving
> > > +        # the raw contents of the file. Use the API call to get the
> > > +        # last-modified time of the kernel config, and only update the
> > > +        # config if it has been updated after the previously cached config
> > > +        # was created. This avoids unnecessarily recompiling the kernel
> > > +        # and selftests.
> > > +        if [[ -f "${kconfig_file}" ]]; then
> > > +                local last_modified_date="$(curl -sL -D - "${KCONFIG_API_URL}" -o /dev/null | \
> > > +                        grep "last-modified" | awk -F ': ' '{print $2}')"
> > > +                local remote_modified_timestamp="$(date -d "${last_modified_date}" +"%s")"
> > > +                local local_creation_timestamp="$(stat -c %W "${kconfig_file}")"
> > > +
> >
> > %W breaks the entire experience for me. stat -c %W returns 0 in my
> > environment, I don't know why. It's also not clear why %W (file
> > creation time) was used instead of %Y (file modification time): when
> > we overwrite latest.config, it gets an updated modification time but
> > keeps its old creation time, so the whole idea with %W seems wrong.
> >
> > So, do you mind switching to local_modification_timestamp with %Y? I
> > checked locally, and it finally allowed me to skip rebuilding both the
> > kernel and the selftests.
>
> Sure, I can switch to %Y. Both seem to work for me.
>
> > > +                if [[ "${remote_modified_timestamp}" -gt "${local_creation_timestamp}" ]]; then
> > > +                        ${update_command}
> > > +                fi
> > > +        else
> > > +                ${update_command}
> > > +        fi
> > > +}
> > > +
> >
> > [...]
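For reference, inside update_kconfig() the fixed check would then look
something like this (an untested sketch of the change suggested above,
with the variable renamed):

  # %Y is the last modification time, which is refreshed whenever the
  # cached config is overwritten. %W (birth time) is not, and stat
  # reports %W as 0 on filesystems that don't record birth times,
  # which is probably why it broke in some environments.
  local local_modified_timestamp="$(stat -c %Y "${kconfig_file}")"

  if [[ "${remote_modified_timestamp}" -gt "${local_modified_timestamp}" ]]; then
          ${update_command}
  fi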