Re: [LSF/MM/BPF TOPIC] Increasing automation of filesystem testing with kdevops

Improving the automation of fs testing is hugely important - with SMB3
testing (the cifs.ko kernel client and now the kernel server) we leverage
buildbot to run many xfstests against five distinct server types.  NFS
and SMB3 mounts can be used to save off tests, images, git trees etc.
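As a rough illustration (the host, share and user names below are just
placeholders, not a description of our actual setup), keeping artifacts
off the test machines is as simple as mounting a share for them:

    # SMB3 mount used to stash xfstests results, images and git trees
    # (hostname, share name and user are placeholders)
    mount -t cifs //buildhost/test-artifacts /mnt/artifacts \
        -o vers=3.1.1,username=tester
    # or the NFS equivalent
    mount -t nfs buildhost:/export/test-artifacts /mnt/artifacts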

But ... trying to get the tests to run for a sane length of time is not
trivial (we are up to more than 4 hours for our default regression
bucket, which is a little longer than I would like), and proving the
code coverage value of adding additional tests - and finding holes in
what xfstests doesn't cover - is not as easy as it sounds.
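For what it's worth, one way to get a rough answer on coverage - just a
sketch of the kernel's gcov support, not a description of any particular
CI setup - is to build with gcov profiling and capture the data after a
run of the regression bucket:

    # Assumes CONFIG_GCOV_KERNEL=y and gcov profiling enabled for fs/
    # (e.g. CONFIG_GCOV_PROFILE_ALL=y), with debugfs mounted
    lcov --capture --directory /sys/kernel/debug/gcov --output-file bucket.info
    genhtml bucket.info --output-directory coverage-html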

On Thu, Feb 13, 2020 at 2:12 PM Luis Chamberlain <mcgrof@xxxxxxxxxx> wrote:
>
> Ever since I've taken a dive into filesystems I've been trying to
> further automate filesystem setup / testing / collection of results.
> I had looked at xfstests-bld [0] but was not happy with it being cloud
> specific to Google Compute Engine, and so I have been shopping around
> for technology / tooling which would be cloud agnostic / virtualization
> agnostic.
>
> At the last LSFMM in Puerto Rico the project oscheck [1] was mentioned a
> few times as a mechanism as to how to help get set up fast with fstests,
> however *full* automation to include running the tests, processing
> results, and updating a baseline was really part of the final plan.
> I had not completed the work yet by LSFMM in Puerto Rico, so could not
> talk about it there. The majority of the effort is now complete
> and is part of kdevops [2], now a more generic framework to help automate
> kernel development testing. I've written a tiny bit about it [3]. Due to
> the nature of LSFMM I don't want to present the work, unless folks
> really want me to, so would rather have a discussion over technologies
> used, pain points to consider, some future ideas, and see what others
> are doing. It may be worth doing just as a simple BoF.
>
> So let me start in summary style with some of these on my end.
>
> Technologies used:
>
>   * vagrant / terraform
>   * ansible
>
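Just to make the flow concrete for anyone who hasn't used these (a
generic sketch of the tooling, not a claim about how kdevops itself is
invoked - the playbook name below is a placeholder):

    # vagrant-libvirt is a separate plugin since it is not in upstream vagrant
    vagrant plugin install vagrant-libvirt
    vagrant up --provider=libvirt
    ansible-playbook -i hosts playbooks/fstests.yml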
> Pain points:
>
>   * What fstests doesn't cover, or an auto-chinner needed:
>     - fsmark regressions, for instance:
>       https://lkml.org/lkml/2013/9/10/46
>   * vagrant-libvirt is not yet part of upstream vagrant but is needed
>     for use with KVM
>   * Reliance on only one party (HashiCorp) for the tooling, even though
>     it's all open source
>   * Vagrant's dependency on ruby and several ruby gems
>   * terraform's reliance on tons of go modules
>   * "Enterprise Linux" considerations for all the above
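On the fsmark point: fs_mark is easy to wire into the same automation,
and a typical metadata-heavy workload looks something like the sketch
below (paths, counts and loop count are only illustrative):

    # zero-length file creates spread across 4 directories, no syncs, 5 passes
    fs_mark -D 10000 -S0 -n 100000 -s 0 -L 5 \
        -d /mnt/scratch/0 -d /mnt/scratch/1 -d /mnt/scratch/2 -d /mnt/scratch/3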
>
> Future ideas:
>
>   * Using 9pfs for sharing git trees
>   * Does xunit suffice?
>   * Evaluating which tests can be folded under kunit
>   * Evaluating running one test per container so as to fully parallelize testing
>
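On the 9pfs idea: sharing a host git tree into the guests is fairly
painless - a minimal sketch of the guest side (the mount tag and paths
are placeholders, and the host needs a matching virtio-9p export):

    # Inside the guest, mount the host's shared tree over virtio-9p
    mount -t 9p -o trans=virtio,version=9p2000.L linux-src /mnt/linux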
> [0] https://git.kernel.org/pub/scm/fs/ext2/xfstests-bld.git
> [1] https://github.com/mcgrof/oscheck
> [2] https://github.com/mcgrof/kdevops
> [3] https://people.kernel.org/mcgrof/kdevops-a-devops-framework-for-linux-kernel-development
>
>   Luis



-- 
Thanks,

Steve


