Hi Nicolas

On Fri, 8 Sept 2023 at 22:36, Nicolas Dufresne <nicolas@xxxxxxxxxxxx> wrote:
>
> On Friday, 8 September 2023 at 21:44 +0200, Ricardo Ribalda wrote:
> > Hi Nicolas
> >
> > On Fri, 8 Sept 2023 at 17:44, Nicolas Dufresne <nicolas@xxxxxxxxxxxx> wrote:
> > >
> > > On Monday, 28 August 2023 at 17:45 +0300, Laurent Pinchart wrote:
> > > > On Mon, Aug 28, 2023 at 04:38:32PM +0200, Hans Verkuil wrote:
> > > > > On 28/08/2023 16:26, Laurent Pinchart wrote:
> > > > > > On Mon, Aug 28, 2023 at 04:14:56PM +0200, Hans Verkuil wrote:
> > > > > > > On 28/08/2023 16:05, Jacopo Mondi wrote:
> > > > > > > > On Mon, Aug 28, 2023 at 03:29:41PM +0200, Hans Verkuil wrote:
> > > > > > > > > Hi all,
> > > > > > > > >
> > > > > > > > > We have been working on simplifying the media maintenance, and one
> > > > > > > > > part of that is standardizing on build tools, in particular to make
> > > > > > > > > it easier for patch submitters to run their patches through the
> > > > > > > > > same set of tests that the daily build does.
> > > > > > > > >
> > > > > > > > > This helps detect issues before you submit your patches.
> > > > > > > > >
> > > > > > > > > I have been working since July on transforming my hackish scripts
> > > > > > > > > into something that is easier to use and of better quality. While
> > > > > > > > > there are still a few rough edges, I consider it good enough to
> > > > > > > > > have others start to use it.
> > > > > > > > >
> > > > > > > > > To get the build scripts run:
> > > > > > > > >
> > > > > > > > > git clone git://linuxtv.org/hverkuil/build-scripts.git
> > > > > > > > >
> > > > > > > > > All the test builds will happen within this directory. It is
> > > > > > > > > completely separate from where you do your normal development;
> > > > > > > > > instead you point it to where your git repository is.
> > > > > > > > >
> > > > > > > > > See the README contained in the build-scripts git repo for all the
> > > > > > > > > details on how to set it up.
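The setup described above can be sketched as a small shell wrapper. The repository URL and the `build.sh -test all` invocation come from this thread; the `BUILD_DIR`/`MEDIA_TREE` variables, the `run` helper, and the dry-run flag are purely hypothetical conveniences — see the README in the build-scripts repo for the real configuration steps.

```shell
#!/bin/sh
# Sketch only: the clone URL and "build.sh -test all" are quoted from the
# thread; everything else (variable names, dry-run behaviour) is assumed.
set -eu

BUILD_DIR="${BUILD_DIR:-$HOME/build-scripts}"    # where the test builds happen
MEDIA_TREE="${MEDIA_TREE:-$HOME/src/media_tree}" # your normal development tree
DRY_RUN="${DRY_RUN:-1}"                          # print commands instead of running

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"   # show what would be executed
    else
        "$@"
    fi
}

# 1. Fetch the build scripts; they live apart from the development tree.
run git clone git://linuxtv.org/hverkuil/build-scripts.git "$BUILD_DIR"

# 2. Point the scripts at your repository and run the same test set as the
#    daily build (the README documents how the repo path is configured).
run "$BUILD_DIR/build.sh" -test all
```

With `DRY_RUN=1` the wrapper only prints the two commands, which makes it safe to inspect before letting it touch the network or start a long build.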
> > > > > > > >
> > > > > > > > I've been using your scripts since after ELC-E and I can tell
> > > > > > > > they're useful!
> > > > > > > >
> > > > > > > > > Currently the scripts expect a Debian 12-based distro (likely
> > > > > > > > > Debian 11 will work as well). I have no idea if it works well on
> > > > > > > > > Red Hat or SUSE. If you use one of those distros, and you get it
> > > > > > > > > to work, then a patch updating the README file with the correct
> > > > > > > > > list of packages to install would be welcome.
> > > > > > > >
> > > > > > > > Speaking about distros, I was wondering if you still consider it a
> > > > > > > > requirement to build all compilers, or whether we should instead
> > > > > > > > try to use the distro-provided ones when possible, to test the
> > > > > > > > distro-shipped version?
> > > > > > >
> > > > > > > I strongly believe we should build the cross compilers. The reason
> > > > > > > is that otherwise you get a wide variety of compiler versions, each
> > > > > > > with typically different compiler warnings. It's a pain for a
> > > > > > > developer to see different warnings than the person that merges
> > > > > > > those patches.
> > > > > > >
> > > > > > > It's a regular problem that the daily build sees different warnings
> > > > > > > than you do locally with the distro's compiler.
> > > > > > >
> > > > > > > Doing it this way also makes it easier to upgrade to the latest
> > > > > > > compiler version, certainly quicker than a distro would.
> > > > > > >
> > > > > > > It's about reproducibility, really.
> > > > > >
> > > > > > There's value in testing with different compiler versions though. The
> > > > > > kernel's documented minimum gcc version is v5.1 at the moment. I
> > > > > > certainly don't want to build myself with all versions between v5.1
> > > > > > and v13.2, but collectively we could cover more ground.
> > > > > >
> > > > > > Regardless of this, I already have recent cross compilers (built with
> > > > > > buildroot) for ARM and ARM64, and I'd rather use those than duplicate
> > > > > > compilers. Anything that consumes extra disk space is a serious
> > > > > > hindrance.
> > > > >
> > > > > Feel free, but you run the risk that your PR is rejected because when I
> > > > > run with these compiler versions I see new warnings. The whole point is
> > > > > to be able to run the same tests before you make the PR, to reduce the
> > > > > risk of having to make a v2.
> > > > >
> > > > > FYI: the cross directory takes about 10 GB on my system. That can be
> > > > > streamlined a bit by deleting some temporary directories needed while
> > > > > building, probably to something closer to 5-6 GB.
> > > >
> > > > It may not be huge by itself, but it quickly adds up when you need to
> > > > maintain multiple userspace cross-built environments, including Chrome
> > > > OS, Android, Yocto, ... :-( I have half a TB of disk on my main
> > > > development machine, and I would need at least 4 times that to cover my
> > > > current needs comfortably.
> > >
> > > I suppose this is irrelevant if you have a means to send your PR to a
> > > machine that will validate it for you. This is something I'd like to see
> > > happen in the future. Considering the very tiny number of devs doing PRs,
> > > a first step could be to have a shared server in the cloud with the
> > > appropriate distro and compilers, and just one more script to run tests
> > > against a PR URI. This is quite minimal infra and maintenance, since it
> > > is identical to what everyone may have locally, plus having to install an
> > > SSH server and manage keys. Of course, the scripts remain, and can be
> > > used locally, with of course the possible oops of running something
> > > slightly different, but with the benefit of not having to "push
> > > somewhere" to validate.
> >
> > This is something we have just started to work on:
> >
> > https://docs.google.com/document/d/1HTpk73qqfZLjrrvUwbd4g11wd8e9TkXTXvz5FZBd52g/edit#heading=h.4v9g2243whew
> >
> > The plan is to be able to test locally and in GitLab.
>
> Ok, let me comment in there, though I hope something will be sent to the ML
> from time to time, since that document will turn down many.

It will become part of our documentation once everything is set up. I really
like Google Docs for the early stages of design docs. There is no need to
have a Gmail account to use it.

> From quickly reading the "life of a patch", I wasn't very impressed. I'd
> like to see something a bit more forward looking, to get out of the bubble
> of "maintainer" testing.

Not sure what you mean. Any developer can push their code to GitLab and test
it. The only "super-power" of the uploader is being able to press merge after
the PR has been validated by the CI/CD. Coordinating when all the different
PRs land should be the job of someone used to our community.

Please note that the bar to become an uploader will be much lower than
becoming a core maintainer.

> Currently, reading this document, all the benefit of GitLab ends up being
> "for maintainers only". I'd like to see a better vision for the future of
> patch submission

Anyone can use GitLab if they want, but we are not forcing any developer to
use it. Anyone can still send patches to the ML and they will eventually find
their way into the media tree.

> that helps the submitter directly. It is the latency between reviewers and
> submitters that kills the flow; the more the submitters can fix by
> themselves, the better.

We also have a long latency until code is reviewed.

Looking forward to your comments on the doc :)

> > >
> > > We could also have an FDO project and use their infra, which would be a
> > > lot nicer imho, but we can't enter FDO without bringing matching
> > > sponsorship for the resources we'd be using.
At least we should ask first, not serve ourselves there.

> >
> > I already got some Google Cloud sponsorship for the project if we can
> > land it ;)
>
> That is great news. Make sure to contact the FDO admins. What about having
> a namespace there?
>
> Nicolas
>
> > >
> > > Nicolas
> > >
> > > > > > > > >
> > > > > > > > > Please note that running the regression tests using virtme-ng is
> > > > > > > > > currently only supported on Debian 12, not on e.g. Ubuntu.
> > > > > > > > > Someone is looking into that, and hopefully we can support that
> > > > > > > > > in the future. Running regression tests is primarily useful when
> > > > > > > > > making changes to core frameworks and public APIs, and it is
> > > > > > > > > possible to run them manually (see the README).
> > > > > > > > >
> > > > > > > > > Since running this locally can take a fair amount of time, we
> > > > > > > > > hope to have build servers available in the future so this can
> > > > > > > > > be offloaded.
> > > > > > > > >
> > > > > > > > > To give an idea of the expected build times:
> > > > > > > > >
> > > > > > > > > On an AMD Ryzen 9 6900HX (8 cores) a standard build of the
> > > > > > > > > staging tree (build.sh -test all) takes 39 minutes.
> > > > > > > > >
> > > > > > > > > On an AMD Ryzen Threadripper 3970X (32 cores) it takes a bit
> > > > > > > > > over 13 minutes.
> > >
> > > _______________________________________________
> > > linuxtv-ci mailing list
> > > linuxtv-ci@xxxxxxxxxxx
> > > https://www.linuxtv.org/cgi-bin/mailman/listinfo/linuxtv-ci
> >

--
Ricardo Ribalda