Re: [ANN] Media Summit September 16th: Draft Agenda (v5)

On Fri, 6 Sep 2024 12:36:58 +0200
Sebastian Fricke <sebastian.fricke@xxxxxxxxxxxxx> wrote:

> Hey Mauro,
> 
> On 06.09.2024 10:11, Mauro Carvalho Chehab wrote:
> >On Thu, 5 Sep 2024 09:16:27 +0200
> >Hans Verkuil <hverkuil@xxxxxxxxx> wrote:
> >  
...
> >1. All committers shall use a common procedure for all merges.
> >
> >   This is easier said than done, so, IMO, a common set of scripts is
> >   needed for all committers. From my tests merging two PRs there, this
> >   seems to be the minimal set of scripts required:
> >
> >   a) script to create a new topic branch at linux-media/users/<user>
> >      The input parameter is the message ID, e.g. something like:
> >
> >	create_media_staging_topic <topic_name> <message_id>
> >
> >      (optionally with an extra parameter naming the tree)
> >
> >      It shall use the patchwork REST interface to fetch the patches - or
> >      at least to check that all patches are there (and then use b4).
> >
> >      Such a script needs to work with a single patch, a patch series,
> >      and a pull request.
> >
> >      The message ID of every patch, including the PR's, should be stored
> >      in the MR, as this will be needed later to update patchwork.
> >
> >   b) once GitLab CI runs, there are two possible outcomes: the patches
> >      either pass or fail. If they pass, an MR will be created and
> >      eventually merged.
> >
> >      Whether merged or not (because something failed or the patches
> >      require more work), the patchwork status of the patches must be
> >      changed to reflect what happened. IMO, another script is needed to
> >      update the patch/patch series/PR on patchwork in a consistent way.
> >
> >      This is actually a *big* gap we have here. I have a script that
> >      manually checks patchwork status, and the gap is huge. Currently,
> >      there are 73 patches that seem to be merged but whose patchwork
> >      entries were not updated.
> >
> >      From what I noticed, some PR submitters almost never update patchwork
> >      after the merges.  
> 
> Interesting, I thought that is how it should be? I mean, if I send a PR,
> the one taking the PR must decide whether he wants to pull it, so the
> decision on whether the set is accepted should rest with the one pulling
> the changes and not with the one pushing them. Right?

Yes, but you still need to update the previous attempts to submit the
work. Say a patch series got a PR at v9: it is up to you to clean up
everything on patchwork from v1 to v8.

Now, the committer handling the PR for v9 should be in charge of updating
its status and the status of the patches that belong to it, once the
series is merged or once he takes a decision otherwise.
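Sketched in Python, the cleanup decision above could look like the
following. This is only an illustration: the function name, the
version-to-patch-ID mapping, and the patchwork state names are assumptions,
not the actual linux-media tooling.

```python
# Hypothetical sketch: given the patchwork patch IDs for every version of
# a series, compute which entries must be flipped once the newest version
# is accepted. State names ("accepted", "superseded") are assumptions.

def plan_supersede(patches_by_version: dict[int, list[int]]) -> dict[int, str]:
    """Map each patchwork patch ID to its new state.

    The newest version becomes "accepted"; every patch belonging to an
    older version becomes "superseded".
    """
    latest = max(patches_by_version)
    plan = {}
    for version, patch_ids in patches_by_version.items():
        state = "accepted" if version == latest else "superseded"
        for pid in patch_ids:
            plan[pid] = state
    return plan

# Example: v9 was merged, so the v8 entries must be marked superseded.
plan = plan_supersede({8: [101, 102], 9: [201, 202]})
```

The actual script would then push each planned state change to patchwork,
e.g. via its REST interface.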

> >
> >      So another script to solve this gap is needed, doing updates on all
> >      patches that were picked by the first script. Something like:  
> 
> Shouldn't the update happen after the MR has been created instead,
> because only after running the CI do we know whether the tests fail or
> pass? From what you say above, it sounds like the first script merely
> prepares a topic branch to be tested.

The first script I mentioned prepares and pushes the topic branch. The
patchwork update script (the second one) can be used at several points
of the workflow:

- before the MR: if the committer decides not to merge the changes
  because they didn't pass his review;
- after MR pipeline failures;
- after the MR is merged: once it reaches media-staging master.
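The core of such an update script is just a mapping from those workflow
points to patchwork states. A minimal sketch, assuming hypothetical stage
names and patchwork state strings (neither is the agreed convention):

```python
# Hypothetical mapping from the point in the workflow where the update
# script runs to the patchwork state it should set. Both the stage names
# and the state strings are illustrative assumptions.

WORKFLOW_TO_STATE = {
    "rejected-on-review": "changes-requested",  # committer declined before MR
    "pipeline-failed": "changes-requested",     # CI failed on the MR
    "merged": "accepted",                       # MR reached media-staging master
}

def patchwork_state_for(stage: str) -> str:
    """Return the patchwork state to set for a given workflow stage."""
    try:
        return WORKFLOW_TO_STATE[stage]
    except KeyError:
        raise ValueError(f"unknown workflow stage: {stage}")
```

Keeping this mapping in one shared script is what makes the status updates
consistent across committers.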

> >      update_patchwork_from_topic <topic_name> <new_status>
> >
> >      This would likely need to use some logic to pick the message IDs
> >      of the patch inside the MR.
> >
> >      Such script could also check for previous versions of the patch
> >      and for identical patches (it is somewhat common to receive identical
> >      patches with trivial fixes from different developers).
> >
> >   Someone needs to work on such scripts, as otherwise the multi-committer
> >   model may fail, and we risk having to return to the current model.  
> 
> Just my personal thought: this sounds a bit extreme to me. I mean, we
> are currently in the first test run in preparation for the Media Summit,
> where we actually want to discuss policies and further requirements.

What I'm saying is that the multi-committer model will only work if
all committers follow a common procedure (variations between committers
are still possible, provided that the minimal set is followed).

If this doesn't happen, maintainers may be forced to do rebases and other
manual fixes, which will actually make life worse for everyone. That's
what I want to avoid by having a common set of scripts for the very basic
tasks: creating a topic branch for CI to test and updating patchwork.
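For the first of those tasks, a create_media_staging_topic helper could boil
down to a handful of git and b4 invocations. The sketch below only *builds*
the commands rather than running them; the branch naming scheme, the use of
`b4 shazam`, and the remote name are assumptions for illustration, not the
agreed procedure.

```python
# Hypothetical sketch of what a create_media_staging_topic helper could
# run. Branch layout (users/<user>/<topic>), the b4 subcommand, and the
# remote name are assumptions.

def build_topic_commands(user: str, topic: str, message_id: str,
                         remote: str = "linux-media") -> list[list[str]]:
    """Build the command lines to create and push a topic branch."""
    branch = f"users/{user}/{topic}"
    return [
        # Start the topic branch from the tree's master.
        ["git", "checkout", "-b", branch, f"{remote}/master"],
        # b4 shazam applies the series identified by its message-ID
        # directly onto the current branch.
        ["b4", "shazam", message_id],
        # Push the branch so GitLab CI can test it.
        ["git", "push", remote, branch],
    ]
```

A real script would additionally query the patchwork REST interface first to
verify that every patch of the series is present, as described above.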

> >2. The mailbomb script we're using right now to notify when a patch is
> >   merged at media-stage doesn't work well with multiple committers.
> >
> >   Right now, the tree at linuxtv runs it, but it might end up sending
> >   notifications to the author and to the linuxtv-commits ML for patches
> >   that reached upstream from other trees. It has some logic to prevent
> >   that, but it is not bulletproof.
> >
> >   A replacement script is needed. Perhaps this can be executed together
> >   with the patchwork script (1B) at the committer's machine.
> >
> >3. Staging requires different rules, as smatch/spatch/sparse/checkpatch
> >   warnings and errors can be acceptable.  
> 
> I thought that we came to a consensus that only warnings are acceptable?
> If we accept errors in the staging but not in media master, does that
> mean that we ought to send patches to the media staging tree then?
> I'd vote for only allowing patches without errors into the staging tree
> and in the worst case add a list with acceptable errors.

No, you got me wrong. By staging, I'm referring to drivers/staging/,
not to the multi-committers tree.

The bar for sending stuff to drivers/staging is lower than for
drivers/media. According to the kernel's documentation in
Documentation/process/2.Process.rst:

	"Current rules require that drivers contributed to staging
	 must, at a minimum, compile properly."

We actually try to do better than that: when new stuff is added to
staging, we also try to ensure that no static code checkers fail. Still,
we may need to relax the rules there, as it is not uncommon to receive
drivers that need extra work to follow the kernel coding style.
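As an illustration of the two different bars, a CI gate could key its
decision on the path being touched. This is only a sketch of the idea: the
function, its parameters, and the exact policy are assumptions, not the
actual linux-media rules.

```python
# Illustrative sketch: drivers/staging/ only has to meet the documented
# minimum (it must compile), while elsewhere static-checker errors and
# warnings both block. The policy details are assumptions.

def checker_result_acceptable(path: str, compiles: bool,
                              errors: int, warnings: int) -> bool:
    """Decide whether a checker run is acceptable for the given path."""
    if not compiles:
        return False            # nothing that fails to build goes in
    if path.startswith("drivers/staging/"):
        return True             # minimum bar for staging: it must compile
    return errors == 0 and warnings == 0
```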

> >4. We need to have some sort of "honour code": if undesired behavior
> >   is noticed, maintainers may temporarily (or permanently) revoke
> >   committer rights.  
> 
> That sounds like something to discuss on the media summit, 

Sure. That's why I'm asking to explicitly add it to the topics to be
discussed there ;-)

It would be nice if someone could come up with a proposal for it,
although discussion of such a proposal should likely happen via email.

Also, the ones to be added there likely need to explicitly ack such a
code before being added to GitLab's permission group.

> revoking
> commit rights shouldn't be hard as you just have to remove push rights
> for that GitLab tree.

Technically yes, but in practice this is usually very hard, as we're
dealing with people who will very likely be very unhappy about being
banned.

Thanks,
Mauro



