Re: [RFC] new uapi policy for drm

On Wed, Oct 16, 2019 at 04:00:25PM -0400, Alex Deucher wrote:
> On Mon, Oct 14, 2019 at 2:16 PM Dave Airlie <airlied@xxxxxxxxx> wrote:
> >
> > I've kicked this around in my head over the past few weeks but wanted
> > to get some feedback on whether it's a good idea or what impact it
> > might have that I haven't considered.
> >
> > We are getting requests via both amdgpu/amdkfd and i915 for new user
> > APIs for userspace drivers that throw code over the wall instead of
> > being open source developed projects, but we are also seeing it for
> > android drivers and kms properties, and we had that i915 crappy crtc
> > background thing that was for Chrome but Chrome didn't want it.
> >
> > Now this presents a couple of issues:
> >
> > a) these projects don't seem to be that good at following our
> > development guidelines: developing userspace features in parallel
> > in the open and having a good working implementation before
> > submitting upstream.
> >
> > b) these projects don't have experienced userspace developers
> > reviewing their kernel uapis. One big advantage of adding uapis with
> > mesa developers is they have a lot of experience in the area as well.
> >
> > It's leading me to think I want to just stop all uapi submissions via
> > driver trees, and instead mandate that all driver uapi changes are
> > sent in separate git pull requests to dri-devel, I'd try (with some
> > help) to catch all uapi modifications in normal trees, and refuse
> > pulls that modified uapi.
> >
> > At the least I'm considering writing a script and refusing any pulls
> > that have a uapi change that doesn't contain a link to the userspace
> > changes required for it in a publicly developed repo.
> >
> > Thoughts?
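
The check being floated could be sketched roughly like this, operating on
commit metadata pulled out of the range under review. The `Userspace:`
commit-message tag and the uapi path prefix here are illustrative
assumptions, not an agreed convention:

```python
# Sketch: flag commits in a pull that touch drm uapi headers without
# linking to the userspace changes. The "Userspace:" tag and the path
# prefix are assumptions for illustration only.

UAPI_PREFIX = "include/uapi/drm/"

def uapi_violations(commits):
    """commits: iterable of (sha, files, message) tuples, e.g. built by
    walking `git rev-list` and `git diff-tree --name-only` output.
    Returns the shas of commits that change uapi headers but carry no
    Userspace: link in their commit message."""
    bad = []
    for sha, files, message in commits:
        touches_uapi = any(f.startswith(UAPI_PREFIX) for f in files)
        has_link = any(line.lower().startswith("userspace:")
                       for line in message.splitlines())
        if touches_uapi and not has_link:
            bad.append(sha)
    return bad
```

A pull would then be refused whenever the returned list is non-empty.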
>
> This seems like more hassle for questionable benefit.  I don't know
> that mesa is really any better than any other driver team with
> respect to UAPI.  This just seems like sort of an arbitrary political
> decision.  The people working on mesa have as much of an agenda as
> those working on other projects.  Moreover, especially with the
> migration to gitlab and MRs, I feel that mesa development has gotten
> more opaque.  Say what you will about mailing lists, but at least you
> could have a drive by view of what's going on.  With MRs, you sort of
> have to seek out what to review; if stuff is not tagged with something
> you feel is relevant, you probably won't look at it, so the only
> people likely to review it are the people involved in writing it in
> the first place, which would be the same whether it's mesa or some
> other project.  I think all of the projects generally have the best
> intentions at heart, but for better or worse they just have different
> development models.  In the case of the AMD throw it over the wall
> stuff, it's not really an anti-open source or community engagement
> issue, it's more a question of how we support several OSes, tons of new
> products, several custom projects, etc. while leveraging as much
> shared code as possible.  There are ways to make it work, but they are
> usually a pretty heavy lift that not all teams can make.

I think there's a difference between All Tools Suck (tm) and the
discussions not even being accessible at all. I do agree that generally
everyone screws up uapi once in a while, and we seem to overall do a not
too shoddy job. So code is probably all ok enough.

But imo long term code is fungible and really doesn't matter much, the
important stuff is the people and teams who create it, and all the shared
knowledge. That's also where I see the benefit in upstream (for customers
and vendors and everyone): we can learn from each other. As an example,
I've spent lots of time recently reading amdgpu code and how it's used in
userspace. Understanding that without having access to the discussion or
being able to ping people on irc and mailing lists would have been
impossible - lots of questions where I just plain guessed wrong. For the
code-over-wall projects that stuff all simply doesn't exist. It's nigh
impossible to figure out whether uapi makes sense or not if you can't see
all the tradeoffs and discussions that influenced it and why.

That's also why I think the separate pull won't help at all, since Dave
will still have incomplete information. All he can do with more pulls is
roll the die more often, that's not helping.

Now short term "moar hw support" is cool and all that, but long term I do
think it's missing the point of upstreaming. It's not that mesa (or any
other cross vendor project, we have a bunch of those on the kms side) is
better at uapi, it's that it's more open and so _much_ easier to
understand how we ended up at a specific place. That's at least my take on
all this.

> All of that said, I think providing a link to the userspace user of
> the API is reasonable, but I don't think there have been any egregious
> cases of badly designed UAPI that were not caught using the existing
> processes.

Imo the problem isn't the lack of links, but lack of (public) discussions.
One idea I toyed around with could be to require uapi review for new uapi
by someone outside the wall. That's de facto what we do for everything
pushed through cross-vendor userspace anyway, and it would make sure that
all the design considerations relevant to the uapi would bubble over the
wall too, not just the code. The people and their expertise would still be
in hiding, so still far from nirvana, but I think this would at least move
things meaningfully.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel



