Re: [PATCH] drm/vmwgfx: Filter modes which exceed 3/4 of graphics memory.

On Wed, 31 Jan 2024 at 02:31, Zack Rusin <zack.rusin@xxxxxxxxxxxx> wrote:
> On Tue, Jan 30, 2024 at 6:50 PM Daniel Stone <daniel@xxxxxxxxxxxxx> wrote:
> > The entire model we have is that display timing flows backwards. The
> > 'hardware' gives us a deadline, KMS aims to meet that with a small
> > margin, the compositor aims to meet that with a margin again, and it
> > lines up client repaints to hit that window too. Everything works on
> > that model, so it's not super surprising that using svga is - to quote
> > one of Weston's DRM-backend people who uses ESXi - 'a juddery mess'.
>
> That's very hurtful. Or it would be but of course you didn't believe
> them because they're working on Weston so clearly don't make good
> choices in general, right? The presentation on esxi is just as smooth
> as it is by default on Ubuntu on new hardware...

Yeah sorry, that wasn't a 'VMware is bad' dig, it was an 'oh, that
explains so much if you're deliberately doing the other thing'
realisation. I'm not suggesting anyone else use my life choices as a
template :)

> > Given that the entire ecosystem is based on this model, I don't think
> > there's an easy way out where svga just does something wildly
> > different. The best way to fix it is to probably work on predictable
> > quantisation with updates: pick 5/12/47/60Hz to quantise to based on
> > your current throughput, with something similar to hotplug/LINK_STATUS
> > and faked EDID to let userspace know when the period changes. If you
> > have variability within the cycle, e.g. dropped frames, then just suck
> > it up and keep the illusion alive to userspace that it's presenting to
> > a fixed period, and if/when you calculate there's a better
> > quantisation then let userspace know what it is so it can adjust.
> >
> > But there's really no future in just doing random presentation rates,
> > because that's not the API anyone has written for.
>
> See, my hope was that with vrr we could layer the weird remote
> presentation semantics of virtualized guest on top of the same
> infrastructure that would be used on real hardware. If you're saying
> that it's not the way userspace will work, then yea, that doesn't
> help. My issue, which is general for para-virtualized drivers, is
> that any behavior that differs from hw drivers is going to break at
> some point; we see that even for basic things like the update-layout
> hotplug events that have been largely standardized for many years.
> I'm assuming that refresh-rate-changed will result in the
> same regressions, but fwiw if I can implement FRR correctly and punt
> any issues that arise due to changes in the FRR as issues in userspace
> then that does make my life a lot easier, so I'm not going to object
> to that.

Yeah, I think that's the best way forward ... modelling the full
pipeline in all its glory starts to look way less like KMS and way
more like something like GStreamer. Trying to encapsulate all that
reasonably in the kernel would've required - at the very least - a
KMS-side queue with target display times in order to be at all useful,
and that seemed like way too much complexity when the majority of
hardware could be handled with 'I'll fire an ioctl at you and you
update at the next 16ms boundary'.
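
To make the quantisation idea above a bit more concrete, something
along these lines on the driver side - purely an illustrative sketch,
not real vmwgfx or KMS code, and all the names are made up:

static const unsigned int quantised_rates_hz[] = { 5, 12, 47, 60 };

/*
 * Pick the highest rate from a small fixed set that the measured
 * throughput can sustain, so userspace always presents against a
 * stable period and only sees a change when the driver announces
 * one (e.g. via a faked hotplug).
 */
static unsigned int pick_quantised_rate_hz(unsigned int measured_fps)
{
        unsigned int best = quantised_rates_hz[0];
        unsigned int i;

        for (i = 0; i < sizeof(quantised_rates_hz) / sizeof(quantised_rates_hz[0]); i++) {
                if (quantised_rates_hz[i] <= measured_fps)
                        best = quantised_rates_hz[i];
        }

        /* e.g. a sustained 53 fps gets reported as a fixed 47Hz mode */
        return best;
}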

I'd be super happy to review any uAPI extensions which added feedback
to userspace to let it know that the optimal presentation cadence had
changed.
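
From the userspace side that would hopefully look no different to the
existing hotplug path: wake up, re-read the connector, pick up the new
cadence. A rough libdrm-only sketch, purely illustrative - the udev
wakeup and error handling are elided:

#include <stdint.h>
#include <xf86drmMode.h>

/*
 * After the driver signals 'the period changed' the same way it
 * signals a hotplug, the compositor re-reads the connector's mode
 * list and retargets its repaint loop to the new refresh.
 */
static unsigned int current_cadence_hz(int drm_fd, uint32_t connector_id)
{
        drmModeConnector *conn = drmModeGetConnector(drm_fd, connector_id);
        unsigned int hz = 0;

        if (conn && conn->count_modes > 0)
                hz = conn->modes[0].vrefresh;   /* first (typically preferred) mode */

        drmModeFreeConnector(conn);
        return hz;
}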

Cheers,
Daniel



