Re: [PATCH] drm/mgag200: Increase bandwidth for G200se A rev1

Hi Roger,

thanks for all the information.

On 24.07.23 at 22:57, Roger Sewell wrote:
Thomas,

Hello, I'm the user who reported the issue. Definitely happy to help you
sort this out if I can, though my response speed will decrease when term
restarts in October.

I'd be interested in the exact model and the unique_rev_id
(you said A, rev1?)

The machine is an Intel SR1625URR server including an S5520UR
motherboard.

Table 10 in the following document says that 1440x900@60Hz is supported:
https://www.intel.com/content/dam/support/us/en/documents/motherboards/server/s5520ur/sb/e44031012_s5520ur_s5520urt_tps_r1_9.pdf

That manual says that the resolution is only supported with at most 24-bit colors. The old X code still supports that to some extent, but modern userspace doesn't.

It's not a Wayland thing, but applications now use Mesa for drawing, which doesn't like 24-bit color mode much. Userspace is slowly losing the ability to work with anything less than 32-bit colors.


lspci -v returns:

07:00.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) (prog-if 00 [VGA controller])
	Subsystem: Intel Corporation Device 0101
	Physical Slot: 5
	Flags: bus master, fast devsel, latency 0, IRQ 16
	Memory at b0000000 (32-bit, prefetchable) [size=16M]
	Memory at b1800000 (32-bit, non-prefetchable) [size=16K]
	Memory at b1000000 (32-bit, non-prefetchable) [size=8M]
	Expansion ROM at 000c0000 [disabled] [size=128K]
	Capabilities: <access denied>
	Kernel driver in use: mgag200
	Kernel modules: mgag200

so in particular the chip is said to be a G200e, not the G200SE-A that
the kernel module seems to be interpreting it as. In the lspci return it

It actually is the G200SE-A. It's just named differently by lspci. The PCI device id should be 0x0522.
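
For reference, the mapping from PCI ID to chip type lives in the driver's PCI ID table. A sketch of the relevant entry (the table layout and the G200_SE_A enum name are from memory; the exact spelling may differ):

  static const struct pci_device_id mgag200_pciidlist[] = {
          /* ... */
          /* PCI device id 0x0522 is bound as the G200 SE A */
          { PCI_VENDOR_ID_MATROX, 0x522, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
            G200_SE_A },
          /* ... */
          { 0, }
  };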

calls itself "rev 02", but the unique_rev_id returned is 0x01, not 02,
and not 00. (My originally suggested solution was that "rev 02" might
correspond to unique_rev_id=0x01 and that one should add 1 to the
unique_rev_id, but Jocelyn indicated that isn't right.)

That rev 02 is the PCI revision number. Matrox also has another revision ID, named 'unique_rev_id' in the code. Who knows why...
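
The driver reads that second ID from a chip register at probe time, along these lines (a sketch; the 0x1e24 offset and the RREG32 helper are from memory of the driver source):

  if (IS_G200_SE(mdev))
          /* the G200 SE reports its own revision in a chip register,
           * independent of the PCI revision field */
          mdev->unique_rev_id = RREG32(0x1e24);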


I instrumented the new code by adding printk statements to the module,
built as a kmod-mgag200 package, and observed the messages in
/var/log/messages. These tell me that:

and if the early-out branches in mga_vga_calculate_mode_bandwidth()
are being taken.

In the "old" code the options to return 0 are NOT being taken, and the
bandwidth returned is the expected value of 30318019.

In the *new* code the options to return 0 are NOT being taken, and the
bandwidth returned is the expected value of 30318019.
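
For anyone reading along, the computation Roger instrumented boils down to something like this sketch (reconstructed from the old driver from memory; details may differ):

  #include <linux/math64.h>
  #include <drm/drm_modes.h>

  static uint32_t mga_vga_calculate_mode_bandwidth(struct drm_display_mode *mode,
                                                   int bits_per_pixel)
  {
          uint64_t active_area, pixels_per_second, bandwidth;
          uint64_t bytes_per_pixel = (bits_per_pixel + 7) / 8;

          /* the early-out branches: incomplete timings yield bandwidth 0 */
          if (!mode->htotal || !mode->vtotal || !mode->clock)
                  return 0;

          /* pixel clock is in kHz; scale by the active-to-total area ratio */
          active_area = mode->hdisplay * mode->vdisplay;
          pixels_per_second = div_u64(active_area * mode->clock * 1000,
                                      mode->htotal * mode->vtotal);

          bandwidth = pixels_per_second * bytes_per_pixel * 100;
          return (uint32_t)div_u64(bandwidth, 1024);
  }

With 1440x900@60Hz CVT timings and 4 bytes per pixel this comes out near the 30318019 that Roger reports.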

Can you figure out how exactly the CPU moves through
mga_vga_mode_valid()?

In the "old" code we enter the true part of the if (IS_G200_SE(mdev)),
then the true part of the if (unique_rev_id == 0x01), then return
MODE_BANDWIDTH (i.e. MODE_BAD) at the third if statement in that block.

So the old kernel already did the right thing.
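
For reference, the block Roger walked through in the old mga_vga_mode_valid() reads roughly like this (reconstructed from his description; the 1600/1200 thresholds are from memory):

  if (IS_G200_SE(mdev)) {
          if (mdev->unique_rev_id == 0x01) {
                  if (mode->hdisplay > 1600)
                          return MODE_VIRTUAL_X;
                  if (mode->vdisplay > 1200)
                          return MODE_VIRTUAL_Y;
                  /* the third if: 30318019 > 24400 * 1024, so
                   * 1440x900@60Hz comes back as MODE_BANDWIDTH */
                  if (mga_vga_calculate_mode_bandwidth(mode, bpp)
                                  > (24400 * 1024))
                          return MODE_BANDWIDTH;
          }
          /* ... further unique_rev_id cases ... */
  }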


In the *new* code the nearest-named function I could see is
mgag200_mode_config_mode_valid, which returns MODE_OK at the end of the
function if the bandwidth limit is increased to 30100, and returns
MODE_BAD three lines higher up if it is left at 24400.

Nothing has changed in the new kernel.


Moreover, when using the old code, if we switch to Wayland instead of
Xorg, it doesn't let me pick the 1440x900@60Hz mode at all, but it does
with Xorg (one of the reasons I hadn't used Wayland).

You can do

  cat /sys/class/drm/card1-VGA-1/modes

on the old and new kernel. With a monitor connected, it will tell you the supported resolutions.
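
The file lists one mode per line. On a G200SE-A rev1 with the 24400 KiB limit the output might look like this (illustrative only; the real list depends on the monitor's EDID):

  1280x1024
  1152x864
  1024x768
  800x600

If 1440x900 appears on the old kernel but not the new one, that points at a regression; if it is missing on both, the driver has been rejecting the mode all along.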


Therefore I think the reason the old code allowed use of 1440x900@60Hz
was that Xorg somehow didn't properly check the return value from
mga_vga_mode_valid, but Wayland did. Moreover, I think the latest Xorg
PARTIALLY checks that return value: it won't let you actually use the
mode, but it nonetheless presents it as a choice under
Settings->Display. It then saves the values it didn't let you take in
~/.config/monitors.xml, and on relogin refuses to give you any graphics
at all because it doesn't like those values. But that, of course, has
nothing to do with the mgag200 driver (if it is indeed true - I haven't
looked at the Xorg source code at all).

The issue from the point of view of my use case is that the chip works
just fine in the 1440x900@60Hz mode, even though 30318019 > 1024*24400.

I don't want to increase that limit in the driver, as it might have consequences for a lot of other hardware. And since the bandwidth scales with bytes per pixel, a 24-bit mode needs only 3/4 of that figure: 30318019 * 3 / 4 ~= 22738514, which fits into the current limit of 24400 * 1024 = 24985600.

Jocelyn, should we attempt to make extra resolutions available for 16- and 24-bit modes? We could do the bandwidth test in the primary plane's atomic_check, where we know the resolution and the color format. The general mode test would use bpp=8. I don't know how userspace reacts to that, so it would require some testing.
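
A minimal sketch of that idea, reusing the bandwidth helper from above (function and field names here are illustrative, not the actual driver API):

  #include <drm/drm_atomic.h>
  #include <drm/drm_framebuffer.h>
  #include <drm/drm_plane.h>

  static int mgag200_primary_plane_atomic_check(struct drm_plane *plane,
                                                struct drm_atomic_state *state)
  {
          struct drm_plane_state *new_state =
                  drm_atomic_get_new_plane_state(state, plane);
          struct drm_crtc_state *crtc_state;
          const struct drm_framebuffer *fb = new_state->fb;

          if (!fb || !new_state->crtc)
                  return 0;

          crtc_state = drm_atomic_get_new_crtc_state(state, new_state->crtc);

          /* here both mode and color format are known, so the bandwidth
           * test can use the real bytes-per-pixel value instead of
           * assuming 32-bit */
          if (mga_vga_calculate_mode_bandwidth(&crtc_state->mode,
                                               fb->format->cpp[0] * 8)
                          > (24400 * 1024))
                  return -EINVAL;

          return 0;
  }

With bpp=8 in the general mode test, 1440x900@60Hz would be listed, and a 16- or 24-bit framebuffer would pass this check (22738514 < 24985600) while a 32-bit one would still be rejected.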


If I haven't made anything sufficiently clear, or if you need more info,
please ask.

Your reply was very helpful. Thank you.

Best regards
Thomas


Best wishes,
Roger.

--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)


