Thomas,

Hello, I'm the user who reported the issue. I'm definitely happy to help you sort this out if I can, though my response speed will decrease when term restarts in October.

> I'd be interested in the exact model and the unique_rev_id
> (you said A, rev1 ?)

The machine is an Intel SR1625URR server with an S5520UR motherboard. Table 10 in the following document says that 1440x900@60Hz is supported:

https://www.intel.com/content/dam/support/us/en/documents/motherboards/server/s5520ur/sb/e44031012_s5520ur_s5520urt_tps_r1_9.pdf

lspci -v returns:

07:00.0 VGA compatible controller: Matrox Electronics Systems Ltd. MGA G200e [Pilot] ServerEngines (SEP1) (rev 02) (prog-if 00 [VGA controller])
        Subsystem: Intel Corporation Device 0101
        Physical Slot: 5
        Flags: bus master, fast devsel, latency 0, IRQ 16
        Memory at b0000000 (32-bit, prefetchable) [size=16M]
        Memory at b1800000 (32-bit, non-prefetchable) [size=16K]
        Memory at b1000000 (32-bit, non-prefetchable) [size=8M]
        Expansion ROM at 000c0000 [disabled] [size=128K]
        Capabilities: <access denied>
        Kernel driver in use: mgag200
        Kernel modules: mgag200

so in particular the chip is said to be a G200e, not the G200SE-A that the kernel module seems to be treating it as. In the lspci output it calls itself "rev 02", but the unique_rev_id returned is 0x01, not 0x02, and not 0x00. (My original suggestion was that "rev 02" might correspond to unique_rev_id = 0x01 and that one should add 1 to the unique_rev_id, but Jocelyn indicated that isn't right.)

I instrumented the code by adding printk statements to a build of the module (packaged as kmod-mgag200) and watched the messages in /var/log/messages. They tell me the following.

> and if the early-out branches in mga_vga_calculate_mode_bandwidth()
> are being taken.

In both the "old" code and the *new* code the early-out branches that return 0 are NOT taken, and the bandwidth returned is the expected value of 30318019.

> Can you figure out how exactly the CPU moves through
> mga_vga_mode_valid().

In the "old" code we enter the true branch of if (IS_G200_SE(mdev)), then the true branch of if (unique_rev_id == 0x01), and then return MODE_BANDWIDTH (i.e. MODE_BAD) at the third if statement in that block.

In the *new* code the nearest-named function I could find is mgag200_mode_config_mode_valid, which returns MODE_OK at the end of the function if the bandwidth limit is raised to 30100, and returns MODE_BAD three lines higher up if it is left at 24400.

Moreover, when using the old code, if we switch to Wayland instead of Xorg it doesn't let me pick the 1440x900@60Hz mode at all, whereas Xorg does (one of the reasons I hadn't used Wayland). So I think the reason the old code allowed 1440x900@60Hz was that Xorg somehow didn't properly check the return value from mga_vga_mode_valid, but Wayland did.

I also think the latest version of the Xorg stack PARTIALLY checks that return value: it won't let you actually use the mode, but it still presents it as a choice under Settings->Display, then saves the values it refused to apply in ~/.config/monitors.xml, and on relogin gives you no graphics at all because it doesn't like those values. But that, of course, has nothing to do with the mgag200 driver (if it is indeed true - I haven't looked at the Xorg source code at all).
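To make the comparison concrete, here is a tiny stand-alone model of the check as I understand it from the printk output. This is my own sketch, not the driver source; check_mode() and its parameter names are invented for the illustration, and the only numbers carried over from above are the bandwidth 30318019, the 0x01 revision id, and the 24400 and 30100 caps.

/*
 * Toy model of the check described above -- my own sketch from the
 * printk output, NOT the actual driver source.  check_mode() and its
 * parameter names are made up for this illustration.
 */
#include <stdio.h>

enum mode_status { MODE_OK, MODE_BANDWIDTH };

static enum mode_status check_mode(unsigned int unique_rev_id,
                                   unsigned long bandwidth,
                                   unsigned long cap)
{
    /* the third if in the rev-0x01 block: bandwidth against cap * 1024 */
    if (unique_rev_id == 0x01 && bandwidth > cap * 1024)
        return MODE_BANDWIDTH;      /* what I called MODE_BAD above */
    return MODE_OK;
}

int main(void)
{
    unsigned long bw = 30318019;    /* the value my printk statements report */

    /* old cap: 24400 * 1024 = 24985600, so the mode is rejected */
    printf("cap 24400: %s\n",
           check_mode(0x01, bw, 24400) == MODE_OK ? "MODE_OK" : "MODE_BANDWIDTH");

    /* raised cap: 30100 * 1024 = 30822400, so the mode is accepted */
    printf("cap 30100: %s\n",
           check_mode(0x01, bw, 30100) == MODE_OK ? "MODE_OK" : "MODE_BANDWIDTH");

    return 0;
}

Compiled and run, it prints MODE_BANDWIDTH for the 24400 cap and MODE_OK for the 30100 cap, matching the behaviour of the old and new code described above.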
The issue, from the point of view of my use case, is that the chip works just fine in the 1440x900@60Hz mode, even though 30318019 > 1024*24400.

If I haven't made anything sufficiently clear, or if you need more info, please ask.

Best wishes,
Roger.
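P.S. For reference, the arithmetic behind the two caps is:

    24400 * 1024 = 24985600 < 30318019  (old limit: mode rejected)
    30100 * 1024 = 30822400 > 30318019  (raised limit: mode accepted)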