On Wed, Apr 22, 2020 at 04:40:55PM +0200, Thomas Zimmermann wrote:
> With limited VRAM available, fragmentation can lead to OOM errors.
> Alternating between bottom-up and top-down placement keeps BOs near the
> ends of the VRAM and the available pages consecutively near the middle.
>
> A real-world example with 16 MiB of VRAM is shown below.
>
> > cat /sys/kernel/debug/dri/0/vram-mm
> 0x0000000000000000-0x000000000000057f: 1407: free
> 0x000000000000057f-0x0000000000000b5b: 1500: used
> 0x0000000000000b5b-0x0000000000000ff0: 1173: free
>
> The first free area was the location of the fbdev framebuffer. The used
> area is Weston's current framebuffer of 1500 pages. Weston now cannot
> do a pageflip to another 1500-page framebuffer, even though enough
> pages are available. The patch resolves this problem to
>
> > cat /sys/kernel/debug/dri/0/vram-mm
> 0x0000000000000000-0x00000000000005dc: 1500: used
> 0x00000000000005dc-0x0000000000000a14: 1080: free
> 0x0000000000000a14-0x0000000000000ff0: 1500: used
>
> with both of Weston's framebuffers located near the ends of the VRAM
> memory.

I don't think it is that simple.

First: How will that interact with cursor bo allocations? IIRC the
strategy for them is to allocate top-down, for similar reasons (to keep
the small cursor bo allocations from fragmenting vram memory).

Second: I think ttm moves bos from vram to system only on memory
pressure, so you can still end up with fragmented memory. To make the
scheme with one fb @ top and one @ bottom work reliably, you have to be
more aggressive about pushing out framebuffers.

Third: I'd suggest making top-down allocation depend on the current
state instead of simply alternating, i.e. if there is a pinned
framebuffer @ offset 0, then go for top-down (see the sketch at the end
of this mail).

I also think using this scheme should be optional. In the simplest case
we can let drivers opt in. Or we try to do something clever
automatically: use the strategy only for framebuffers larger than 1/4
or 1/3 of total vram memory, which is where alloc failures due to
fragmentation become much more likely (also sketched below).

cheers,
  Gerd
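
P.S.: For reference, my reading of the patch's alternating scheme,
boiled down to a sketch (this is a paraphrase, not the actual diff; the
static "topdown" toggle just stands in for however the patch tracks its
state):

static unsigned long alternating_placement_flags(void)
{
	static bool topdown;	/* stand-in for the patch's state */
	unsigned long pl_flag = TTM_PL_FLAG_VRAM;

	/* every other BO goes to the top end of vram */
	if (topdown)
		pl_flag |= TTM_PL_FLAG_TOPDOWN;
	topdown = !topdown;	/* flip for the next allocation */

	return pl_flag;
}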
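
The state-dependent variant from the third point could look like this
instead. vram_has_pinned_bo_at_start() is a made-up helper here; it
would have to walk the drm_mm range manager and check whether a pinned
BO occupies offset 0:

static unsigned long vram_placement_flags(struct drm_vram_mm *vmm)
{
	unsigned long pl_flag = TTM_PL_FLAG_VRAM;

	/* bottom already taken by a pinned fb -> go for the top */
	if (vram_has_pinned_bo_at_start(vmm))
		pl_flag |= TTM_PL_FLAG_TOPDOWN;

	return pl_flag;
}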
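
And the automatic opt-in could be as simple as a size check; the
threshold is a guess, 1/4 works just as well (vram_size being the total
vram size the helper already tracks in struct drm_vram_mm):

static bool vram_bo_wants_topdown(struct drm_vram_mm *vmm, size_t bo_size)
{
	/* only large BOs make fragmentation failures likely */
	return bo_size > vmm->vram_size / 3;
}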