* Pierre-Loup A. Griffais <pgriffais@xxxxxxxxxxxxxxxxx> [240414 20:22]:
> > ...
> >
> > To be clear, what you are doing here is akin to adding more memory to
> > your system when there is a memory leak. This is not the solution you
> > should be pushing. Ironically, this is using more memory and
> > performing worse than it should. At best, the limit increase is a
> > workaround for buggy programs.
> >
> > At worst, you are enabling bad things to keep happening and
> > normalising poor programming choices. Please put pressure on the
> > applications that clearly have issues.
>
> We don't get to prescribe what those applications do. The fact of the
> matter is that there are several high-performance memory allocators in
> wide use by game applications that make heavy internal use of mmap(),
> and that using hundreds of thousands of different memory mappings is
> well supported on the platform those applications were written for. (or
> mapping regions with different permissions, which results in different
> regions after platform translation to Linux happens within Wine)

Thank you for the information on the situation that causes the kernel to
use such a large number of vmas.

mmap operations will run faster if there are significantly fewer vmas.
Having such a large number of objects also makes faulting memory in
slower, and that holds true on any platform. If these allocators are
built for high performance, it seems unlikely they were designed to run
against 65,530 objects to search.

It is also odd that several allocators run into the same issue. If I
were to guess, the allocators are trying to bypass the operating
system's memory management and track memory their own way, specific to
your use case, for speed. It sounds like that is being translated
incorrectly and producing a monster data structure on the kernel side to
track it.

If it is a translation layer in Wine deciding how to translate a
particular set of calls, then it could be fixed, or at least examined
for inefficiencies.

Either way, performance will be sub-optimal on the page fault path
(probably the most common path) and on any other path that has to walk
such a large number of vmas.

> Pointing out that there exists one game that doesn't happen to do that
> is not terribly useful for the purpose of this discussion.

I provided the data I could collect reasonably quickly, but the scale of
the difference was the important part of my statement.

> The problem statement seems pretty simple - distributions that want to
> support those usecases out of the box can make that change, like we've
> done for years on SteamOS. On those that don't, users of those
> applications will have to discover and learn to apply the change by
> hand after having a likely sub-par experience trying to get their game
> up and running.

This number of vmas indicates a problem with how the virtual memory
areas are being used. Increasing the limit lets the game run, but it
will not run well. It is unfortunate that the solution was to increase
the value.

> I've yet to hear a specific downside of making the change other than a
> real concern about DoS of kernel memory in another discussion - it
> seems to me like there is much lower hanging fruit for DoSing a Linux
> system you have shell access to, at the moment.

Poor performance is the downside. The specific downside is the overly
large data structure that the kernel has to navigate on every page fault
and every other vma operation. This isn't specific to changing the
number, but to the fact that it needed to be changed in the first place.
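To make the splitting concrete, here is a small illustrative userspace
sketch (not from this thread; the mapping size and page count are
arbitrary) that mmap()s one anonymous region, changes the protection on
alternating pages in the spirit of the per-region permission changes
mentioned above, and counts the resulting entries in /proc/self/maps:

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Count the lines in /proc/self/maps, i.e. the current mappings. */
static long count_mappings(void)
{
	FILE *f = fopen("/proc/self/maps", "r");
	long n = 0;
	int c;

	if (!f)
		return -1;
	while ((c = fgetc(f)) != EOF)
		if (c == '\n')
			n++;
	fclose(f);
	return n;
}

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	size_t pages = 4096;	/* arbitrary demo size: 16MB at 4K pages */
	char *base;
	size_t i;

	printf("mappings before mmap:    %ld\n", count_mappings());

	base = mmap(NULL, pages * page, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (base == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	printf("after one mmap:          %ld\n", count_mappings());

	/* Give every other page different permissions; each change
	 * splits the containing vma. */
	for (i = 0; i < pages; i += 2)
		if (mprotect(base + i * page, page, PROT_READ))
			perror("mprotect");

	printf("after per-page mprotect: %ld\n", count_mappings());
	return 0;
}

Each protection change on a sub-range forces a vma split, so the single
mapping above becomes thousands of vmas, and every one of them is
another object to search on each fault. An allocator doing this across a
large address space is how a process ends up brushing against the
65,530 limit.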
Is there an upper limit on the number of vmas that you have seen?

Can you provide a copy of the mappings when you see this, for testing?

That many vmas works out to a 5-level maple tree.

Thanks,
Liam