On Wed, Sep 20, 2023 at 07:05:25PM -0700, John Hubbard wrote:
> On 9/20/23 18:16, Luis Chamberlain wrote:
> > On Wed, Sep 20, 2023 at 05:55:51PM -0700, Luis Chamberlain wrote:
> > > Are there other known recipes to help test this stuff?
> >
> > You know, it got me wondering how memory-fragmented a system
> > might be after just running fstests, because, well, we already have
> > that automated in kdevops and it also has LBS support for all the
> > different large block sizes on 4k sector size. So if we just had a
> > way to "measure" or "quantify" memory fragmentation with a score,
> > we could just tally up how we did after 4 hours of testing for each
> > block size with a set of memory on the guest / target node / cloud
> > system.
> >
> > Luis
>
> I thought about it, and here is one possible way to quantify
> fragmentation with just a single number. Take this with some
> skepticism because it is a first draft sort of thing:
>
> a) Let BLOCKS be the number of 4KB pages (or more generally, the number
>    of smallest sized objects allowed) in the area.
>
> b) Let FRAGS be the number of free *or* allocated chunks (no need to
>    consider the size of each, as that is automatically taken into
>    consideration).
>
> Then:
>
>     fragmentation percentage = (FRAGS / BLOCKS) * 100%
>
> This has some nice properties. For one thing, it's easy to calculate.
> For another, it can discern between these cases:
>
> Assume a 12-page area:
>
> Case 1) 6 pages allocated unevenly:
>
>     1 page allocated | 1 page free | 1 page allocated | 5 pages free | 4 pages allocated
>
>     fragmentation = (5 FRAGS / 12 BLOCKS) * 100% = 41.7%
>
> Case 2) 6 pages allocated evenly: every other page is allocated:
>
>     fragmentation = (12 FRAGS / 12 BLOCKS) * 100% = 100%

Thanks! Will try this!

BTW stress-ng might also be a nice way to do other pathological things here.

  Luis
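For reference, here is a minimal sketch of the metric described above. It assumes
an area can be described as an ordered list of free/allocated chunks; the chunk
representation and function name are purely illustrative, not taken from any
existing tool:

    # Sketch: fragmentation percentage = (FRAGS / BLOCKS) * 100%
    # chunks: ordered list of (is_allocated, n_pages) pairs for one area.
    def fragmentation_pct(chunks):
        blocks = sum(n for _, n in chunks)   # total 4KB pages in the area
        frags = len(chunks)                  # free *or* allocated chunks
        return frags / blocks * 100.0

    # Case 1: 1 alloc | 1 free | 1 alloc | 5 free | 4 alloc -> 5/12 ~= 41.7%
    print(fragmentation_pct([(True, 1), (False, 1), (True, 1), (False, 5), (True, 4)]))

    # Case 2: every other page allocated -> 12/12 = 100%
    print(fragmentation_pct([(i % 2 == 0, 1) for i in range(12)]))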