On Sat, Nov 9, 2019 at 6:57 AM Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx> wrote:
>
> On Fri, Nov 08, 2019 at 12:29:47PM -0800, Alexander Duyck wrote:
> > On Fri, 2019-11-08 at 18:41 +0000, Mel Gorman wrote:
> > > On Fri, Nov 08, 2019 at 08:17:49AM -0800, Alexander Duyck wrote:
> > > > > <SNIP>
> > > > >
> > > > > From your perspective, I see it's a bit annoying because in the final
> > > > > result, the code should be identical. However, it'll be a lot clearer
> > > > > during review what is required, what level of complexity optimisations
> > > > > add and the performance of it. The changelog should include what metric
> > > > > you are using to evaluate the performance, the test case and the delta. It
> > > > > also will be easier from a debugging perspective as minimally a bisection
> > > > > could identify if a bug was due to the core mechanism itself or one of
> > > > > the optimisations. Finally, it leaves open the possibility that someone
> > > > > can evaluate a completely different set of optimisations. Whatever the
> > > > > alternative approaches are, the actual interface to virtio balloon surely
> > > > > is the same (I don't actually know, I just can't see why the virtio ABI
> > > > > would depend on how the pages are isolated, tracked and reported).
> > > > The virtio-balloon interface is the same at this point between my solution
> > > > and Nitesh's. So the only real disagreement in terms of the two solutions
> > > > is about keeping the bit in the page and the list manipulation versus the
> > > > external bitmap and the hunt and peck approach.
> > > >
> > > This is good news because it means that when/if Nitesh's approach is ready
> > > that the optimisations can be reverted and the new approach applied and
> > > give a like-for-like comparison if appropriate. The core feature and interface
> > > to userspace would remain the same and stay available regardless of how
> > > it's optimised. Maybe it's the weekend talking but I think structuring
> > > the series like that will allow forward progress to be made.
> >
> > So quick question.
> >
> > Any issue with me manipulating the lists like you do with the compaction
> > code? I ask because most of the overhead I was encountering was likely due
> > to walking the list so many times.
>
> That doesn't surprise me because it was necessary for the fast isolation
> in compaction to reduce the overhead when compaction was running at
> high frequency.
>
> > If I do the split/splice style logic
> > that should reduce the total number of trips through the free lists since
> > I could push the reported pages to the tail of the list. For now I am
> > working on that as an alternate patch to the existing reported_boundary
> > approach just as an experiment.
>
> I don't have a problem with that although it should be split out and shared
> between compaction and the virtio balloon if possible. The consequences
> are that compaction and the balloon might interfere with each other. That
> would increase the overhead of compaction and the balloon if they both
> were running at the same time. However, given that the balloon will have
> a performance impact anyway, I don't think it's worth worrying about
> because functionally it should be fine.

Actually I may have found a better alternative to achieve the same
result. It looks like there is a function called list_rotate_to_front()
for dealing with issues like this.
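Roughly what I have in mind is something like the sketch below. This is
illustration only, not the actual patch: the helper name is made up, and
the only list APIs involved are list_is_first() and list_rotate_to_front()
from include/linux/list.h.

#include <linux/list.h>
#include <linux/mm.h>
#include <linux/mmzone.h>

/*
 * Sketch only, not the actual patch.  Make sure the page we are about
 * to process sits at the front of its free list.  Caller is assumed to
 * hold zone->lock.
 */
static void move_page_to_list_front(struct page *page, struct zone *zone,
                                    unsigned int order, int migratetype)
{
        struct list_head *list = &zone->free_area[order].free_list[migratetype];

        /*
         * If the page is not already the first entry, pluck the list head
         * out and plant it right in front of the page.  That is all
         * list_rotate_to_front() does internally (a single list_move_tail()
         * of the head), so the page becomes the new front of the free list
         * without another walk through it.
         */
        if (!list_is_first(&page->lru, list))
                list_rotate_to_front(&page->lru, list);
}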
Instead of the complicated logic that was used in the compaction code, this
seems like a better fit since it is pretty simple to use. The only check I
have to add is one to test whether the entry is already first in the list;
if it isn't, we basically pluck the head of the list out and plant it right
in front of the page we want to process on the next iteration.

Results so far seem promising. The performance for the non-shuffle case is
on par with the v13 set, and I am testing the shuffle case now, as I
suspect that one will likely show more of a regression.

Thanks.

- Alex