Re: + mm-introduce-reported-pages.patch added to -mm tree


 



On Thu, 2019-11-07 at 14:37 -0500, Nitesh Narayan Lal wrote:
> On 11/7/19 1:02 PM, Alexander Duyck wrote:
> > On Thu, 2019-11-07 at 00:33 +0100, David Hildenbrand wrote:
> > > On 06.11.19 18:48, Alexander Duyck wrote:
> > > > On Wed, 2019-11-06 at 17:54 +0100, Michal Hocko wrote:
> > > > > On Wed 06-11-19 08:35:43, Alexander Duyck wrote:
> > > > > > On Wed, 2019-11-06 at 15:09 +0100, David Hildenbrand wrote:
> > > > > > > > On 06.11.2019 at 13:16, Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> > > > > > > > 
> > > > > > > > I didn't have time to read through newer versions of this patch series
> > > > > > > > but I remember there were concerns about this functionality being pulled
> > > > > > > > into the page allocator previously both by me and Mel [1][2]. Have those been
> > > > > > > > addressed? I do not see an ack from Mel or any other MM people. Is there
> > > > > > > > really a consensus that we want something like that living in the
> > > > > > > > allocator?
> > > > > > > I don't think there is. The discussion is still ongoing (although quiet,
> > > > > > > Nitesh is working on a new version AFAIK). I think we should not rush
> > > > > > > this.
> > > > > > How much time is needed to get a review? I waited 2 weeks since posting
> > > > > > v12 and the only comments I got on the code were from Andrew. Most of this
> > > > > > hasn't changed much since v10 and that was posted back in mid September. I
> > > > > > have been down to making small tweaks here and there and haven't had any
> > > > > > real critiques on the approach since Mel had the comments about conflicts
> > > > > > with compaction which I addressed by allowing compaction to punt the
> > > > > > reporter out so that it could split and splice the lists as it walked
> > > > > > through them.
> > > > > Well, people are busy and MM community is not a large one. I cannot
> > > > > really help you much other than keep poking those people and give
> > > > > reasonable arguments so they decide to ack your patch.
> > > > I get that. But v10 was posted in mid September. Back then we had a
> > > > discussion about addressing what Mel had mentioned and I had mentioned
> > > > then that I had addressed it by allowing compaction to essentially reset
> > > > the reporter to get it out of the list so compaction could do this split
> > > > and splice tumbling logic.
> > > > 
> > > > > I definitely do not intend to nack this work, I just have maintainability
> > > > > concerns and considering there is an alternative approach that does not
> > > > > require to touch page allocator internals and which we need to compare
> > > > > against then I do not really think there is any need to push something
> > > > > in right away. Or is there any pressing reason to have this merged right
> > > > > now?
> > > > The alternative approach doesn't touch the page allocator, however it
> > > > still has essentially the same changes to __free_one_page. I suspect the
> > > Nitesh is working on Michal's suggestion to use page isolation instead 
> > > AFAIK - which avoids this.
> > Okay. However it makes it much harder to discuss when we are comparing
> > against code that isn't public. If the design is being redone do we have
> > any ETA for when we will have something to actually compare to?
> 
> If it had been just the design change, then giving a definite ETA would have
> been possible.
> However, I also have to fix the performance with (MAX_ORDER - 2). Unlike you,
> I need some time to do that.
> If I just post the code without fixing the performance there will again be an
> unnecessary discussion about the same thing which doesn't make any sense.

Understood. However, with that being the case, I don't think it is really
fair to say that my patch set needs to wait on yours to be completed
before we can review it and decide if it is acceptable for upstream.

With that said I am working on a v14 to address Mel's review feedback. I
will hopefully have that completed and tested by early next week.

> > > > performance issue seen is mostly due to the fact that because it doesn't
> > > > touch the page allocator it is taking the zone lock and probing the page
> > > > for each set bit to see if the page is still free. As such the performance
> > > > regression seen gets worse the lower the order used for reporting.
> > > > 
> > > > Also I suspect Nitesh's patches are also in need of further review. I have
> > > > provided feedback however my focus ended up being on more the kernel
> > > > panics and 30% performance regression rather than debating architecture.
> > > Please don't take this personally, but I really dislike you talking about 
> > > Nitesh's RFCs (!) and pushing for your approach (although it was you that 
> > > was late to the party!) in that way. If there are problems then please 
> > > collaborate and fix instead of using the same wrong arguments over and 
> > > over again.
> > Since Nitesh is in the middle of doing a full rewrite anyway I don't have
> > much to compare against except for the previous set, which still needs
> > fixes.  It is why I mentioned in the cover of the last patch set that I
> > would prefer to not discuss it since I have no visibility into the patch
> > set he is now working on.
> 
> Fair point.
> 
> 
> > > a) hotplug/sparse zones: I explained a couple of times why we can ignore 
> > > that. There was never a reply from you, yet you keep coming up with 
> > > that. I don't enjoy talking to a wall.
> > This gets to the heart of how Nitesh's patch set works. It is assuming
> > that every zone is linear, that there will be no overlap between zones,
> > and that the zones don't really change. These are key architectural
> > assumptions that should really be discussed instead of simply dismissed.
> 
> They are not at all dismissed, they are just kept as a future action item.

Yes, but when we are asked to compare the two solutions, pointing out the
areas that are not complete is a valid point. David complained about
talking to a wall, but what does he expect when he is comparing an RFC
against something that is ready for acceptance? He is going to get a list
of things that still need to be completed and other issues that were
identified with the patch set.



> > I guess part of the difference between us is that I am looking for
> > something that is production ready and not a proof of concept. It sounds
> > like you would prefer this work stays in a proof of concept stage for some
> > time longer.
> 
> In my opinion, it is more about how many use-cases we want to target
> initially.
> With your patch-set, I agree we can cover more use-cases where the solution
> will fit in.
> However, my series might not be suitable for use-cases where
> we have memory hotplug or memory restriction. (This will be the case
> after I fix the issues in the series)

This is the difference between a proof of concept and production code in
my opinion. I have had to go through and cover all the corner cases, make
sure this compiles on architectures other than x86, and try to validate as
much as possible against possible regressions. As such I may have had to
make performance compromises here and there in order to make sure all
those cases are covered.

> > > b) Locking optimizations: Come on, these are premature optimizations and 
> > > nothing to dictate your design. *nobody* but you cares about that in an 
> > > initial approach we get upstream. We can always optimize that.
> > My concern isn't so much the locking as the fact that it is the hunt and
> > peck approach through a bitmap that will become increasingly more stale as
> > you are processing the data. Every bit you have to test for requires
> > taking a zone lock and then probing to see if the page is still free and
> > the right size. My concern is how much time is going to be spent with the
> > zone lock held while other CPUs are waiting on access.
> 
> This can be prevented (at least to an extent) by checking if the page is in
> the buddy before acquiring the lock as I have suggested previously.

Agreed, that should help to reduce the pain somewhat. However, it still
isn't an exact science, since you may find that the page state changes
before you can acquire the zone lock.
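
Roughly, the pattern we are talking about looks something like the sketch
below (just an illustration on my part, not your actual code; the
report_bitmap/nbits/report_order names and the one-bit-per-2^order-block
layout are made up for the example, and I'm leaning on page_private()
holding the buddy order the way the allocator stores it). The lockless
PageBuddy() check filters out most of the stale bits cheaply, but the
state still has to be re-verified once zone->lock is actually held:

#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/bitops.h>
#include <linux/spinlock.h>

static void scan_and_report(struct zone *zone, unsigned long *report_bitmap,
                            unsigned long nbits, unsigned int report_order)
{
        unsigned long bit, flags;

        for_each_set_bit(bit, report_bitmap, nbits) {
                /* one bit per 2^report_order block, relative to zone start */
                unsigned long pfn = zone->zone_start_pfn +
                                    (bit << report_order);
                struct page *page = pfn_to_page(pfn);

                /* lockless precheck: skip bits that are clearly stale */
                if (!PageBuddy(page))
                        continue;

                /*
                 * The page can be allocated, or merged to a different
                 * order, between the check above and taking zone->lock,
                 * so it has to be rechecked with the lock held.
                 */
                spin_lock_irqsave(&zone->lock, flags);
                if (PageBuddy(page) && page_private(page) >= report_order) {
                        /* ... pull it off the free list and report it ... */
                }
                spin_unlock_irqrestore(&zone->lock, flags);
        }
}

Even with the precheck, every bit that survives it still costs a zone->lock
round trip, which is where I expect the time to go as the reporting order
gets lower.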

> > > c) Kernel panics: Come on, we are talking about complicated RFCs here 
> > > with moving design decisions. We want to discuss *design* and 
> > > *architecture* here, not *implementation details*.
> > Then why ask me to compare performance against it? You were the one
> > pushing for me to test it, not me. If you and Nitesh knew the design
> > wasn't complete enough to run it why ask me to test it?
> > 
> > Many of the kernel panics for the patch sets in the past have been related
> > to fundamental architectural issues. For example ignoring things like
> > NUMA, mangling the free_list by accessing it with the wrong locks held,
> > etc.
> 
> Obviously we didn't know it earlier; with whatever tests I had tried, I didn't
> see any issues with them.
> Again, I am trying to learn from my mistakes and appreciate you helping
> me out with that.

I understand that. However, I have been burned a few times by this now, so
I feel it is valid to point out that there have been ongoing issues such as
this when doing a comparison, especially when the complaint is that my
approach is "fragile".

> > > d) Performance: We want to see a design that fits into the whole 
> > > architecture cleanly, is maintainable, and provides a benefit. Of 
> > > course, performance is relevant, but it certainly should not dictate our 
> > > design of a *virtualization specific optimization feature*. Performance 
> > > is not everything, otherwise please feel free and rewrite the kernel in 
> > > ASM and claim it is better because it is faster.
> > I agree performance is not everything. But when a system grinds down to
> > 60% of what it was originally I find that significant.
> 
> 60%? In one of your previous emails you suggested that the drop was 30%.

I was rounding down in both cases, basically by just dropping the last
digit to 0. As I recall the drop I was seeing was something in the mid 30s,
so previously I was being a bit more generous and quoted it as a 30% drop.
This time I was feeling a bit irritable, so I flipped things and rounded
down the percentage of performance retained instead, which comes out to
60%.

> > > Again, I do value your review and feedback, but I absolutely do not 
> > > enjoy the way you are trying to push your series here, sorry.
> > Well I am a bit frustrated as I have had to provide a significant amount
> > of feedback on Nitesh's patches, and in spite of that I feel like I am
> > getting nothing in return.
> 
> Not sure if I understood the meaning here. May I know what you were expecting?
> I do try to review your series and share whatever I can.

So for all the review feedback I have provided, I feel like I haven't
gotten much back. I have had several iterations of the patch set that got
almost no replies. Then, on top of that, instead of getting suggestions on
how to improve my patch set, what ends up happening at some point is that
your patch set gets brought up and muddies the waters, since we start
discussing the issues with it instead of addressing the issues in mine.

> >  I have pointed out the various issues and
> > opportunities to address the issues. At this point there are sections of
> > his code that are directly copied from mine[1]. 
> 
> I don't think there is any problem with learning from you or your code.
> Is there?

I don't have any problems with you learning from my code, and the fact is
it shows we are making progress since at this point the virtio-balloon
bits are now essentially identical. The point that I was trying to make is
that I have been contributing to the progress that has been made.

> As far as giving proper credit is concerned, that was my mistake and I intend
> to correct it as I have already mentioned.

I appreciate that.

> > I have done everything I
> > can to help the patches along but it seems like they aren't getting out of
> > RFC or proof-of-concept state any time soon. 
> 
> So one reason for that is definitely the issues you pointed out.
> But also, since I started working on this project, I kept getting different
> design suggestions, which according to me is excellent. However, adopting them
> and making those changes could be easy for you, but I have to take my time to
> properly understand them before implementing them.

Understood. In my case I am suffering from the opposite problem: I am
taking the input and iterating pretty quickly at this point, but I haven't
been getting much feedback. Hence my frustration when the patches get
applied and people then start wanting them reverted until they can be
tested against your patch set.

> > So with that being the case
> > why not consider his patch set as something that could end up being a
> > follow-on/refactor instead of an alternative to mine?
> > 
> 
> I have already mentioned that I would like to see the solution which is better
> and has a consensus (it doesn't matter where it is coming from).

My primary focus is getting a solution in place. I'm not sure if
management has changed much at Red Hat since I was there, but usually there
is a push to get things completed, is there not? In my case I would like to
switch from development to maintenance for memory hinting, and if you end
up with a better approach I would be more than open to refactoring later
and/or throwing out my code and replacing it with yours.

There is a saying: "Perfection is the enemy of progress." My concern is
that we are spending so much time trying to find the perfect solution that
we are never going to come up with one. After all, the concept has been
around since 2011 and we are still debating how to go about implementing
it.





