Re: + mm-introduce-reported-pages.patch added to -mm tree

On 11/7/19 1:02 PM, Alexander Duyck wrote:
> On Thu, 2019-11-07 at 00:33 +0100, David Hildenbrand wrote:
>> On 06.11.19 18:48, Alexander Duyck wrote:
>>> On Wed, 2019-11-06 at 17:54 +0100, Michal Hocko wrote:
>>>> On Wed 06-11-19 08:35:43, Alexander Duyck wrote:
>>>>> On Wed, 2019-11-06 at 15:09 +0100, David Hildenbrand wrote:
>>>>>>> Am 06.11.2019 um 13:16 schrieb Michal Hocko <mhocko@xxxxxxxxxx>:
>>>>>>>
>>>>>>> I didn't have time to read through newer versions of this patch series
>>>>>>> but I remember there were concerns about this functionality being pulled
>>>>>>> into the page allocator previously both by me and Mel [1][2]. Have those been
>>>>>>> addressed? I do not see an ack from Mel or any other MM people. Is there
>>>>>>> really a consensus that we want something like that living in the
>>>>>>> allocator?
>>>>>> I don't think there is. The discussion is still ongoing (although quiet,
>>>>>> Nitesh is working on a new version AFAIK). I think we should not rush
>>>>>> this.
>>>>> How much time is needed to get a review? I waited 2 weeks since posting
>>>>> v12 and the only comments I got on the code were from Andrew. Most of this
>>>>> hasn't changed much since v10 and that was posted back in mid September. I
>>>>> have been down to making small tweaks here and there and haven't had any
>>>>> real critiques on the approach since Mel had the comments about conflicts
>>>>> with compaction which I addressed by allowing compaction to punt the
>>>>> reporter out so that it could split and splice the lists as it walked
>>>>> through them.
>>>> Well, people are busy and MM community is not a large one. I cannot
>>>> really help you much other than keep poking those people and give
>>>> reasonable arguments so they decide to ack your patch.
>>> I get that. But v10 was posted in mid September. Back then we had a
>>> discussion about addressing what Mel had mentioned and I had mentioned
>>> then that I had addressed it by allowing compaction to essentially reset
>>> the reporter to get it out of the list so compaction could do this split
>>> and splice tumbling logic.
>>>
>>>> I definitely do not intend to nack this work; I just have maintainability
>>>> concerns, and considering there is an alternative approach that does not
>>>> require touching page allocator internals, and which we need to compare
>>>> against, I do not really think there is any need to push something
>>>> in right away. Or is there any pressing reason to have this merged right
>>>> now?
>>> The alternative approach doesn't touch the page allocator, however it
>>> still has essentially the same changes to __free_one_page. I suspect the
>> Nitesh is working on Michal's suggestion to use page isolation instead,
>> AFAIK, which avoids this.
> Okay. However, it makes it much harder to discuss when we are comparing
> against code that isn't public. If the design is being redone do we have
> any ETA for when we will have something to actually compare to?

If it had just been the design change, then giving a definite ETA would
have been possible.
However, I also have to fix the performance with (MAX_ORDER - 2), and unlike you,
I need some time to do that.
If I just post the code without fixing the performance, there will again be an
unnecessary discussion about the same thing, which doesn't make any sense.


>
>>> performance issue seen is mostly due to the fact that because it doesn't
>>> touch the page allocator it is taking the zone lock and probing the page
>>> for each set bit to see if the page is still free. As such the performance
>>> regression seen gets worse the lower the order used for reporting.
>>>
>>> Also I suspect Nitesh's patches are also in need of further review. I have
>>> provided feedback however my focus ended up being on more the kernel
>>> panics and 30% performance regression rather than debating architecture.
>> Please don't take this personally, but I really dislike you talking about
>> Nitesh's RFCs (!) and pushing for your approach (although it was you that
>> was late to the party!) in that way. If there are problems then please 
>> collaborate and fix instead of using the same wrong arguments over and 
>> over again.
> Since Nitesh is in the middle of doing a full rewrite anyway I don't have
> much to compare against except for the previous set, which still needs
> fixes. That is why I mentioned in the cover of the last patch set that I
> would prefer to not discuss it since I have no visibility into the patch
> set he is now working on.

Fair point.


>
>> a) hotplug/sparse zones: I explained a couple of times why we can ignore 
>> that. There was never a reply from you, yet you keep coming up with 
>> that. I don't enjoy talking to a wall.
> This gets to the heart of how Nitesh's patch set works. It is assuming
> that every zone is linear, that there will be no overlap between zones,
> and that the zones don't really change. These are key architectural
> assumptions that should really be discussed instead of simply dismissed.

They are not at all dismissed; they are just kept as a future action item.

>
> I guess part of the difference between us is that I am looking for
> something that is production ready and not a proof of concept. It sounds
> like you would prefer this work stays in a proof of concept stage for some
> time longer.

In my opinion, it is more about how many use-cases we want to target
initially.
With your patch-set, I agree we can cover more use-cases where the solution
will fit in.
However, my series might not be suitable for use-cases where
we have memory hotplug or memory restriction. (This will still be the case
after I fix the issues in the series.)

>
>> b) Locking optimizations: Come on, these are premature optimizations and 
>> nothing to dictate your design. *nobody* but you cares about that in an 
>> initial approach we get upstream. We can always optimize that.
> My concern isn't so much the locking as the fact that it is the hunt and
> peck approach through a bitmap that will become increasingly more stale as
> you are processing the data. Every bit you have to test for requires
> taking a zone lock and then probing to see if the page is still free and
> the right size. My concern is how much time is going to be spent with the
> zone lock held while other CPUs are waiting on access.

This can be prevented (at least to an extent) by checking whether the page is in
the buddy before acquiring the lock, as I have suggested previously.

>
>> c) Kernel panics: Come on, we are talking about complicated RFCs here 
>> with moving design decisions. We want to discuss *design* and
>> *architecture* here, not *implementation details*.
> Then why ask me to compare performance against it? You were the one
> pushing for me to test it, not me. If you and Nitesh knew the design
> wasn't complete enough to run it why ask me to test it?
>
> Many of the kernel panics for the patch sets in the past have been related
> to fundamental architectural issues. For example ignoring things like
> NUMA, mangling the free_list by accessing it with the wrong locks held,
> etc.

Obviously we didn't know that earlier; with the tests I had tried, I didn't see
any issues.
Again, I am trying to learn from my mistakes, and I appreciate you helping
me out with that.

>
>> d) Performance: We want to see a design that fits into the whole 
>> architecture cleanly, is maintainable, and provides a benefit. Of 
>> course, performance is relevant, but it certainly should not dictate our 
>> design of a *virtualization specific optimization feature*. Performance 
>> is not everything, otherwise please feel free and rewrite the kernel in 
>> ASM and claim it is better because it is faster.
> I agree performance is not everything. But when a system grinds down to
> 60% of what it was originally I find that significant.

60%? In one of your previous emails you suggested that the drop was 30%.

>
>> Again, I do value your review and feedback, but I absolutely do not 
>> enjoy the way you are trying to push your series here, sorry.
> Well I am a bit frustrated as I have had to provide a significant amount
> of feedback on Nitesh's patches, and in spite of that I feel like I am
> getting nothing in return.

Not sure if I understood the meaning here. May I know what you were expecting?
I do try to review your series and share whatever I can.

>  I have pointed out the various issues and
> opportunities to address the issues. At this point there are sections of
> his code that are directly copied from mine[1]. 

I don't think there is any problem with learning from you or your code.
Is there?
As far as giving proper credit is concerned, that was my mistake, and I intend
to correct it, as I have already mentioned.

> I have done everything I
> can to help the patches along but it seems like they aren't getting out of
> RFC or proof-of-concept state any time soon. 

One reason for that is definitely the issues you pointed out.
But also, since I started working on this project, I have kept getting different
design suggestions, which in my view is excellent. However, while adopting those
changes might be easy for you, I have to take my time to properly understand
them before implementing them.

> So with that being the case
> why not consider his patch set as something that could end up being a
> follow-on/refactor instead of an alternative to mine?
>

I have already mentioned that I would like to see whichever solution is better
and has a consensus (it doesn't matter where it comes from).

>> Yes, if we end up finding out that there is real value in your approach, 
>> nothing speaks against considering it. But please don't try to hurry and 
>> push your series in that way. Please give everybody time to evaluate.
> I would love to argue this patch set on the merits. However I really don't
> feel like I am getting a fair comparison here, at least from you. Every
> other reply on the thread seems to be from you trying to reinforce any
> criticism and taking the opportunity to mention that there is another
> solution out there. It is fine to fight for your own idea, but at least
> let me reply to the criticisms of my own patchset before you pile on. I
> would really prefer to discuss Nitesh's patches on a posting of his patch
> set rather than here.
>
> [1]: https://lore.kernel.org/lkml/101649ae-58d4-76ee-91f3-42ac1c145c46@xxxxxxxxxx/
>
>
-- 
Thanks
Nitesh
