On 5/14/21 9:02 AM, Jason Gunthorpe wrote:
> On Thu, May 13, 2021 at 03:31:48PM -0400, Dennis Dalessandro wrote:
>> On 5/13/21 3:15 PM, Jason Gunthorpe wrote:
>>> On Thu, May 13, 2021 at 03:03:43PM -0400, Dennis Dalessandro wrote:
>>>> On 5/12/21 8:50 AM, Leon Romanovsky wrote:
>>>>> On Wed, May 12, 2021 at 12:25:15PM +0000, Marciniszyn, Mike wrote:
>>>>>>>> Thanks Leon, we'll get this put through our testing.
>>>>>>> Thanks a lot.
>>>>>> The patch as is passed all our functional testing.
>>>>> Thanks Mike,
>>>>> Can I ask you to perform a performance comparison between this patch
>>>>> and the following?
>>>> We have years of performance data with the code the way it is. Please
>>>> maintain the original functionality of the code when moving things into
>>>> the core unless there is a compelling reason to change. That is not the
>>>> case here.
>>> Well, making the core do node allocations for metadata on every driver
>>> is a pretty big thing to ask for with no data.
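For context, the difference at issue is between a plain allocation and a NUMA-node-local one. Below is a minimal userspace model of the two patterns; the names (`qp_meta`, `qp_alloc`, `qp_alloc_node`) are illustrative only, not rdmavt API, and `calloc()` stands in for the kernel's `kzalloc()`/`kzalloc_node()`:

```c
/* Hypothetical sketch, not kernel code: model plain vs node-aware
 * allocation by recording the requested node in the object. */
#include <stdlib.h>
#include <assert.h>

struct qp_meta {
	int numa_node;   /* node the memory was requested from (-1 = any) */
	size_t num_sge;
	void *sge_table; /* large, hot array: the part hfi1 wants node-local */
};

/* Core-style allocation: no node awareness, like plain kzalloc(). */
static struct qp_meta *qp_alloc(size_t num_sge)
{
	struct qp_meta *qp = calloc(1, sizeof(*qp));

	if (!qp)
		return NULL;
	qp->numa_node = -1; /* "any node" */
	qp->num_sge = num_sge;
	qp->sge_table = calloc(num_sge, 64);
	if (!qp->sge_table) {
		free(qp);
		return NULL;
	}
	return qp;
}

/* Driver-style allocation: caller pins the memory to a NUMA node, the way
 * rdmavt uses kzalloc_node(size, GFP_KERNEL, rdi->dparms.node). Here the
 * node is only recorded, as a stand-in for a real node-bound allocation. */
static struct qp_meta *qp_alloc_node(size_t num_sge, int node)
{
	struct qp_meta *qp = qp_alloc(num_sge);

	if (qp)
		qp->numa_node = node;
	return qp;
}
```

The dispute is over which of these two the shared core code should perform on the driver's behalf.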
>> Can't you just make the call into the core take a flag for this? You are
>> looking to make a change to key behavior without any clear reason that I
>> can see for why it needs to be that way. If there is a good reason,
>> please explain so we can understand.
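Dennis's "take a flag" suggestion could look something like this minimal userspace sketch, where one core entry point selects between default and node-bound behavior based on a caller-supplied parameter. The structs and `core_obj_alloc()` are hypothetical illustrations, not the actual RDMA core interface:

```c
/* Hypothetical sketch: a core allocator that takes an optional node id so
 * a driver can opt in to node-local allocation while others keep the
 * default. In the kernel the implementation would pick kzalloc() vs
 * kzalloc_node() internally; here calloc() stands in for both. */
#include <stdlib.h>
#include <assert.h>

#define ALLOC_ANY_NODE (-1)

struct core_alloc_params {
	size_t size; /* total object size the driver needs */
	int node;    /* ALLOC_ANY_NODE unless the driver asks otherwise */
};

struct obj {
	int node; /* where the caller asked the memory to live */
	size_t size;
};

static struct obj *core_obj_alloc(const struct core_alloc_params *p)
{
	/* Never allocate less than the core's own header. */
	size_t sz = p->size > sizeof(struct obj) ? p->size
						 : sizeof(struct obj);
	struct obj *o = calloc(1, sz);

	if (!o)
		return NULL;
	o->node = p->node;
	o->size = sz;
	return o;
}
```

A non-node-aware driver would simply pass `ALLOC_ANY_NODE`, keeping today's behavior for everyone else.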
>>> The lifetime model of all this data is messed up, there are a bunch of
>>> little bugs on the error paths, and we can't have a proper refcount
>>> lifetime model when this code really wants to have it.
>>> IMHO if hfi1 has a performance need here it should chain a
>>> sub-allocation, since promoting node awareness to the core code does
>>> not look nice.
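Jason's "chain a sub-allocation" idea, sketched in userspace C: the core allocates the base object with no node awareness, and the driver afterwards allocates its performance-critical data node-locally and hangs it off the base object. All names here are hypothetical, and `calloc()` again stands in for `kzalloc()`/`kzalloc_node()`:

```c
/* Hypothetical sketch: core owns the base object, driver chains its own
 * node-local sub-allocation through a private pointer. */
#include <stdlib.h>
#include <assert.h>

struct base_qp {
	void *drv_priv; /* driver-owned chained allocation */
};

struct drv_priv {
	int node;        /* node the driver bound its data to */
	void *sge_table; /* the hot, node-sensitive array */
};

/* Core side: plain allocation, no node parameter needed. */
static struct base_qp *core_qp_alloc(void)
{
	return calloc(1, sizeof(struct base_qp));
}

/* Driver side: chain a node-local sub-allocation (stand-in for
 * kzalloc_node() in the real driver). Returns 0 on success. */
static int drv_attach_priv(struct base_qp *qp, int node, size_t sge_bytes)
{
	struct drv_priv *p = calloc(1, sizeof(*p));

	if (!p)
		return -1;
	p->node = node;
	p->sge_table = calloc(1, sge_bytes);
	if (!p->sge_table) {
		free(p);
		return -1;
	}
	qp->drv_priv = p;
	return 0;
}
```

Under this split, only the driver that cares about NUMA placement pays for it, and the core allocator stays node-agnostic.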
>> That's part of what I want to understand. Why is it "not nice"? Is it
>> because there is only one driver that needs it, or something else?
>> As far as chaining a sub-allocation, I'm not sure I follow. Isn't that
>> kinda what Leon is doing here? Or will do, in other words: move the qp
>> allocation to the core and leave the SGE allocation in the driver, per
>> node. I can't say with any certainty one way or the other whether this
>> is OK. I just know it would really suck to end up with a performance
>> regression for something that was easily avoided by not changing the
>> code behavior. A regression in code that has been this way since day 1
>> would be really bad. I'd just really rather not take that chance.
> These are not supposed to be performance sensitive data structures,
> they haven't even been organized for cache locality or anything.
>> I would think the person authoring the patch should be responsible to
>> prove their patch doesn't cause a regression.
> I'm more interested in this argument as it applies to functional
> regressions. Performance is always shifting around, and a win for a
> node-specific allocation seems highly situational to me. I half wonder
> if all the node allocation in this driver is just some copy and
> paste.
I think "prove" is too strong of a word. I should have said: do what is
reasonably necessary to ensure the patch doesn't cause a regression,
whether that's running their own tests, taking the advice of the folks
who wrote the initial code, or getting other non-biased review opinions,
etc. I certainly don't expect Leon to throw some HFIs in a machine and do
a performance evaluation.
I think this is the exact opposite of copy/paste. When we wrote this code
originally, a ton of work went into how data structures were aligned and
organized, as well as into examining allocations, and per-node allocations
were found to be important. If you look at the original qib code in v4.5,
before we did rdmavt, the allocation was not per node. We specifically
changed that in v4.6 when we put in rdmavt. In v4.3, when hfi1 went into
staging, it was not using the per-node variant either (because that was
copy/paste).
I would love to be able to go back in our code reviews and bug tracking
and tell you exactly why this line of code was changed to be per node.
Unfortunately, that level of information was not passed on to Cornelis.
-Denny