Re: [PATCH RESEND 3/4] acpi, hmat: calculate abstract distance with HMAT

"Huang, Ying" <ying.huang@xxxxxxxxx> writes:

> Alistair Popple <apopple@xxxxxxxxxx> writes:
>
>> Huang Ying <ying.huang@xxxxxxxxx> writes:
>>
>>> A memory tiering abstract distance calculation algorithm based on ACPI
>>> HMAT is implemented.  The basic idea is as follows.
>>>
>>> The performance attributes of the system's default DRAM nodes are
>>> recorded as the baseline, whose abstract distance is
>>> MEMTIER_ADISTANCE_DRAM.  Then, the abstract distance of a memory node
>>> (target) is scaled relative to MEMTIER_ADISTANCE_DRAM based on the
>>> ratio of the node's performance attributes to those of the default
>>> DRAM nodes.
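
If I'm reading the series right, the scaling above boils down to
something like the sketch below.  This is only an illustration:
struct perf_attrs and calc_adistance_sketch() are made-up names
standing in for the HMAT performance attributes and the real helper.

/*
 * Sketch only, not the actual kernel code: abstract distance is taken
 * to be proportional to (read + write) latency and inversely
 * proportional to (read + write) bandwidth, normalised against the
 * default DRAM nodes whose adistance is MEMTIER_ADISTANCE_DRAM.
 */
#include <linux/memory-tiers.h>         /* MEMTIER_ADISTANCE_DRAM */

struct perf_attrs {                     /* stand-in for the HMAT attributes */
        unsigned int read_latency, write_latency;       /* nsec */
        unsigned int read_bandwidth, write_bandwidth;   /* MB/s */
};

static int calc_adistance_sketch(const struct perf_attrs *target,
                                 const struct perf_attrs *dram)
{
        int adist = MEMTIER_ADISTANCE_DRAM;

        /* Higher latency than DRAM => larger abstract distance. */
        adist = adist * (target->read_latency + target->write_latency) /
                (dram->read_latency + dram->write_latency);
        /* Lower bandwidth than DRAM => larger abstract distance. */
        adist = adist * (dram->read_bandwidth + dram->write_bandwidth) /
                (target->read_bandwidth + target->write_bandwidth);

        return adist;
}
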
>>
>> The problem I encountered here with the calculations is that HBM memory
>> ended up in a lower-tiered node, which isn't what I wanted (at least not
>> when that HBM is attached to a GPU, say).
>
> I have tested the series on a server machine with HBM (pure HBM, not
> attached to a GPU).  There, HBM is placed in a higher tier than DRAM.

Good to know.

>> I suspect this is because the calculations are based on the CPU point
>> of view (access1), which still sees lower bandwidth to remote HBM than
>> to local DRAM, even though the remote GPU has higher bandwidth access
>> to that memory.  Perhaps we need to consider access0 as well?
>> I.e. HBM directly attached to a generic initiator should be in a higher
>> tier regardless of CPU access characteristics?
>
> What are your requirements for memory tiers on the machine?  I guess you
> want to put GPU-attached HBM in a higher tier and DRAM in a lower tier,
> so that cold HBM pages can be demoted to DRAM when there is memory
> pressure on HBM?  This sounds reasonable from the GPU point of view.

Yes, that is what I would like to implement.

> The above requirement may be satisfied by calculating the abstract
> distance based on access0 (or combining it with access1).  But I doubt
> this will be a general solution.  I guess that any memory device that is
> used mainly by memory initiators other than CPUs wants to put itself in
> a higher memory tier than DRAM, regardless of its access0.

Right.  I'm still figuring out how ACPI HMAT fits together, but that
sounds reasonable.
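
To make that concrete, the combination I was imagining is roughly the
following, building on the earlier sketch: compute an adistance from
each access class and keep the smaller one, so memory that is fast from
some non-CPU initiator still lands above DRAM even when CPU access to
it is slow.  Again, purely illustrative, not what the series currently
does.

/*
 * Illustrative only: derive an adistance from both HMAT access classes
 * and keep the smaller (better) one, so a node that is fast from some
 * initiator (e.g. a GPU, access class 0) is placed in a higher tier
 * even if CPU access (access class 1) to it is slow.
 */
static int calc_adistance_both_classes(const struct perf_attrs *access0,
                                       const struct perf_attrs *access1,
                                       const struct perf_attrs *dram)
{
        int adist0 = calc_adistance_sketch(access0, dram);
        int adist1 = calc_adistance_sketch(access1, dram);

        return adist0 < adist1 ? adist0 : adist1;
}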

> One solution is to have the GPU device driver always put GPU HBM in the
> highest memory tier (with the smallest abstract distance), regardless of
> its HMAT performance attributes.  Is that possible?

It's certainly possible and easy enough to do, although I think it would
be good to provide upper and lower bounds for HMAT-derived adistances to
make that easier.  It does make me wonder what the point of HMAT is if we
have to ignore it in some scenarios, though.  But perhaps I need to dig
deeper into the GPU values to figure out how it can be applied correctly
there.
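
For what it's worth, the driver-side version I was picturing looks
something like the sketch below.  It assumes the adistance notifier
interface added earlier in this series (register_mt_adistance_algorithm());
node_is_gpu_hbm() and the MEMTIER_ADISTANCE_DRAM / 2 value are made up
for illustration, and I may have the notifier conventions slightly wrong.

#include <linux/memory-tiers.h>
#include <linux/notifier.h>

/* Hypothetical: only the driver knows which nodes back its HBM. */
static bool node_is_gpu_hbm(int nid);

static int gpu_hbm_calc_adistance(struct notifier_block *self,
                                  unsigned long nid, void *data)
{
        int *adist = data;

        if (!node_is_gpu_hbm(nid))
                return NOTIFY_OK;       /* let HMAT/others decide */

        /* Always above DRAM, regardless of the HMAT numbers. */
        *adist = MEMTIER_ADISTANCE_DRAM / 2;

        return NOTIFY_STOP;
}

static struct notifier_block gpu_hbm_adist_nb = {
        .notifier_call = gpu_hbm_calc_adistance,
};

/* In driver init: register_mt_adistance_algorithm(&gpu_hbm_adist_nb); */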

>> That said, I'm not entirely convinced the HMAT tables I'm testing
>> against are accurate/complete.



