Re: [RFC PATCH 0/4] CXL Hotness Monitoring Unit perf driver

On Thu, 21 Nov 2024 10:18:41 +0000
Jonathan Cameron <Jonathan.Cameron@xxxxxxxxxx> wrote:

> The CXL specification release 3.2 is now available under a click-through at
> https://computeexpresslink.org/cxl-specification/ and it brings new
> shiny toys.

If anyone wants to play, basic emulation is available on my CXL QEMU staging tree:
https://gitlab.com/jic23/qemu/-/commit/e89b35d264c1bcc04807e7afab1254f35ffc8cb9

Branch with a few other things on top is:
https://gitlab.com/jic23/qemu/-/commits/cxl-2024-11-27

Note that this currently doesn't produce real data.  I have a plan
/ initial PoC / hack to hook that up via an addition to the QEMU cache
plugin and an external tool to emulate the hotness tracker counting
hardware. It will be a little while before I get that finished, so in
the meantime the above exercises the driver.

Jonathan
 
> 
> RFC reason
> - Whilst trace capture with a particular configuration is potentially useful
>   in its own right, the intent is that CXL HMU units will be used to drive
>   various forms of hot-page migration for memory tiering setups. This driver
>   doesn't do that (yet), but rather provides data capture etc. for
>   experimentation and for working out how to put most allocations in the
>   right place to start with by tuning applications.
> 
> CXL r3.2 introduces a CXL Hotness Monitoring Unit definition. The intent
> of this is to provide a way to establish which units of memory (typically
> pages or larger) in CXL attached memory are hot. The tracking algorithm and
> its details are implementation defined. The specification simply describes
> the 'interface', which takes the form of a ring buffer of hotness records
> in a PCI BAR plus defined capability, configuration and status registers.
> 
> The hardware may have constraints on what it can track (granularity etc.)
> and on how accurately it tracks (e.g. counter exhaustion, inaccurate
> trackers). Some of these constraints are discoverable from the hardware
> registers; others, such as loss of accuracy, have no universally accepted
> measures as they are typically access-pattern dependent. Sadly it is
> very unlikely any hardware will implement a truly precise tracker given
> the large resource requirements for tracking at a useful granularity.
> 
> There are two fundamental operation modes:
> 
> * Epoch based. Counters are checked after a period of time (Epoch) and
>   if over a threshold added to the hotlist.
> * Always on. Counters run until a threshold is reached, after that the
>   hot unit is added to the hotlist and the counter released.
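
(Purely to make the difference between those two modes concrete, here is a
rough compile-and-run C sketch of the counting logic as described above.
It is not how the hardware or this driver does it, and every name in it is
made up.)

/*
 * Illustrative only: the two counting modes, expressed as pseudo-hardware
 * logic.  All names here are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

struct hotness_counter {
	uint64_t unit;		/* unit of memory being tracked */
	uint32_t count;		/* accesses counted so far */
};

static void hotlist_add(uint64_t unit, uint32_t count)
{
	printf("unit %#llx hot (count %u)\n", (unsigned long long)unit, count);
}

static void counter_release(struct hotness_counter *c)
{
	c->count = 0;		/* counter free for another unit in this sketch */
}

/* Epoch based: counters are only checked when the epoch timer expires. */
static void epoch_expired(struct hotness_counter *c, uint32_t threshold)
{
	if (c->count >= threshold)
		hotlist_add(c->unit, c->count);
	c->count = 0;
}

/* Always on: checked on every counted access; counter freed once hot. */
static void access_counted(struct hotness_counter *c, uint32_t threshold)
{
	if (++c->count >= threshold) {
		hotlist_add(c->unit, c->count);
		counter_release(c);
	}
}

int main(void)
{
	struct hotness_counter a = { .unit = 0x3, .count = 2048 };
	struct hotness_counter b = { .unit = 0x7, .count = 1023 };

	epoch_expired(&a, 1024);	/* epoch mode: over threshold, reported */
	access_counted(&b, 1024);	/* always on: hits threshold, reported */
	return 0;
}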
> 
> Counting can be filtered on:
> 
> * Region of CXL DPA space (256MiB per bit in a bitmap).
> * Type of access - trusted and non-trusted, or non-trusted only; R/W/RW
> 
> Sampling can be modified by:
> 
> * Downsampling including potentially randomized downsampling.
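
(On the DPA range filter above: 256MiB per bit means working out which
bitmap bits cover a given DPA range is simple arithmetic.  A minimal
sketch assuming only that granularity and nothing about the actual
register layout; the helper name is my own.)

#include <stdint.h>
#include <stdio.h>

#define HMU_RANGE_GRANULE	(256ULL << 20)	/* 256MiB of DPA per bitmap bit */

/* Hypothetical helper: report which filter bits cover [base, base + len). */
static void hmu_range_bits(uint64_t base, uint64_t len)
{
	uint64_t first = base / HMU_RANGE_GRANULE;
	uint64_t last = (base + len - 1) / HMU_RANGE_GRANULE;

	printf("DPA %#llx..%#llx -> bits %llu..%llu\n",
	       (unsigned long long)base, (unsigned long long)(base + len - 1),
	       (unsigned long long)first, (unsigned long long)last);
}

int main(void)
{
	hmu_range_bits(0, 1ULL << 30);		/* 1GiB from DPA 0: bits 0..3 */
	hmu_range_bits(3ULL << 30, 1ULL << 29);	/* 512MiB at 3GiB: bits 12..13 */
	return 0;
}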
> 
> The driver presented here is intended to be useful in its own right but
> also to act as the first step of a possible path towards hotness-monitoring
> based hot-page migration. Those steps might look like:
> 
> 1. Gather data - drivers provide telemetry-like solutions to get that
>    data. This may be enhanced, for example in this driver by providing the
>    HPA address rather than the DPA Unit Address, though userspace can
>    access enough information to do that translation itself, so maybe it is
>    not needed.
> 2. Userspace algorithm development, possibly combined with
>    userspace-triggered migration by PA. Working out how to use different
>    levels of constrained hardware resources will be challenging.
> 3. Move those algorithms into the kernel. This will require generalization
>    across different hot-page trackers etc.
> 
> So far this driver just gives access to the raw data. I will probably kick
> off a longer discussion on how to do the adaptive sampling needed to
> actually use these units for tiering etc. sometime soon (if no one else
> beats me to it).  There is a follow-up topic of how to virtualize this
> stuff for memory stranding cases (a VM gets a fixed mixture of fast and
> slow memory and should do its own tiering).
> 
> More details in the Documentation patch but typical commands are:
> 
> $perf record -a  -e cxl_hmu_mem0.0.0/epoch_type=0,access_type=6,\
>  hotness_threshold=1024,epoch_multiplier=4,epoch_scale=4,range_base=0,\
>  range_size=1024,randomized_downsampling=0,downsampling_factor=32,\
>  hotness_granual=12
> 
> $perf report --dump-raw-traces
> 
> Example output.  With a counter_width of 16 (0x10) the least significant
> 16 bits (2 bytes) are the counter value and the unit index is bits 16-63.
> Here all units are over the threshold and the indexes are 0, 1, 2, etc.
> 
> . ... CXL_HMU data: size 33512 bytes
> Header 0: units: 29c counter_width 10
> Header 1 : deadbeef
> 0000000000000283
> 0000000000010364
> 0000000000020366
> 000000000003033c
> 0000000000040343
> 00000000000502ff
> 000000000006030d
> 000000000007031a
> 
> Each of these records is one hotness entry:
> Bits[N-1:0]: counter value
> Bits[63:N]: Unit ID (combine with the unit size and DPA base + HDM decoder
>   config to get to a Host Physical Address)
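
(For anyone decoding the dump by hand: splitting a raw record into those
two fields is just a mask and shift.  Minimal sketch, taking N from the
header and assuming N < 64; names are my own.)

#include <stdint.h>
#include <stdio.h>

/* Split a raw 64-bit hotness record given counter width N from the header. */
static void decode_record(uint64_t rec, unsigned int n)
{
	uint64_t counter = rec & ((1ULL << n) - 1);	/* Bits[N-1:0] */
	uint64_t unit = rec >> n;			/* Bits[63:N] */

	printf("unit %#llx counter %#llx\n",
	       (unsigned long long)unit, (unsigned long long)counter);
}

int main(void)
{
	/* Two records from the example dump above, counter_width 0x10 (16). */
	decode_record(0x0000000000010364ULL, 16);	/* unit 0x1, counter 0x364 */
	decode_record(0x000000000007031aULL, 16);	/* unit 0x7, counter 0x31a */
	return 0;
}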
> 
> Specific RFC questions:
> - What should be in the header added to the AUX buffer?
>   Currently just the minimum is provided: the number of records
>   and the counter width needed to decode them.
> - Should we reset the counters when doing sampling with "-F X"?
>   If the sampling frequency is higher than the epoch rate we never see
>   any hot units. If so, when should we reset them?
> 
> Note testing has been light and on emulation only. As the perf tool is
> a pain to build on a stripped-back VM, build testing has all been on
> arm64 so far.  The driver loads on both arm64 and x86 though, so
> any problems are likely in the perf tool arch-specific code,
> which has only been build tested (on the wrong machine).
> 
> The QEMU emulation needs some cleanup, but I should be able to post
> that shortly to let people actually play with this.  There are lots
> of open questions there on how 'right' we want the emulation to be
> and what counting uarch to emulate.
> 
> Jonathan Cameron (4):
>   cxl: Register devices for CXL Hotness Monitoring Units (CHMU)
>   cxl: Hotness Monitoring Unit via a Perf AUX Buffer.
>   perf: Add support for CXL Hotness Monitoring Units (CHMU)
>   hwtrace: Document CXL Hotness Monitoring Unit driver
> 
>  Documentation/trace/cxl-hmu.rst     | 197 +++++++
>  Documentation/trace/index.rst       |   1 +
>  drivers/cxl/Kconfig                 |   6 +
>  drivers/cxl/Makefile                |   3 +
>  drivers/cxl/core/Makefile           |   1 +
>  drivers/cxl/core/core.h             |   1 +
>  drivers/cxl/core/hmu.c              |  64 ++
>  drivers/cxl/core/port.c             |   2 +
>  drivers/cxl/core/regs.c             |  14 +
>  drivers/cxl/cxl.h                   |   5 +
>  drivers/cxl/cxlpci.h                |   1 +
>  drivers/cxl/hmu.c                   | 880 ++++++++++++++++++++++++++++
>  drivers/cxl/hmu.h                   |  23 +
>  drivers/cxl/pci.c                   |  26 +-
>  tools/perf/arch/arm/util/auxtrace.c |  58 ++
>  tools/perf/arch/x86/util/auxtrace.c |  76 +++
>  tools/perf/util/Build               |   1 +
>  tools/perf/util/auxtrace.c          |   4 +
>  tools/perf/util/auxtrace.h          |   1 +
>  tools/perf/util/cxl-hmu.c           | 367 ++++++++++++
>  tools/perf/util/cxl-hmu.h           |  18 +
>  21 files changed, 1748 insertions(+), 1 deletion(-)
>  create mode 100644 Documentation/trace/cxl-hmu.rst
>  create mode 100644 drivers/cxl/core/hmu.c
>  create mode 100644 drivers/cxl/hmu.c
>  create mode 100644 drivers/cxl/hmu.h
>  create mode 100644 tools/perf/util/cxl-hmu.c
>  create mode 100644 tools/perf/util/cxl-hmu.h
> 




