[PATCH v3 00/19] arm64 kernel text replication

From: root <root@xxxxxxxxxxxxxxxxxxxxxx>

Many thanks to Russell King for his previous work on
arm64 kernel text replication.
https://lore.kernel.org/all/ZMKNYEkM7YnrDtOt@xxxxxxxxxxxxxxxxxxxxx

After applying these patches, we measured a performance increase of more
than 5% in our business workload, and NUMA node memory bandwidth became
more balanced.
I've recently been trying to make this work with different page sizes
and numbers of page table levels, so I've updated this patch set to v3.

Patch overview:

Patches 1-16 are Russell King's previous arm64 kernel text replication
series, rebased on commit 052d534373b7.

The following three patches are new in v3:
patch 17 fixes a compilation warning

patch 18 adapts arm64 kernel text replication to support more page
sizes and page table levels, in addition to 16K pages with 4-level
page tables.

patch 19 fixes an abnormal startup problem caused by module_alloc(),
which may allocate an address above KIMAGE_VADDR when kernel text
replication is enabled.

[v2] https://lore.kernel.org/all/ZMKNYEkM7YnrDtOt@xxxxxxxxxxxxxxxxxxxxx
[RFC] https://lore.kernel.org/all/ZHYCUVa8fzmB4XZV@xxxxxxxxxxxxxxxxxxxxx

Please correct me if I've made a mistake, thank you very much!

Original message below.

Problem
-------

NUMA systems have greater latency when accessing data and instructions
across nodes, which can lead to a reduction in performance on CPU cores
that mainly perform accesses beyond their local node.

Normally when an ARM64 system boots, the kernel will end up placed in
memory, and each CPU core will have to fetch instructions and data from
whichever NUMA node the kernel has been placed on. This means that while
executing kernel code, CPUs local to that node will run faster than
CPUs in remote nodes.

The higher the latency to access remote NUMA node memory, the more the
kernel performance suffers on those nodes.

If there is a local copy of the kernel text in each node's RAM, and
each node runs the kernel using its local copy of the kernel text,
then it stands to reason that the kernel will run faster due to fewer
stalls while instructions are fetched from remote memory.

The question then arises how to achieve this.

Background
----------

An important issue to contend with is what happens when a thread
migrates between nodes. Essentially, the thread's state (including
instruction pointer) is saved to memory, and the scheduler on that CPU
loads some other thread's state and that CPU resumes executing that
new thread.

The CPU gaining the migrating thread loads the saved state, again
including the instruction pointer, and the gaining CPU resumes fetching
instructions at the virtual address where the original CPU left off.

The key point is that the virtual address is what matters here, and
this gives us a way to implement kernel text replication fairly easily.
At a practical level, all we need to do is ensure that the virtual
addresses which contain the kernel text point to a local copy of
that text.

This is exactly how this proposal of kernel text replication achieves
the replication. We can go a little bit further and include most of
the read-only data in this replication, as that will never be written
to by the kernel (and thus remains constant).

Solution
--------

So, what we need to achieve is:

1. multiple identical copies of the kernel text (and read-only data)
2. point the virtual mappings to the appropriate copy of kernel text
   for the NUMA node.

(1) is fairly easy to achieve - we just need to allocate some memory
in the appropriate node and copy the parts of the kernel we want to
replicate. However, we also need to deal with ARM64's kernel text
patching. There are two functions that patch the kernel text,
__apply_alternatives() and aarch64_insn_patch_text_nosync(), and both
need to be modified to update all copies of the kernel text.

(2) is slightly harder.

Firstly, the aarch64 architecture has a very useful feature here - the
kernel page tables are entirely separate from the user page tables.
The hardware contains two page table pointers, one is used for user
mappings, the other is used for kernel mappings.

Therefore, we only have one page table to be concerned with: the table
which maps kernel space. We do not need to be concerned with each
user process's page tables.

The approach taken here is to ensure that the kernel is located in an
area of kernel virtual address space covered by a level-0 page table
entry which is not shared with any other user. We can then maintain
separate per-node level-0 page tables for kernel space where the only
difference between them is this level-0 page table entry.

This gives a couple of benefits. Firstly, when updates to the level-0
page table happen (e.g. when establishing new mappings) these updates
can simply be copied to the other level-0 page tables provided it isn't
for the kernel image. Secondly, we don't need complexity at lower
levels of the page table code to figure out whether a level-1 or lower
update needs to be propagated to other nodes.

The level-0 page table entry for the kernel can then be used to point
at a node-unique set of level 1..N page tables to make the appropriate
copy of the kernel text (and read-only data) into kernel space, while
keeping the kernel read-write data shared between nodes.

Performance Analysis
--------------------

Needless to say, the performance results from kernel text replication
are workload specific, but appear to show a gain of between 6% and
17% for database-centric workloads. When combined with userspace
awareness of NUMA, this can result in a gain of over 50%.

Problems
--------

There are a few areas that are a problem for kernel text replication:
1) As this series changes the kernel virtual address space layout,
   it breaks KASAN - and I've zero knowledge of KASAN, so I have
   no idea how to fix it. I would be grateful for input from
   KASAN folk with suggestions on how to fix this.

2) KASLR can not be used with kernel text replication, since we need
   to place the kernel in its own L0 page table entry, not in vmalloc
   space. KASLR is disabled when support for kernel text replication
   is enabled.

3) Changing the kernel virtual address space layout also means that
   kaslr_offset() and kaslr_enabled() need to become macros rather
   than inline functions, due to the use of PGDIR_SIZE in the
   calculation of KIMAGE_VADDR. asm/pgtable.h defines this constant,
   but asm/memory.h is included by asm/pgtable.h, so making the
   symbol available there would create a circular include
   dependency; I don't think there is any choice here.

4) read-only protection for replicated kernel images is not yet
   implemented.

Hao Jia (3):
  arm64: text replication: fix compilation warning
  arm64: text replication: support more page sizes and levels
  arm64: text replication: keep modules inside module region when
    REPLICATE_KTEXT is enabled

Russell King (Oracle) (16):
  arm64: provide cpu_replace_ttbr1_phys()
  arm64: make clean_dcache_range_nopatch() visible
  arm64: place kernel in its own L0 page table entry
  arm64: text replication: add init function
  arm64: text replication: add sanity checks
  arm64: text replication: copy initial kernel text
  arm64: text replication: add node text patching
  arm64: text replication: add node 0 page table definitions
  arm64: text replication: add swapper page directory helpers
  arm64: text replication: create per-node kernel page tables
  arm64: text replication: boot secondary CPUs with appropriate TTBR1
  arm64: text replication: update cnp support
  arm64: text replication: setup page tables for copied kernel
  arm64: text replication: include most of read-only data as well
  arm64: text replication: early kernel option to enable replication
  arm64: text replication: add Kconfig

 .../admin-guide/kernel-parameters.txt         |   5 +
 arch/arm64/Kconfig                            |  10 +-
 arch/arm64/include/asm/cacheflush.h           |   2 +
 arch/arm64/include/asm/ktext.h                |  45 ++++
 arch/arm64/include/asm/memory.h               |  36 ++-
 arch/arm64/include/asm/mmu_context.h          |  11 +-
 arch/arm64/include/asm/pgtable.h              |  31 ++-
 arch/arm64/include/asm/smp.h                  |   1 +
 arch/arm64/kernel/alternative.c               |   4 +-
 arch/arm64/kernel/asm-offsets.c               |   1 +
 arch/arm64/kernel/head.S                      |   3 +-
 arch/arm64/kernel/hibernate.c                 |   2 +-
 arch/arm64/kernel/kaslr.c                     |   1 +
 arch/arm64/kernel/module.c                    |  20 +-
 arch/arm64/kernel/patching.c                  |   7 +-
 arch/arm64/kernel/smp.c                       |   3 +
 arch/arm64/mm/Makefile                        |   2 +
 arch/arm64/mm/init.c                          |   3 +
 arch/arm64/mm/ktext.c                         | 213 ++++++++++++++++++
 arch/arm64/mm/mmu.c                           |  73 +++++-
 20 files changed, 446 insertions(+), 27 deletions(-)
 create mode 100644 arch/arm64/include/asm/ktext.h
 create mode 100644 arch/arm64/mm/ktext.c

-- 
2.20.1




