Re: [RFC] GCC support for live-patching

Hi,

Qing Zhao <qing.zhao@xxxxxxxxxx> writes:

>> 
>> thanks for the proposal. The others have already expressed some of my 
>> worries and remarks, but I think it would be only right to write them 
>> again. Especially since I am part of the team responsible for 
>> implementation and maintenance of live patches here at SUSE, we use kGraft 
>> and we prepare everything manually (compared to kpatch and ksplice).
>
> One question here: what’s the major benefit of preparing the patches manually?

There is none. We at SUSE prefer the source-based approach (as opposed
to binary diffs) for a number of reasons, and the manual live patch
creation is simply a consequence of not having any tooling for this
yet.


For reference, source based live patch creation involves the following
steps:

1. Determine the initial set of to-be-patched functions:
   a.) Inspect the upstream diff for the fix in question, add
       any touched functions to the initial set.
   b.) For each function in the initial set, check whether it has been
       inlined/cloned/optimized and if so, add all its callers to the
       initial set.  Repeat until the initial set has stabilized.

2. Copy & paste the initial set over to the new live patch sources.

3. Make it compile, i.e. recursively copy any needed cpp macro, type, or
   function definition and add references to data objects with static
   storage duration.
   The rules are:
   a.) For data objects with static storage duration, a reference to the
       original must always be made. (If the symbol is EXPORT()ed, then
       fine. Otherwise, for kGraft, this involves a kallsyms lookup at
       patch module load time, for upstream kernel live patching, this
       has been solved with those '.klp....' relocations).
   b.) If a called function is available as a symbol from either vmlinux
       or some (usually the patched) module, do not copy the definition,
       but add a reference to it, just as in a.).
   c.) If a type, cpp macro or (usually inlined) function is provided by
       some "public" header in <linux-tree>/include/, include that
        rather than copying the definition.  Counterexample: a non-public
        header outside of include/, e.g. <linux-tree>/fs/btrfs/qgroup.h.
   d.) Otherwise copy the definition to the live patch module sources.

Rule 3b is not strictly necessary, but it helps in reducing the live
patch code size which is a factor with _manual_ live patch creation.
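For illustration only, the decisions in 3a.)-3d.) could be sketched
roughly as follows (Python; the function name and all inputs are
hypothetical stand-ins for information a real tool would have to gather
from the kernel tree and the symbol tables):

```python
def classify_dependency(is_static_data, symbol_available, public_header):
    """Pick one of the rules 3a.)-3d.) for a single dependency.

    is_static_data   -- data object with static storage duration?
    symbol_available -- symbol present in vmlinux or some module?
    public_header    -- providing header under include/, or None
    """
    if is_static_data:
        return "reference"                 # 3a.): always reference the original
    if symbol_available:
        return "reference"                 # 3b.): reference rather than copy
    if public_header is not None:
        return "include " + public_header  # 3c.): include the public header
    return "copy"                          # 3d.): copy the definition over

# Hypothetical examples:
print(classify_dependency(True, False, None))             # -> reference
print(classify_dependency(False, False, "linux/list.h"))  # -> include linux/list.h
print(classify_dependency(False, False, None))            # -> copy
```

Note that rule 3a.) takes precedence even when the data object's symbol
happens to be available, matching the "must always" above.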

For 1b.), we need help from GCC. Namely, we want to know when some
function has been optimized, and we want GCC to disable any of those IPA
optimizations it (currently) isn't capable of reporting properly.
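The "repeat until stabilized" part of 1b.) is a plain fixpoint
iteration. A minimal sketch (Python; the call graph and the set of
functions reported as inlined are made-up example data, in reality they
would come from GCC's reporting):

```python
def patch_closure(initial, callers_of, inlined):
    """Grow the initial set per step 1b.) until it stabilizes.

    initial    -- functions touched by the upstream fix
    callers_of -- maps a function name to the set of its callers
    inlined    -- functions reported as inlined/cloned/optimized
    """
    result = set(initial)
    changed = True
    while changed:
        changed = False
        for fn in list(result):
            if fn not in inlined:
                continue  # body intact, callers need no patching
            for caller in callers_of.get(fn, ()):
                if caller not in result:
                    result.add(caller)
                    changed = True
    return result

# Made-up example: 'helper' got inlined into both of its callers.
callers = {"helper": {"caller_a", "caller_b"}, "caller_a": {"top"}}
print(sorted(patch_closure({"helper"}, callers, inlined={"helper"})))
# -> ['caller_a', 'caller_b', 'helper']
```

Note that 'top' is not pulled in, because 'caller_a' itself was not
inlined anywhere.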

Step 3.) is a bit tedious sometimes, TBH, and yes, without any tooling
in place, patch size would be a valid point. However, I'm currently
working on that and I'm optimistic that I'll have a working prototype
soon.

That tool would be given the GCC command line from the original or "live
patch target" kernel compilation for the source file in question, the
set of functions as determined in 1.), and a number of user-provided
filter scripts to make the decisions in 3.). As a result, it would
output a self-contained, minimal subset of the original kernel sources.
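The core of such a tool might then be little more than a worklist over
the dependencies, consulting the filter scripts for each item. A rough
Python sketch (every name here is hypothetical; `filters` stands in for
the user-provided filter scripts making the decisions in 3.)):

```python
def collect_definitions(functions, dependencies_of, filters):
    """Worklist sketch: decide, per rules 3a.)-3d.), what to copy.

    functions       -- the set determined in step 1.)
    dependencies_of -- maps a definition to the definitions it needs
    filters         -- callables returning "copy", "reference", an
                       include line, or None (no opinion)
    """
    copied, seen = [], set()
    work = list(functions)
    while work:
        item = work.pop()
        if item in seen:
            continue
        seen.add(item)
        decision = "copy"  # default when no filter has an opinion
        for f in filters:
            d = f(item)
            if d is not None:
                decision = d
                break
        if decision == "copy":
            copied.append(item)
            # only copied definitions drag in further dependencies
            work.extend(dependencies_of.get(item, ()))
    return copied

# Made-up example: one patched function, one private helper, one
# exported symbol that a filter turns into a mere reference.
deps = {"patched_fn": ["static_helper", "printk"], "static_helper": []}
ref_exported = lambda name: "reference" if name == "printk" else None
print(sorted(collect_definitions({"patched_fn"}, deps, [ref_exported])))
# -> ['patched_fn', 'static_helper']
```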

With that tooling in place, live patch code size would not be a real
concern for us.

So in conclusion, what we need from GCC is the information on when we
have to live patch callers due to optimizations. If that's not possible
for a particular class of optimization, it needs to be disabled.

OTOH, we definitely want to keep the set of these disabled optimizations
as small as possible in order to limit the impact of live patching on
kernel performance. In particular, disabling any of the "cloning"
optimizations, which GCC is able to report properly, would be a
no-no IMO.

IIUC, our preferred selection of allowed IPA optimizations would be
provided by what you are referring to as "-flive-patching=inline-clone".



>> 
>>> 1. A study of Kernel live patching schemes.
>>> 
>>> Three major kernel live patching tools:  https://lwn.net/Articles/734765/
>>> 
>>> * ksplice:   http://www.ksplice.com/doc/ksplice.pdf
>>> * kpatch:    https://lwn.net/Articles/597123/
>>>            https://github.com/dynup/kpatch
>>> * kGraft:    
>>> https://pdfs.semanticscholar.org/presentation/af4c/895aa3fef0cc2b501317aaec9d91ba2d704c.pdf
>>> 
>>> In the above, ksplice and kpatch can automatically generate binary patches
>>> as follows:
>>> 
>>>   * a collection of tools which convert a source diff patch to a patch
>>> module. They work by compiling the kernel both with and without the source
>>> patch, comparing the binaries, and generating a binary patch module which 
>>> includes new binary versions of the functions to be replaced.
>>> 
>>> On the other hand, kGraft offers a way to create patches entirely by hand.
>>> The source of the patch is a single C file, easy to review and easy to
>>> maintain.
>>> 
>>> In addition to kGraft, there are other live patching tools that prefer
>>> creating patches by hand for similar reasons.

Out of curiosity: which? Upstream kernel live patching?



>>> The compiler support is mainly for the above live patching tools that create
>>> patches entirely by hand. The major purpose is:
>>> 
>>> * control patch code size and debug complexity;
>>> * keep good run time performance;
>>> 
>>> 2. the major problems of compiler in live patching:
>>> 
>>> For the live patching schemes that create patches by hand, when patching
>>> one function, there might be a list of functions that will be impacted by
>>> this patched function due to compiler optimizations/analyses (mainly IPA
>>> optimizations/analyses); a complete patch will include the patched function
>>> and all impacted functions. Usually, there are two major factors to be
>>> considered in such live patching schemes:
>>> 
>>> * patch code size: one major factor is the length of the list
>>> of impacted functions;
>>> * run time performance.
>>> 
>>> If we want to control the patch code size, to keep the list of impacted
>>> functions to a minimum, we have to disable the corresponding compiler
>>> optimizations as much as possible.
>> 
>> Andi already talked about it and I, too, do not understand your worry 
>> about patch code size. First, it has never been so bad here. Yes, 
>> sometimes the function closure gets bigger due to optimizations and 
>> inlining. I've considered it as nothing else than a lack of better 
>> tooling, because it is indeed something which could be improved a lot. 
>> Nicolai (CCed) works on a potential solution. It is also one of the topics 
>> at LPC miniconf in Vancouver.
>> 
>> Second, the idea to disable inlining would not fly at SUSE. I can't 
>> imagine to even propose it. The kernel heavily relies on the feature. The 
>> optimizations are a different story and some of those certainly could be 
>> disabled with no harm caused.
>> 
>> So let me ask, what is your motivation behind this? Is there a real 
>> problem you're trying to solve? It may have been mentioned somewhere and I 
>> missed it.
>
> The major functionality we want is to only enable static inlining for
> live patching for one of our internal customers. The major purpose is to
> control the patch code size explosion and debugging complexity due to
> too much inlining of global functions for the specific application.
>
> Therefore, I proposed multiple levels of control for -flive-patching to
> satisfy requests from different users.
>
> So far, from the feedback, I see that among the 4 levels of control
> (none, only-inline-static, inline, and inline-clone), “none” and
> “inline” are NOT needed at all.
>
> However, -flive-patching=[only-inline-static | inline-clone] are
> necessary.

It would be interesting to learn why that internal customer is so keen
on live patch code size. I mean, their live patch creation is either
somehow automated or it is not. In the former case, the extra .text of
inline-clone over only-inline-static would be handled by tooling anyway
and thus wouldn't cost any additional working hours, and would be highly
unlikely to introduce (new) regressions to be debugged. In the latter
case, wouldn't improving the tooling also benefit the only-inline-static
case?

Thanks,

Nicolai

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)


