Re: livepatch callback features

On Fri 2019-02-15 11:58:33, Nicolai Stange wrote:
> Petr Mladek <pmladek@xxxxxxxx> writes:
> 
> > On Thu 2019-02-14 11:47:42, Nicolai Stange wrote:
> >> Joe Lawrence <joe.lawrence@xxxxxxxxxx> writes:
> > I see the following advantages of the dynamic approach:
> >
> >      + easy to transfer (it just stays in the memory, some list)
> >
> >      + Localized code (alloc/get only where needed, no need to
> >        define it in some global structures, lists).
> >
> >      + One structure for a single feature all the time. No confusion
> >        which structure you work with. Temporary states might be
> >        kept in local static variables and stored into the
> >        structure once the feature is fully migrated.
> >
> >        I could imagine that some people might see this as
> >        a disadvantage though.
> >
> >
> >
> > The disadvantages are:
> >
> >     + allocation might fail (must be done in pre_patch())
> >
> >     + more complicated compatibility check (another callback
> >       to check provided features)
> 
> 
> Ok. The only thing which remains unclear to me is how a live patch
> announces the set of supported features to the klp core?

We could add a callback is_feature_supported(). Something like:

bool is_feature_supported(int id, int version)
{
	if (id == L1TF && version <= 1)
		return true;

	if (id == BLA_BLA && version <= 2)
		return true;

	return false;
}

The livepatch core needs to keep the list of existing features
somewhere (to implement klp_get_feature()). It might check that
all the existing features are supported by calling the above
callback from the new patch.

Any better idea is welcome.


> >> > - For follow on livepatches, would their feature set need to match or
> >> > exceed the values for all of the current features, or only the
> >> > features that the new patch is advertising to support?  For example,
> >> > if we have current features:
> >> >
> >> >   vmlinux : { { L1TF, v1 }, { CVE2, v2 } }
> >> >
> >> > should the incoming cumulative patch need L1TF >= v1 and CVE2 >= v2 ?
> >> > Or does it only need L1TF >= v1 *or* CVE2 >= v2?
> >> >
> >> > Dunno if cumulative livepatches change that requirement or not.
> >> 
> >> I have to revisit the semantics of non-cumulative live patches
> >> after atomic replace, but I think that if the one to get applied is
> >> cumulative, it's an "and" and for those which are non-cumulative it
> >> could perhaps be an "or".
> >
> > Good question. Well, I think that non-cumulative patches will not
> > need to check anything. They are primarily used to add unrelated fixes.
> > They might theoretically take over or extend an existing feature,
> > but then the authors/users are on their own shifting sands.
> 
> Hmm, I think we could require the versions to be >= what is currently
> there, but only for the features the new non-cumulative live patch
> supports?

Good point.

> I.e, for cumulative live patches, it would be: "all features currently
> registered must be provided by a new cumulative live patch in the same
> or higher version".
> 
> And for non-cumulative ones: "all features supported by a new
> non-cumulative live patch must be provided in the same or higher version
> than what's currently registered on the system".

This makes perfect sense.


> >> Which brings another question: similar to the "sticky" case preventing
> >> unpatching, we probably shouldn't allow the transition to get
> >> reversed back to live patches not providing a particular feature at all,
> >> or only in a lower version?
> >> 
> >> Perhaps it would make sense to make stickiness a klp_feature property?
> >
> > It might be unclear which patch is sticky with the dynamically
> > allocated structures. It might work with statically allocated
> > ones.
> >
> > The question is if it is worth the complexity. It is a global
> > state. I wonder if any callback would ever want to disable
> > the sticky flag once set.
> >
> 
> I don't care too much whether or not the stickiness is managed per-patch
> or or'ed together from individual features.
> 
> But I think what is important is how the feature versioning affects
> the reversibility of transitions.
> 
> That is, as we won't allow a transition to a sticky patch to be
> reversed, we probably also don't want to allow a transition to a patch
> with a feature of a higher version to be reversed -- just to be
> consistent there?

It depends. The new livepatch must be able to migrate the feature
to a higher version. In many cases, the backward migration might
be easy, especially when the real work is done in post_patch()
callbacks.

We could implement:

   void klp_disable_revert(void)

that might get called when the revert is not supported.

I could imagine that some vendors would always call this just to make
their life easier.


> >> Let me throw in a half-baked idea/dream of mine completely unrelated to
> >> changing global semantics, safe upgrade paths and so on.
> >> 
> >> From time to time, the upstream mitigation for a certain CVE involves
> >> new control knobs. For example, in the L1TF kvm case, there had been a
> >> new option (whose name I forgot) to disable hyperthreading. The live
> >> patch author currently has no standardized way of providing such a
> >> control knob to live patching users and thus, all he/she can do about it is
> >> to apply the somewhat coarse policy of "security over performance".
> >> There are other examples for similar cases, which I forgot about now.
> >> 
> >> Long story short, it would be nice if we could, for certain subpatches
> >> or "features" or however we call it, ask the user to enable them
> >> explicitly through some /sys/.../enable file.
> >
> > Huh, this is another level of complexity.
> 
> Indeed :)
> 
> Ignore it, I just wanted to remark that static klp_features could have
> the potential to get reused/extended in the future. But that should not
> concern us now -- apologies for the confusion.
> 
> <digression>
> 
> > I wonder what the problems are with creating a /sys interface that
> > might get transferred between livepatches.
> 
> Ok, in the L1TF kvm example, there's none.
> 
> But there are cases where I'd prefer to introduce my own, live-patch
> specific control knob not having any equivalent in upstream.  Take the
> mincore() patches currently under discussion. They slightly change
> behaviour, and it would be nice imo if the users could opt in to that.

I see.

> And it would be great, if this delayed enabling of the mincore()
> live-subpatch could happen with full consistency-model guarantees:
> writing '1' to
> /sys/.../foo-patch/mincore-cve-feature/enable
> would trigger a full transition for the sub-patch instead of only
> flipping some boolean flag.

Huh, as I said, this is another level of complexity. And I am not
convinced that it is even reasonable:

    + If a full kernel update does not allow disabling a feature,
      why should a livepatch be more clever? It might always get
      fixed by the next livepatch.

    + Most livepatch fixes "just copy" the updated upstream
      code. Any livepatch-specific code might add livepatch-specific
      bugs. Livepatch users are very conservative. So should be
      the livepatch authors.

Best Regards,
Petr

PS: I hope that this also shows my view on all of Joe's questions.
Feel free to ask if I missed anything.
