Re: livepatch callback features

On 2/14/19 5:47 AM, Nicolai Stange wrote:
Joe Lawrence <joe.lawrence@xxxxxxxxxx> writes:
- Do features need to be tightly coupled to callbacks?  They are a
convenient tool to implement a policy, but I wonder if we lose
anything by moving this up to the klp_patch level and letting the core
handle it. (Like the versioning scheme.)

If I'm understanding you correctly, you wonder whether having a
klp_object member in klp_feature would be necessary or not, right?

Yup.  Instead we could have an API call to get/set features.

I think it doesn't matter much in these "safe upgrade" scenarios
being discussed here.

I have to admit though that I don't see the need for having the
klp_features dynamic or "shadow variable like": I think we could embed
those statically into klp_patch as well.


Features could still be stored in a list and the callbacks could get
access to the current vs. this patch's feature set.  They wouldn't be
dynamic, but then the API supporting dynamic features could be dropped.
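
To make that a bit more concrete, here's roughly what I'm picturing -- purely hypothetical, none of these names or members exist today, just a static feature table hanging off klp_patch plus a small lookup helper:

  /* Sketch only -- assumes <linux/livepatch.h> and <linux/string.h>. */
  struct klp_feature {
          const char *name;               /* e.g. "L1TF" */
          unsigned long version;          /* monotonically increasing */
  };

  /*
   * Imagine klp_patch grew a member like:
   *
   *     struct klp_feature *features;    (NULL-name terminated array)
   *
   * and the core provided a lookup helper:
   */
  static struct klp_feature *klp_get_feature(struct klp_patch *patch,
                                             const char *name)
  {
          struct klp_feature *feat;

          for (feat = patch->features; feat && feat->name; feat++)
                  if (!strcmp(feat->name, name))
                          return feat;

          return NULL;
  }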

Right, so during a transition, access to klp_features from the previous
live patch would be possible only up to and including the pre-patch
callback, correct?

At least the ability to set them. Once we're past the pre-patch callback stage, reading may still be useful for some purposes, but not for transition reversal.

That would probably be enough to "take over" in all
situations. If the next live patch is sticky, ->data ownership wouldn't
be a problem, because, as Petr said in reply to the stickiness patch,
the transition should be non-reversible in this case. If we're dealing
with non-sticky live patches, the post-unpatch callback would only get
called if the live patch hasn't been superseded by another one (thanks
to the atomic replace patchset) -- so we could tear down any ->data from
there.
Hmm, I haven't thought much about a .data element. In a dynamic klp_features implementation, it would logically follow the klp_feature structure lifetime (I think). For static klp_features, I'm not so sure. Maybe it depends on stickiness, as you say. BTW, what sort of payload do we envision coming along with .data?
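
For the static case, maybe a picture helps: if klp_feature (from the sketch earlier in this mail) also grew a void *data member, the non-sticky, not-superseded teardown you describe could live entirely in post-unpatch -- again, all names here are made up:

  /* Hypothetical: tear down feature-private state at post-unpatch.
   * Assumes <linux/slab.h>; demo_patch is the sketch's klp_patch. */
  static void demo_post_unpatch(struct klp_object *obj)
  {
          struct klp_feature *feat = klp_get_feature(&demo_patch, "L1TF");

          if (feat && feat->data) {
                  kfree(feat->data);      /* whatever pre-patch allocated */
                  feat->data = NULL;
          }
  }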

- For follow-on livepatches, would their feature set need to match or
exceed the values for all of the current features, or only the
features that the new patch is advertising to support?  For example,
if we have current features:

   vmlinux : { { L1TF, v1 }, { CVE2, v2 } }

should the incoming cumulative patch need L1TF >= v1 and CVE2 >= v2?
Or does it only need L1TF >= v1 *or* CVE2 >= v2?

Dunno if cumulative livepatches change that requirement or not.

I have to revisit the semantics of non-cumulative live patches
after atomic replace, but I think that if the one to get applied is
cumulative, it's an "and" and for those which are non-cumulative it
could perhaps be an "or".

I lean the same way, but admit that the cumulative atomic replace model is my mental default. I think the "or" case for non-cumulative patches makes sense, though I wonder how well that model scales after many patches are loaded.
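
FWIW, the "and" rule for a cumulative replacement seems easy enough to express in the core. Building on the hypothetical klp_get_feature() from earlier in this mail, something like:

  /* Cumulative case: every feature of the currently-active patch must
   * be present in the replacement at the same or higher version. */
  static bool klp_features_compatible(struct klp_patch *old_patch,
                                      struct klp_patch *new_patch)
  {
          struct klp_feature *cur, *new;

          for (cur = old_patch->features; cur && cur->name; cur++) {
                  new = klp_get_feature(new_patch, cur->name);
                  if (!new || new->version < cur->version)
                          return false;   /* would be a feature downgrade */
          }

          return true;
  }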

Which brings another question: similar to the "sticky" case preventing
unpatching, we probably shouldn't allow the transition to get
reversed back to live patches not providing a particular feature at all
or only in a lower version?

It's been a while since I was knee-deep in callback code, but I thought this was current behavior:

  pre-patch callback A - success
  pre-patch callback B - success
  pre-patch callback C - failure
  post-unpatch callback C' - skipped
  post-unpatch callback B' - called
  post-unpatch callback A' - called

(though the ordering may actually be A, B, C, A', B', IIRC)

If that is true, then wouldn't a version/feature "downgrade" via transition reversal be safe as long as A' and B' safely unwind A and B?

In reply to the stickiness-patch, Petr proposed to introduce a

   klp_set_patch_sticky(obj)

by which a live patch can control the exact point (before register,
pre-patch or post-patch) from which on it becomes unsafe to revert a
live patch.

I may code this up on top of the RFC to play around with how it works. Seems like it could be useful.
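
For instance, I could see a patch flipping itself sticky from its own post-patch callback, once the point of no return has actually been reached -- with klp_set_patch_sticky() being Petr's proposed (not yet existing) interface:

  static void demo_post_patch(struct klp_object *obj)
  {
          /* ... switch over to the new, incompatible behavior ... */

          /*
           * From here on, reversing the transition would be unsafe,
           * so mark the patch sticky (proposed API, not in the tree).
           */
          klp_set_patch_sticky(obj);
  }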

Perhaps it would make sense to make stickiness a klp_feature property?

I.e. klp_set_feature_sticky(feat), which would
- prevent future patch disabling
- and transitions back to live patches not having that feature or only
   in a lower version

?

Same "sticky" thoughts as mentioned above... though I could see this as a requirement for a livepatch world in which we forgo unpatching altogether -- basically nailing down a feature forever.

- Wacky idea: can features be deprecated?  A livepatch neuters the
unsafe condition, so future patches can now safely ignore it.


What would be the usage scenario? That the live patch author suddenly
discovers a brilliant idea to make everything safe, but only after
he/she has released some unsafe implementation already?

Hah, yes, that is one possibility. I have few such eureka moments, so I'm not suggesting I would be crafting any :) I suppose one case might be if the original feature wasn't unsafe after all. Or a module/livepatch/operator updates global state so that kernel control flow avoids the problematic code (so a livepatch is no longer required). I don't have a requirement for this, hence it was a wacky what-if.

Or are you thinking more of releasing a dummy live patch for turning the
system back into the original, "safe" state?

That would certainly help testing.


Sorry for all the thinking out loud, but I figured I would share my
frame of mind after coming out of the version/sticky RFC code.

This discussion is actually quite fruitful, imo.

Let me throw in a half-baked idea/dream of mine completely unrelated to
changing global semantics, safe upgrade paths and so on.

I'll try to get back to this part next week; unfortunately (or fortunately), the weekend is fast approaching. Perhaps it would be easier to jump back in later in the thread, as I see you and Petr have expanded on the topic.

-- Joe


