There is the following note in Documentation/kobject.txt:

  One important point cannot be overstated: every kobject must have a
  release() method, and the kobject must persist (in a consistent state)
  until that method is called. If these constraints are not met, the code
  is flawed.

The release() method is called when the reference count reaches zero. This
typically happens when kobject_put() is called, but it might be delayed while
the related sysfs file is open for reading or writing. To help find such bugs,
the release() method is deferred via a delayed work item when
CONFIG_DEBUG_KOBJECT_RELEASE is enabled.

Livepatch is currently on the safe side, but only because the patch module
cannot be removed at the moment. A patchset improving the consistency model
is floating around, and it would eventually allow removing unused patches.

I have simulated the situation where the modules can be removed and enabled
the following config options:

CONFIG_DEBUG_OBJECTS=y
CONFIG_DEBUG_OBJECTS_FREE=y
CONFIG_DEBUG_OBJECTS_TIMERS=y
CONFIG_DEBUG_OBJECTS_WORK=y
CONFIG_DEBUG_OBJECTS_RCU_HEAD=y
CONFIG_DEBUG_OBJECTS_PERCPU_COUNTER=y
CONFIG_DEBUG_OBJECTS_ENABLE_DEFAULT=1
CONFIG_DEBUG_KOBJECT=y
CONFIG_DEBUG_KOBJECT_RELEASE=y

Then I got the following crash:

dhcp75 login: BUG: unable to handle kernel paging request at ffffffffa0002048
IP: [<ffffffff812aa868>] sysfs_file_ops+0x28/0x60
PGD 1e07067 PUD 1e08063 PMD 139c9a067 PTE 0
Oops: 0000 [#1] SMP
Modules linked in: [last unloaded: livepatch_sample]
CPU: 1 PID: 5667 Comm: bash Tainted: G W E K 4.6.0-rc7-next-20160510-4-default+ #231
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
task: ffff8800ba24c740 ti: ffff880138d04000 task.ti: ffff880138d04000
RIP: 0010:[<ffffffff812aa868>]  [<ffffffff812aa868>] sysfs_file_ops+0x28/0x60
RSP: 0018:ffff880138d07dd8  EFLAGS: 00010202
RAX: 0000000000000001 RBX: ffffffffa0002020 RCX: 0000000000000000
RDX: 0000000000000001 RSI: ffff880035a51430 RDI: 0000000000000246
RBP: ffff880138d07de0 R08: 0000000000000001 R09: 0000000000000000
R10: ffff8800ba24c740 R11: 0000000000000001 R12: 0000000000000002
R13: ffff880139e617c0 R14: ffff880035813a18 R15: ffff880138d07f20
FS:  00007f13a3927700(0000) GS:ffff88013fc80000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffffffffa0002048 CR3: 0000000139961000 CR4: 00000000000006e0
Stack:
 ffff880035813a00 ffff880138d07e08 ffffffff812aa8bf ffff880035813a00
 ffff880139e617c0 0000000000000002 ffff880138d07e48 ffffffff812a9e84
 0000000000000001 ffff8800b8a13540 ffff880138d07f20 00007f13a392c000
Call Trace:
 [<ffffffff812aa8bf>] sysfs_kf_write+0x1f/0x60
 [<ffffffff812a9e84>] kernfs_fop_write+0x144/0x1e0
 [<ffffffff81222828>] __vfs_write+0x28/0x120
 [<ffffffff810c61b9>] ? percpu_down_read+0x49/0x80
 [<ffffffff812261c1>] ? __sb_start_write+0xd1/0xf0
 [<ffffffff812261c1>] ? __sb_start_write+0xd1/0xf0
 [<ffffffff81222f22>] vfs_write+0xb2/0x1b0
 [<ffffffff81244176>] ? __fget_light+0x66/0x90
 [<ffffffff81224279>] SyS_write+0x49/0xa0
 [<ffffffff819582fc>] entry_SYSCALL_64_fastpath+0x1f/0xbd
Code: 44 00 00 0f 1f 44 00 00 55 48 89 e5 53 f6 87 89 00 00 00 01 48 8b 47 28 48 8b 98 80 00 00 00 74 0a 8b 05 7c 6d c7 00 85 c0 75 10 <48> 8b 43 28 48 85 c0 74 27 48 8b 40 08 5b 5d c3 48 83 c7 08 e8
RIP  [<ffffffff812aa868>] sysfs_file_ops+0x28/0x60
 RSP <ffff880138d07dd8>
CR2: ffffffffa0002048

The problem is that struct klp_patch is defined statically in the livepatch
module, and the module was removed before the kobject release() method was
called.
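For reference, the pattern that the documentation asks for looks roughly like
the sketch below. This is a generic illustration only, not code from this
patch, and the names (struct foo, foo_release(), foo_create()) are made up.
The structure embedding the kobject is allocated dynamically and freed only
from its release() callback, so it stays valid until the last kobject_put().
A statically defined structure inside a module image cannot satisfy this once
the module is unloaded:

  #include <linux/kobject.h>
  #include <linux/slab.h>

  struct foo {
          int value;
          struct kobject kobj;
  };

  static void foo_release(struct kobject *kobj)
  {
          struct foo *foo = container_of(kobj, struct foo, kobj);

          /* Safe to free here: no sysfs user can reach it any longer. */
          kfree(foo);
  }

  static struct kobj_type foo_ktype = {
          .release   = foo_release,
          .sysfs_ops = &kobj_sysfs_ops,
  };

  static struct foo *foo_create(struct kobject *parent)
  {
          struct foo *foo = kzalloc(sizeof(*foo), GFP_KERNEL);

          if (!foo)
                  return NULL;

          if (kobject_init_and_add(&foo->kobj, &foo_ktype, parent, "foo")) {
                  /* release() does the kfree() for us */
                  kobject_put(&foo->kobj);
                  return NULL;
          }
          return foo;
  }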
For a short time, it was possible to open
/sys/kernel/livepatch/livepatch_sample/enabled for writing while the related
kobject was already gone.

There are two basic ways out of this trap. Either we allocate all the
structures dynamically so that they survive the module removal, or we block
the module removal until the sysfs entries are released[*].

This patch is an attempt at the first approach. All three structures,
klp_patch, klp_object, and klp_func, are allocated dynamically because each
of them contains a kobject and the related sysfs entries.

The question was how to define the user-provided data in an easy way. One
option was to keep the static definition using reduced structures that would
later be copied into the dynamically allocated ones. It is definitely worth
consideration[**].

This patch tries another approach. It adds one more level of functions that
are called before the patch is registered. Patch registration then becomes a
clear boundary that marks the patch as complete and ready for enabling. No
objects or functions can be added to the patch from this point on.

The three new functions are klp_create_empty_patch(), klp_add_object(), and
klp_add_func(). They initialize most of the struct members and contain the
related code from the former klp_init_*() functions.

The information about loaded objects is still detected during patch
registration and from the module callbacks. The sysfs entries are also still
created during patch registration. Note that we should not create sysfs
entries before the patch is complete; it keeps the handling easier. These
parts of the klp_init_*() functions are moved to the klp_register_*()
functions to make it clear in which phase they happen.

The patch avoids hardcoding the size of the arrays by using linked lists.
The standard list_for_each_entry() macro can then be used to iterate over the
lists of objects and functions. There are also helper macros that assist with
error handling and make the patch definition easier to write and review.

The last thing is patch removal. It can be done only when the patch is
disabled, and it is done in one step. To keep it symmetric, we need to remove
the sysfs entries right after the patch is removed from the global list. The
sysfs entries are handled via the kobjects; therefore the kobjects should be
removed at the same time as the related structures. All the klp_free_*()
functions are renamed to klp_release_*() to reduce potential confusion. Note
that the structures are actually freed by the klp_kobj_release_*() functions,
which might be called later.

[*] We might block the module removal using a completion that waits until the
top-level kobject release method is called. This approach is used by the
module framework itself, see mod_kobject_put() and module_kobj_release(). It
even seems to be safe: kobject_cleanup() no longer accesses any data from the
kobject once the release method has been called. And it is a way to make sure
that the structure holds valid data until the release method is called.

[**] I considered it ugly to define the patch using reduced structures and
just copy them from statically defined arrays into dynamically allocated
ones. But it would keep the definition reasonable, without hardcoded array
sizes and other tricks. It would also allow passing all the data in a single
call during patch registration and avoid the three levels of functions:
create+add/register/enable. It would also avoid the asymmetry between patch
creation and release.
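To make the alternative in [*] more concrete, here is a rough sketch of how a
completion could block the removal path until the top-level kobject release()
method has run, similar to what mod_kobject_put() and module_kobj_release()
do. This is untested illustration only; the wrapper structure, field, and
function names are hypothetical and not part of this patch:

  #include <linux/completion.h>
  #include <linux/kobject.h>

  /* hypothetical wrapper, only to show the pattern */
  struct patch_kobj_holder {
          struct kobject kobj;
          struct completion *kobj_unregister;
  };

  static void patch_kobj_release(struct kobject *kobj)
  {
          struct patch_kobj_holder *p =
                  container_of(kobj, struct patch_kobj_holder, kobj);

          /* Wake up whoever is tearing the patch down; do not touch p later. */
          complete(p->kobj_unregister);
  }

  static void patch_kobj_put_and_wait(struct patch_kobj_holder *p)
  {
          DECLARE_COMPLETION_ONSTACK(done);

          p->kobj_unregister = &done;
          kobject_put(&p->kobj);
          /* Block (e.g. the module exit path) until release() was called. */
          wait_for_completion(&done);
  }

With something like this, the structures could stay statically defined in the
patch module, because the module could not go away before sysfs is done with
them.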
I am sending this patch because it works and shows what this approach looks
like. Personally, I would prefer using the completion; I do not think it is a
hack after all.

Signed-off-by: Petr Mladek <pmladek@xxxxxxxx>
---
 include/linux/livepatch.h            |  70 +++++--
 kernel/livepatch/core.c              | 355 +++++++++++++++++++++++------------
 samples/livepatch/livepatch-sample.c |  72 +++----
 3 files changed, 320 insertions(+), 177 deletions(-)

diff --git a/include/linux/livepatch.h b/include/linux/livepatch.h
index a93a0b23dc8d..2867409eb439 100644
--- a/include/linux/livepatch.h
+++ b/include/linux/livepatch.h
@@ -40,6 +40,7 @@ enum klp_state {
  * @old_sympos: a hint indicating which symbol position the old function
  *              can be found (optional)
  * @old_addr:   the address of the function being patched
+ * @list:       list node for the list of patched functions in an object
  * @kobj:       kobject for sysfs resources
  * @state:      tracks function-level patch application state
  * @stack_node: list node for klp_ops func_stack list
@@ -59,15 +60,17 @@ struct klp_func {
 
         /* internal */
         unsigned long old_addr;
+        struct list_head list;
+        struct list_head stack_node;
         struct kobject kobj;
         enum klp_state state;
-        struct list_head stack_node;
 };
 
 /**
  * struct klp_object - kernel object structure for live patching
  * @name:       module name (or NULL for vmlinux)
  * @funcs:      function entries for functions to be patched in the object
+ * @list:       list node for the list of patched objects
  * @kobj:       kobject for sysfs resources
  * @mod:        kernel module associated with the patched object
  *              (NULL for vmlinux)
@@ -76,9 +79,10 @@ struct klp_func {
 struct klp_object {
         /* external */
         const char *name;
-        struct klp_func *funcs;
 
         /* internal */
+        struct list_head funcs;
+        struct list_head list;
         struct kobject kobj;
         struct module *mod;
         enum klp_state state;
@@ -95,24 +99,24 @@ struct klp_object {
 struct klp_patch {
         /* external */
         struct module *mod;
-        struct klp_object *objs;
 
         /* internal */
+        struct list_head objs;
         struct list_head list;
         struct kobject kobj;
         enum klp_state state;
 };
 
-#define klp_for_each_object(patch, obj) \
-        for (obj = patch->objs; obj->funcs || obj->name; obj++)
-
-#define klp_for_each_func(obj, func) \
-        for (func = obj->funcs; \
-             func->old_name || func->new_func || func->old_sympos; \
-             func++)
+struct klp_patch *klp_create_empty_patch(struct module *mod);
+struct klp_object *klp_add_object(struct klp_patch *patch,
+                                  const char *name);
+struct klp_func *klp_add_func(struct klp_object *obj,
+                              const char *old_name,
+                              void *new_func,
+                              unsigned long old_sympos);
 
 int klp_register_patch(struct klp_patch *);
-int klp_unregister_patch(struct klp_patch *);
+int klp_release_patch(struct klp_patch *);
 int klp_enable_patch(struct klp_patch *);
 int klp_disable_patch(struct klp_patch *);
 
@@ -120,6 +124,50 @@ int klp_disable_patch(struct klp_patch *);
 int klp_module_coming(struct module *mod);
 void klp_module_going(struct module *mod);
 
+#define klp_create_patch_or_die(mod)                                    \
+({                                                                      \
+        struct klp_patch *__patch;                                      \
+                                                                        \
+        __patch = klp_create_empty_patch(mod);                          \
+        if (IS_ERR(__patch)) {                                          \
+                pr_err("livepatch: failed to create empty patch (%ld)\n", \
+                       PTR_ERR(__patch));                               \
+                return PTR_ERR(__patch);                                \
+        }                                                               \
+        __patch;                                                        \
+})
+
+#define klp_add_object_or_die(patch, obj_name)                          \
+({                                                                      \
+        struct klp_object *__obj;                                       \
+                                                                        \
+        __obj = klp_add_object(patch, obj_name);                        \
+        if (IS_ERR(__obj)) {                                            \
+                pr_err("livepatch: failed to add the object '%s' for the patch '%s' (%ld)\n", \
+                       obj_name ? obj_name : "vmlinux",                 \
+                       patch->mod->name, PTR_ERR(__obj));               \
+                WARN_ON(klp_release_patch(patch));                      \
+                return PTR_ERR(__obj);                                  \
+        }                                                               \
+        __obj;                                                          \
+})
+
+#define klp_add_func_or_die(patch, obj, old_name, new_func, sympos)     \
+({                                                                      \
+        struct klp_func *__func;                                        \
+                                                                        \
+        __func = klp_add_func(obj, old_name, new_func, sympos);         \
+        if (IS_ERR(__func)) {                                           \
+                pr_err("livepatch: failed to add the function '%s' for the object '%s' in the patch '%s' (%ld)\n", \
+                       old_name ? old_name : "NULL",                    \
+                       obj->name ? obj->name : "vmlinux",               \
+                       patch->mod->name, PTR_ERR(__func));              \
+                WARN_ON(klp_release_patch(patch));                      \
+                return PTR_ERR(__func);                                 \
+        }                                                               \
+        __func;                                                         \
+})
+
 #else /* !CONFIG_LIVEPATCH */
 
 static inline int klp_module_coming(struct module *mod) { return 0; }
diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
index 5c2bc1052691..520be7dcd73a 100644
--- a/kernel/livepatch/core.c
+++ b/kernel/livepatch/core.c
@@ -451,7 +451,7 @@ static void klp_disable_object(struct klp_object *obj)
 {
         struct klp_func *func;
 
-        klp_for_each_func(obj, func)
+        list_for_each_entry(func, &obj->funcs, list)
                 if (func->state == KLP_ENABLED)
                         klp_disable_func(func);
 
@@ -469,7 +469,7 @@ static int klp_enable_object(struct klp_object *obj)
         if (WARN_ON(!klp_is_object_loaded(obj)))
                 return -EINVAL;
 
-        klp_for_each_func(obj, func) {
+        list_for_each_entry(func, &obj->funcs, list) {
                 ret = klp_enable_func(func);
                 if (ret) {
                         klp_disable_object(obj);
@@ -492,7 +492,7 @@ static int __klp_disable_patch(struct klp_patch *patch)
 
         pr_notice("disabling patch '%s'\n", patch->mod->name);
 
-        klp_for_each_object(patch, obj) {
+        list_for_each_entry(obj, &patch->objs, list) {
                 if (obj->state == KLP_ENABLED)
                         klp_disable_object(obj);
         }
@@ -552,7 +552,7 @@ static int __klp_enable_patch(struct klp_patch *patch)
 
         pr_notice("enabling patch '%s'\n", patch->mod->name);
 
-        klp_for_each_object(patch, obj) {
+        list_for_each_entry(obj, &patch->objs, list) {
                 if (!klp_is_object_loaded(obj))
                         continue;
 
@@ -668,10 +668,13 @@ static struct attribute *klp_patch_attrs[] = {
 
 static void klp_kobj_release_patch(struct kobject *kobj)
 {
+        struct klp_patch *patch = container_of(kobj, struct klp_patch, kobj);
+
         /*
          * Once we have a consistency model we'll need to module_put() the
          * patch module here.  See klp_register_patch() for more details.
          */
+        kfree(patch);
 }
 
 static struct kobj_type klp_ktype_patch = {
@@ -682,6 +685,10 @@ static struct kobj_type klp_ktype_patch = {
 
 static void klp_kobj_release_object(struct kobject *kobj)
 {
+        struct klp_object *obj = container_of(kobj, struct klp_object, kobj);
+
+        kfree(obj->name);
+        kfree(obj);
 }
 
 static struct kobj_type klp_ktype_object = {
@@ -691,6 +698,10 @@ static struct kobj_type klp_ktype_object = {
 
 static void klp_kobj_release_func(struct kobject *kobj)
 {
+        struct klp_func *func = container_of(kobj, struct klp_func, kobj);
+
+        kfree(func->old_name);
+        kfree(func);
 }
 
 static struct kobj_type klp_ktype_func = {
@@ -699,60 +710,89 @@ static struct kobj_type klp_ktype_func = {
 };
 
 /*
- * Free all functions' kobjects in the array up to some limit. When limit is
- * NULL, all kobjects are freed.
+ * Free all klp_func structures listed for the given object. It is called
+ * also when the patch creation or registration fails and some kobjects are
+ * not initialized. For these, the release function must be called directly.
  */
-static void klp_free_funcs_limited(struct klp_object *obj,
-                                   struct klp_func *limit)
+static void klp_release_funcs(struct klp_object *obj)
 {
-        struct klp_func *func;
-
-        for (func = obj->funcs; func->old_name && func != limit; func++)
-                kobject_put(&func->kobj);
+        struct klp_func *func, *tmp;
+
+        list_for_each_entry_safe(func, tmp, &obj->funcs, list) {
+                list_del(&func->list);
+                if (func->kobj.state_initialized)
+                        kobject_put(&func->kobj);
+                else
+                        klp_kobj_release_func(&func->kobj);
+        }
 }
 
 /* Clean up when a patched object is unloaded */
-static void klp_free_object_loaded(struct klp_object *obj)
+static void klp_unregister_object_loaded(struct klp_object *obj)
 {
         struct klp_func *func;
 
         obj->mod = NULL;
 
-        klp_for_each_func(obj, func)
+        list_for_each_entry(func, &obj->funcs, list)
                 func->old_addr = 0;
 }
 
 /*
- * Free all objects' kobjects in the array up to some limit. When limit is
- * NULL, all kobjects are freed.
+ * Free all klp_object structures listed for the given patch. It is called
+ * also when the patch creation or registration fails and some kobjects are
+ * not initialized. For these, the release function must be called directly.
  */
-static void klp_free_objects_limited(struct klp_patch *patch,
-                                     struct klp_object *limit)
+static void klp_release_objects(struct klp_patch *patch)
 {
-        struct klp_object *obj;
-
-        for (obj = patch->objs; obj->funcs && obj != limit; obj++) {
-                klp_free_funcs_limited(obj, NULL);
-                kobject_put(&obj->kobj);
+        struct klp_object *obj, *tmp;
+
+        list_for_each_entry_safe(obj, tmp, &patch->objs, list) {
+                klp_release_funcs(obj);
+                list_del(&obj->list);
+                if (obj->kobj.state_initialized)
+                        kobject_put(&obj->kobj);
+                else
+                        klp_kobj_release_object(&obj->kobj);
         }
 }
 
-static void klp_free_patch(struct klp_patch *patch)
+/**
+ * klp_release_patch() - unregisters a patch and frees all structures
+ * @patch:      Disabled patch to be released
+ *
+ * Removes the patch from the global list, removes the sysfs interface
+ * and frees all the data structures for the patch, objects, and functions.
+ *
+ * Return: 0 on success, otherwise error
+ */
+int klp_release_patch(struct klp_patch *patch)
 {
-        klp_free_objects_limited(patch, NULL);
+        int ret = 0;
+
+        mutex_lock(&klp_mutex);
+
+        if (patch->state == KLP_ENABLED) {
+                ret = -EBUSY;
+                goto err;
+        }
+
+        klp_release_objects(patch);
         if (!list_empty(&patch->list))
                 list_del(&patch->list);
-        kobject_put(&patch->kobj);
+
+        if (patch->kobj.state_initialized)
+                kobject_put(&patch->kobj);
+        else
+                klp_kobj_release_patch(&patch->kobj);
+
+err:
+        mutex_unlock(&klp_mutex);
+        return ret;
 }
+EXPORT_SYMBOL_GPL(klp_release_patch);
 
-static int klp_init_func(struct klp_object *obj, struct klp_func *func)
+static int klp_register_func(struct klp_object *obj, struct klp_func *func)
 {
-        if (!func->old_name || !func->new_func)
-                return -EINVAL;
-
-        INIT_LIST_HEAD(&func->stack_node);
-        func->state = KLP_DISABLED;
-
         /* The format for the sysfs directory is <function,sympos> where sympos
          * is the nth occurrence of this symbol in kallsyms for the patched
          * object. If the user selects 0 for old_sympos, then 1 will be used
@@ -764,7 +804,7 @@ static int klp_init_func(struct klp_object *obj, struct klp_func *func)
 }
 
 /* parts of the initialization that is done only when the object is loaded */
-static int klp_init_object_loaded(struct klp_patch *patch,
+static int klp_register_object_loaded(struct klp_patch *patch,
                                   struct klp_object *obj)
 {
         struct klp_func *func;
@@ -774,7 +814,7 @@ static int klp_init_object_loaded(struct klp_patch *patch,
         if (ret)
                 return ret;
 
-        klp_for_each_func(obj, func) {
+        list_for_each_entry(func, &obj->funcs, list) {
                 ret = klp_find_object_symbol(obj->name, func->old_name,
                                              func->old_sympos,
                                              &func->old_addr);
@@ -785,18 +825,12 @@ static int klp_init_object_loaded(struct klp_patch *patch,
         return 0;
 }
 
-static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
+static int klp_register_object(struct klp_patch *patch, struct klp_object *obj)
 {
         struct klp_func *func;
         int ret;
         const char *name;
 
-        if (!obj->funcs)
-                return -EINVAL;
-
-        obj->state = KLP_DISABLED;
-        obj->mod = NULL;
-
         klp_find_object_module(obj);
 
         name = klp_is_module(obj) ? obj->name : "vmlinux";
@@ -805,137 +839,214 @@ static int klp_init_object(struct klp_patch *patch, struct klp_object *obj)
         if (ret)
                 return ret;
 
-        klp_for_each_func(obj, func) {
-                ret = klp_init_func(obj, func);
-                if (ret)
-                        goto free;
-        }
-
-        if (klp_is_object_loaded(obj)) {
-                ret = klp_init_object_loaded(patch, obj);
+        list_for_each_entry(func, &obj->funcs, list) {
+                ret = klp_register_func(obj, func);
                 if (ret)
-                        goto free;
+                        return ret;
         }
 
-        return 0;
+        if (klp_is_object_loaded(obj))
+                ret = klp_register_object_loaded(patch, obj);
 
-free:
-        klp_free_funcs_limited(obj, func);
-        kobject_put(&obj->kobj);
         return ret;
 }
 
-static int klp_init_patch(struct klp_patch *patch)
+/**
+ * klp_register_patch() - registers a patch
+ * @patch:      Patch to be registered
+ *
+ * Creates sysfs interface for the given patch, detects missing
+ * information for loaded objects, links the patch to the global list.
+ *
+ * Never add new objects of functions once the patch gets registered.
+ * These operations are not safe wrt coming or leaving modules and
+ * also wrt enabling or disabling the patch.
+ *
+ * Return: 0 on success, otherwise error
+ */
+int klp_register_patch(struct klp_patch *patch)
 {
         struct klp_object *obj;
         int ret;
 
-        if (!patch->objs)
-                return -EINVAL;
+        if (!klp_initialized())
+                return -ENODEV;
+
+        /*
+         * A reference is taken on the patch module to prevent it from being
+         * unloaded. Right now, we don't allow patch modules to unload since
+         * there is currently no method to determine if a thread is still
+         * running in the patched code contained in the patch module once
+         * the ftrace registration is successful.
+         */
+        if (!try_module_get(patch->mod))
+                return -ENODEV;
 
         mutex_lock(&klp_mutex);
 
-        patch->state = KLP_DISABLED;
+        if (klp_is_patch_registered(patch)) {
+                ret = -EINVAL;
+                goto err;
+        }
 
         ret = kobject_init_and_add(&patch->kobj, &klp_ktype_patch,
                                    klp_root_kobj, "%s", patch->mod->name);
         if (ret)
-                goto unlock;
+                goto err;
 
-        klp_for_each_object(patch, obj) {
-                ret = klp_init_object(patch, obj);
+        list_for_each_entry(obj, &patch->objs, list) {
+                ret = klp_register_object(patch, obj);
                 if (ret)
-                        goto free;
+                        goto err;
         }
 
         list_add_tail(&patch->list, &klp_patches);
 
+err:
         mutex_unlock(&klp_mutex);
-
-        return 0;
-
-free:
-        klp_free_objects_limited(patch, obj);
-        kobject_put(&patch->kobj);
-unlock:
-        mutex_unlock(&klp_mutex);
         return ret;
 }
+EXPORT_SYMBOL_GPL(klp_register_patch);
 
 /**
- * klp_unregister_patch() - unregisters a patch
- * @patch:      Disabled patch to be unregistered
+ * klp_add_func() - allocate and initialize struct klp_func, link it into
+ *      the given object structure
+ * @obj:        object structure where the patched function belongs to
+ * @old_name:   name of the function to be patched
+ * @new_func:   pointer to the new function code
+ * @old_sympos: a hint indicating which symbol position the old function
+ *      can be found
  *
- * Frees the data structures and removes the sysfs interface.
+ * Allocates and initializes struct klp_func. Then it links the structure
+ * into the given object structure.
  *
- * Return: 0 on success, otherwise error
+ * The structure must be freed only using klp_release_patch() called for
+ * the related patch structure!
+ *
+ * Never add new functions once the patch is registered! You would risk
+ * an inconsistent state wrt coming or leaving modules and also wrt
+ * enabling or disabling the patch.
+ *
+ * Return: valid pointer on success, ERR_PTR otherwise.
  */
-int klp_unregister_patch(struct klp_patch *patch)
+struct klp_func *klp_add_func(struct klp_object *obj, const char *old_name,
+                              void *new_func, unsigned long old_sympos)
 {
-        int ret = 0;
+        struct klp_func *func;
 
-        mutex_lock(&klp_mutex);
+        if (!obj || !old_name || !new_func || obj->state == KLP_ENABLED)
+                return ERR_PTR(-EINVAL);
 
-        if (!klp_is_patch_registered(patch)) {
-                ret = -EINVAL;
-                goto out;
+        func = kzalloc(sizeof(*func), GFP_KERNEL);
+        if (!func)
+                return ERR_PTR(-ENOMEM);
+
+        func->old_name = kstrdup(old_name, GFP_KERNEL);
+        if (!func->old_name) {
+                kfree(func);
+                return ERR_PTR(-ENOMEM);
         }
 
-        if (patch->state == KLP_ENABLED) {
-                ret = -EBUSY;
-                goto out;
+        func->new_func = new_func;
+        func->old_sympos = old_sympos;
+        INIT_LIST_HEAD(&func->list);
+        INIT_LIST_HEAD(&func->stack_node);
+        func->state = KLP_DISABLED;
+
+        list_add(&func->list, &obj->funcs);
+
+        return func;
+}
+EXPORT_SYMBOL_GPL(klp_add_func);
+
+/**
+ * klp_add_object() - allocate and initialize struct klp_object, link it into
+ *      to the given patch
+ * &patch      patch structure that will modify the given object
+ * @name:       name of the patched object, it is a name of a kernel module
+ *      or NULL for vmlinux
+ *
+ * Allocates and initializes struct klp_object. Links the structure
+ * into the given patch structure.
+ *
+ * The structure must be freed only using klp_release_patch() called for
+ * the related patch structure!
+ *
+ * Never add new objects once the patch is registered! You would risk
+ * an inconsistent state wrt coming or leaving modules and also wrt
+ * enabling or disabling the patch.
+ *
+ * Return: valid pointer on success, ERR_PTR otherwise.
+ */
+struct klp_object *klp_add_object(struct klp_patch *patch, const char *name)
+{
+        struct klp_object *obj;
+
+        if (!patch || !list_empty(&patch->list))
+                return ERR_PTR(-EINVAL);
+
+        obj = kzalloc(sizeof(*obj), GFP_KERNEL);
+        if (!obj)
+                return ERR_PTR(-ENOMEM);
+
+        if (name) {
+                obj->name = kstrdup(name, GFP_KERNEL);
+                if (!obj->name) {
+                        kfree(obj);
+                        return ERR_PTR(-ENOMEM);
+                }
         }
 
-        klp_free_patch(patch);
+        INIT_LIST_HEAD(&obj->funcs);
+        INIT_LIST_HEAD(&obj->list);
+        obj->state = KLP_DISABLED;
+        obj->mod = NULL;
 
-out:
-        mutex_unlock(&klp_mutex);
-        return ret;
+        list_add(&obj->list, &patch->objs);
+
+        return obj;
 }
-EXPORT_SYMBOL_GPL(klp_unregister_patch);
+EXPORT_SYMBOL_GPL(klp_add_object);
 
 /**
- * klp_register_patch() - registers a patch
- * @patch:      Patch to be registered
+ * klp_create_empty_patch() - allocate and initialize struct klp_patch
+ * @mod:        kernel module that provides the livepatch
  *
- * Initializes the data structure associated with the patch and
- * creates the sysfs interface.
+ * Allocates and initializes struct klp_patch. The links to the patched
+ * objects and functions can be added using klp_add_object() and
+ * klp_add_func().
  *
- * Return: 0 on success, otherwise error
+ * The structure must be freed only using klp_release_patch()!
+ *
+ * Return: valid pointer on success, ERR_PTR otherwise.
  */
-int klp_register_patch(struct klp_patch *patch)
+struct klp_patch *klp_create_empty_patch(struct module *mod)
 {
-        int ret;
+        struct klp_patch *patch;
 
-        if (!patch || !patch->mod)
-                return -EINVAL;
+        if (!mod)
+                return ERR_PTR(-EINVAL);
 
-        if (!is_livepatch_module(patch->mod)) {
-                pr_err("module %s is not marked as a livepatch module",
-                       patch->mod->name);
-                return -EINVAL;
+        if (!is_livepatch_module(mod)) {
+                pr_err("module '%s' is not marked as a livepatch module\n",
+                       mod->name);
+                return ERR_PTR(-EINVAL);
         }
 
-        if (!klp_initialized())
-                return -ENODEV;
+        patch = kzalloc(sizeof(*patch), GFP_KERNEL);
+        if (!patch)
+                return ERR_PTR(-ENOMEM);
 
-        /*
-         * A reference is taken on the patch module to prevent it from being
-         * unloaded. Right now, we don't allow patch modules to unload since
-         * there is currently no method to determine if a thread is still
-         * running in the patched code contained in the patch module once
-         * the ftrace registration is successful.
-         */
-        if (!try_module_get(patch->mod))
-                return -ENODEV;
+        INIT_LIST_HEAD(&patch->objs);
+        INIT_LIST_HEAD(&patch->list);
+        patch->state = KLP_DISABLED;
+        patch->mod = mod;
 
-        ret = klp_init_patch(patch);
-        if (ret)
-                module_put(patch->mod);
+        return patch;
 
-        return ret;
 }
-EXPORT_SYMBOL_GPL(klp_register_patch);
+EXPORT_SYMBOL_GPL(klp_create_empty_patch);
 
 int klp_module_coming(struct module *mod)
 {
@@ -955,13 +1066,13 @@ int klp_module_coming(struct module *mod)
                 mod->klp_alive = true;
 
         list_for_each_entry(patch, &klp_patches, list) {
-                klp_for_each_object(patch, obj) {
+                list_for_each_entry(obj, &patch->objs, list) {
                         if (!klp_is_module(obj) || strcmp(obj->name, mod->name))
                                 continue;
 
                         obj->mod = mod;
 
-                        ret = klp_init_object_loaded(patch, obj);
+                        ret = klp_register_object_loaded(patch, obj);
                         if (ret) {
                                 pr_warn("failed to initialize patch '%s' for module '%s' (%d)\n",
                                         patch->mod->name, obj->mod->name, ret);
@@ -997,7 +1108,7 @@ err:
         pr_warn("patch '%s' failed for module '%s', refusing to load module '%s'\n",
                 patch->mod->name, obj->mod->name, obj->mod->name);
         mod->klp_alive = false;
-        klp_free_object_loaded(obj);
+        klp_unregister_object_loaded(obj);
         mutex_unlock(&klp_mutex);
 
         return ret;
@@ -1021,7 +1132,7 @@ void klp_module_going(struct module *mod)
         mod->klp_alive = false;
 
         list_for_each_entry(patch, &klp_patches, list) {
-                klp_for_each_object(patch, obj) {
+                list_for_each_entry(obj, &patch->objs, list) {
                         if (!klp_is_module(obj) || strcmp(obj->name, mod->name))
                                 continue;
 
@@ -1031,7 +1142,7 @@ void klp_module_going(struct module *mod)
                                 klp_disable_object(obj);
                         }
 
-                        klp_free_object_loaded(obj);
+                        klp_unregister_object_loaded(obj);
                         break;
                 }
         }
diff --git a/samples/livepatch/livepatch-sample.c b/samples/livepatch/livepatch-sample.c
index b9bec5f8c725..187e4eab6446 100644
--- a/samples/livepatch/livepatch-sample.c
+++ b/samples/livepatch/livepatch-sample.c
@@ -149,55 +149,39 @@ static ssize_t livepatch_b_show(struct kobject *kobj,
         return sprintf(buf, "%s: this has been livepatched\n", attr->attr.name);
 }
 
-static struct klp_func funcs[] = {
-        {
-                .old_name = "cmdline_proc_show",
-                .new_func = livepatch_cmdline_proc_show,
-        }, {
-                .old_name = "uptime_proc_show",
-                .new_func = livepatch_uptime_proc_show,
-        }, {
-                .old_name = "show_console_dev",
-                .new_func = livepatch_show_console_dev,
-        }, { }
-};
-
-static struct klp_func kobject_example_funcs[] = {
-        {
-                .old_name = "foo_show",
-                .new_func = livepatch_foo_show,
-        }, {
-                .old_name = "b_show",
-                .new_func = livepatch_b_show,
-        }, { }
-};
-
-
-static struct klp_object objs[] = {
-        {
-                /* name being NULL means vmlinux */
-                .funcs = funcs,
-        }, {
-                .name = "kobject_example",
-                .funcs = kobject_example_funcs,
-        }, { }
-};
-
-static struct klp_patch patch = {
-        .mod = THIS_MODULE,
-        .objs = objs,
-};
+static struct klp_patch *patch;
 
 static int livepatch_init(void)
 {
+        struct klp_object *obj;
         int ret;
 
-        ret = klp_register_patch(&patch);
-        if (ret)
+        /* create empty patch structure */
+        patch = klp_create_patch_or_die(THIS_MODULE);
+
+        /* add info about changes against vmlinux */
+        obj = klp_add_object_or_die(patch, NULL);
+        klp_add_func_or_die(patch, obj, "cmdline_proc_show",
+                            livepatch_cmdline_proc_show, 0);
+        klp_add_func_or_die(patch, obj, "uptime_proc_show",
+                            livepatch_uptime_proc_show, 0);
+        klp_add_func_or_die(patch, obj, "show_console_dev",
+                            livepatch_show_console_dev, 0);
+
+        /* add info about changes against the module kobject_example */
+        obj = klp_add_object_or_die(patch, "kobject_example");
+        klp_add_func_or_die(patch, obj, "foo_show",
+                            livepatch_foo_show, 0);
+        klp_add_func_or_die(patch, obj, "b_show", livepatch_b_show, 0);
+
+        ret = klp_register_patch(patch);
+        if (ret) {
+                WARN_ON(klp_release_patch(patch));
                 return ret;
-        ret = klp_enable_patch(&patch);
+        }
+
+        ret = klp_enable_patch(patch);
         if (ret) {
-                WARN_ON(klp_unregister_patch(&patch));
+                WARN_ON(klp_release_patch(patch));
                 return ret;
         }
         return 0;
@@ -205,8 +189,8 @@ static int livepatch_init(void)
 
 static void livepatch_exit(void)
 {
-        WARN_ON(klp_disable_patch(&patch));
-        WARN_ON(klp_unregister_patch(&patch));
+        WARN_ON(klp_disable_patch(patch));
+        WARN_ON(klp_release_patch(patch));
 }
 
 module_init(livepatch_init);
-- 
1.8.5.6