Hi all,
I looked through the vulnerability some more and came up with the following analysis so far. Please let me know what you think.
Consider the following interleaving:
- In __ksm_enter(), thread 1 inserts the new mm into the list of mms at a position determined by ksm_run, which at this point is still KSM_RUN_MERGE, so the mm is inserted just behind the scanning cursor.
- In run_store(), thread 2 changes ksm_run to KSM_RUN_UNMERGE and executes unmerge_and_remove_all_rmap_items(), where it can free the newly added mm via mmdrop().
- Back in __ksm_enter(), thread 1 continues execution and updates the fields of the new mm even though it has been freed, resulting in a use-after-free (sketched below).
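To make the window concrete, here is the tail of __ksm_enter() from mainline mm/ksm.c, abridged, with my annotations marking where run_store() could interleave under the analysis above:

	spin_lock(&ksm_mmlist_lock);
	insert_to_mm_slots_hash(mm, mm_slot);
	if (ksm_run & KSM_RUN_UNMERGE)		/* thread 1 reads KSM_RUN_MERGE */
		list_add_tail(&mm_slot->mm_list, &ksm_mm_head.mm_list);
	else					/* so this branch is taken */
		list_add_tail(&mm_slot->mm_list, &ksm_scan.mm_slot->mm_list);
	spin_unlock(&ksm_mmlist_lock);
	/*
	 * Window: run_store() can now set KSM_RUN_UNMERGE and call
	 * unmerge_and_remove_all_rmap_items(), which can mmdrop() and
	 * free this mm before we take our reference below.
	 */
	set_bit(MMF_VM_MERGEABLE, &mm->flags);	/* potential use-after-free */
	mmgrab(mm);				/* reference taken too late */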
Thanks!
On Thu, Aug 11, 2022 at 7:00 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
On Tue, 2 Aug 2022 23:15:50 +0800 Kefeng Wang <wangkefeng.wang@xxxxxxxxxx> wrote:
> Abhishek reported a data-race issue,
OK, but it would be better to perform an analysis of the alleged bug,
describe the potential effects if the race is hit, etc.
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2507,6 +2507,7 @@ int __ksm_enter(struct mm_struct *mm)
>  {
>  	struct mm_slot *mm_slot;
>  	int needs_wakeup;
> +	bool ksm_run_unmerge;
> 
>  	mm_slot = alloc_mm_slot();
>  	if (!mm_slot)
> @@ -2515,6 +2516,10 @@
>  	/* Check ksm_run too?  Would need tighter locking */
>  	needs_wakeup = list_empty(&ksm_mm_head.mm_list);
> 
> +	mutex_lock(&ksm_thread_mutex);
> +	ksm_run_unmerge = !!(ksm_run & KSM_RUN_UNMERGE);
> +	mutex_unlock(&ksm_thread_mutex);
> +
>  	spin_lock(&ksm_mmlist_lock);
>  	insert_to_mm_slots_hash(mm, mm_slot);
>  	/*
> @@ -2527,7 +2532,7 @@
>  	 * scanning cursor, otherwise KSM pages in newly forked mms will be
>  	 * missed: then we might as well insert at the end of the list.
>  	 */
> -	if (ksm_run & KSM_RUN_UNMERGE)
> +	if (ksm_run_unmerge)
run_store() can alter ksm_run right here, so __ksm_enter() is still
acting on the old setting?
>  		list_add_tail(&mm_slot->mm_list, &ksm_mm_head.mm_list);
>  	else
>  		list_add_tail(&mm_slot->mm_list, &ksm_scan.mm_slot->mm_list);
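Concretely, with the patch applied the schedule could still look like this
(a sketch of the interleaving, not something reproduced in testing):

    Thread 1 (__ksm_enter)                   Thread 2 (run_store)
    ----------------------                   --------------------
    mutex_lock(&ksm_thread_mutex);
    ksm_run_unmerge = false; /* snapshot */
    mutex_unlock(&ksm_thread_mutex);
                                             mutex_lock(&ksm_thread_mutex);
                                             ksm_run = KSM_RUN_UNMERGE;
                                             unmerge_and_remove_all_rmap_items();
    spin_lock(&ksm_mmlist_lock);
    insert_to_mm_slots_hash(mm, mm_slot);
    if (ksm_run_unmerge)     /* stale: still false */
            ...
    else
            list_add_tail(&mm_slot->mm_list,
                          &ksm_scan.mm_slot->mm_list);
    spin_unlock(&ksm_mmlist_lock);

So the snapshot does not close the race: the insertion can still be made on
a stale value, and the window between spin_unlock() and mmgrab() is
unchanged. Closing it would need the check and the insertion to be atomic
with respect to run_store()'s unmerge pass, the "tighter locking" the
existing comment alludes to.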