Re: [patch 1/6] mm/vmalloc: Prevent stale TLBs in fully utilized blocks

On Wed, May 24 2023 at 19:24, Baoquan He wrote:
> On 05/24/23 at 11:51am, Thomas Gleixner wrote:
>    vb_free(Y)
>      vb->dirty += order;
>      if (vb->dirty == VMAP_BBMAP_BITS) // Condition is _false_
>         free_vmap_block(); 
>         -->free_vmap_area_noflush()
>            -->merge_or_add_vmap_area(va,
>                 &purge_vmap_area_root, &purge_vmap_area_list);

This is irrelevant. The path is _NOT_ taken. You even copied the
comment:

       if (vb->dirty == VMAP_BBMAP_BITS) // Condition is _false_

Did you actually read what I wrote?

Again: It _CANNOT_ be on the purge list because it has active mappings:

1  X = vb_alloc()
   ...  
   Y = vb_alloc()
     vb->free -= order;               // Free space goes to 0
     if (!vb->free)
2      list_del(vb->free_list);       // Block is removed from free list
   ...
   vb_free(Y)
     vb->dirty += order;
3    if (vb->dirty == VMAP_BBMAP_BITS) // Condition is _false_
                                       // because #1 $X is still mapped
                                       // so block is _NOT_ freed and
                                       // _NOT_ put on the purge list

4   unmap_aliases()
     walk_free_list()           // Does not find it because of #2
     walk_purge_list()          // Does not find it because of #3

If the resulting flush range does not cover the $Y TLBs, then stale
TLBs stay around.
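
For reference, this is roughly what the flush range collection in
_vm_unmap_aliases() looks like (simplified sketch from memory; locking
details and the actual flush call are trimmed, and start/end/flush are
the function's flush range accumulators). It only visits blocks which
are still linked on vbq->free, so the block above, removed in #2,
never widens the range:

  for_each_possible_cpu(cpu) {
          struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
          struct vmap_block *vb;

          rcu_read_lock();
          list_for_each_entry_rcu(vb, &vbq->free, free_list) {
                  spin_lock(&vb->lock);
                  /* Add this block's dirty range to the flush range */
                  if (vb->dirty) {
                          unsigned long va_start = vb->va->va_start;
                          unsigned long s, e;

                          s = va_start + (vb->dirty_min << PAGE_SHIFT);
                          e = va_start + (vb->dirty_max << PAGE_SHIFT);

                          start = min(s, start);
                          end   = max(e, end);
                          flush = 1;
                  }
                  spin_unlock(&vb->lock);
          }
          rcu_read_unlock();
  }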

The xarray walk finds it and guarantees that the TLBs are gone when
unmap_aliases() returns, which is the whole purpose of that function.
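
A minimal sketch of the kind of walk meant here, assuming the global
vmap_blocks xarray in mm/vmalloc.c which maps a block's index to its
struct vmap_block (the actual patch may organize the lookup
differently, e.g. per CPU), with the same start/end/flush accumulators
as above. Illustration only, not the patch itself:

  struct vmap_block *vb;
  unsigned long idx;

  rcu_read_lock();
  xa_for_each(&vmap_blocks, idx, vb) {
          spin_lock(&vb->lock);
          /*
           * Fully utilized blocks which dropped off vbq->free in #2
           * are visited here as well. A fully dirty block already
           * went through free_vmap_block() and is handled via the
           * purge list, so it can be skipped.
           */
          if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
                  unsigned long va_start = vb->va->va_start;
                  unsigned long s, e;

                  s = va_start + (vb->dirty_min << PAGE_SHIFT);
                  e = va_start + (vb->dirty_max << PAGE_SHIFT);

                  start = min(s, start);
                  end   = max(e, end);
                  flush = 1;
          }
          spin_unlock(&vb->lock);
  }
  rcu_read_unlock();

So the block holding $X, which is neither on the free list nor on the
purge list, still contributes its recorded dirty_min/dirty_max range,
and the subsequent flush_tlb_kernel_range() covers the stale $Y TLBs.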

Thanks,

        tglx
