Re: [patch 1/6] mm/vmalloc: Prevent stale TLBs in fully utilized blocks

On Wed, May 24 2023 at 17:32, Baoquan He wrote:
> On 05/23/23 at 04:02pm, Thomas Gleixner wrote:
>> @@ -2236,9 +2236,10 @@ static void _vm_unmap_aliases(unsigned l
>>  	for_each_possible_cpu(cpu) {
>>  		struct vmap_block_queue *vbq = &per_cpu(vmap_block_queue, cpu);
>>  		struct vmap_block *vb;
>> +		unsigned long idx;
>>  
>>  		rcu_read_lock();
>
> Do we need to remove this rcu_read_lock()/rcu_read_unlock() pair, since
> it marks the RCU read-side critical section on the vbq->free list?

And what protects the xarray lookup?

>> -		list_for_each_entry_rcu(vb, &vbq->free, free_list) {
>> +		xa_for_each(&vbq->vmap_blocks, idx, vb) {
>>  			spin_lock(&vb->lock);
>>  			if (vb->dirty && vb->dirty != VMAP_BBMAP_BITS) {
>>  				unsigned long va_start = vb->va->va_start;
>> 
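
For reference, a minimal sketch of the pattern under discussion. The
identifiers are the ones from the hunk above; the assumption (which
holds for vmap blocks, as they are freed via kfree_rcu()) is that
entries stay valid for an RCU grace period after removal:

	/*
	 * xa_for_each() does its lookups under RCU internally, but the
	 * returned vmap_block pointers are only safe to dereference as
	 * long as the caller holds rcu_read_lock(), because the blocks
	 * are freed after an RCU grace period.
	 */
	rcu_read_lock();
	xa_for_each(&vbq->vmap_blocks, idx, vb) {
		spin_lock(&vb->lock);
		/* Inspect/flush @vb while it is pinned by vb->lock. */
		spin_unlock(&vb->lock);
	}
	rcu_read_unlock();

So the rcu_read_lock()/rcu_read_unlock() pair has to stay.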



