Re: [Resend PATCHv4 1/1] mm: fix incorrect vbq reference in purge_fragmented_block

On 06/13/24 at 01:28pm, Uladzislau Rezki wrote:
> On Thu, Jun 13, 2024 at 04:41:34PM +0800, Baoquan He wrote:
> > On 06/12/24 at 01:27pm, Uladzislau Rezki wrote:
> > > On Wed, Jun 12, 2024 at 10:00:14AM +0800, Zhaoyang Huang wrote:
> > > > On Wed, Jun 12, 2024 at 2:16 AM Uladzislau Rezki <urezki@xxxxxxxxx> wrote:
> > > > >
> > > > > >
> > > > > > Sorry to bother you again. Are there any further comments or a new
> > > > > > patch for this? It blocks some ANDROID test cases, since that tree
> > > > > > only accepts ACKed patches.
> > > > > >
> > > > > I have just returned from vacation. Give me some time to review your
> > > > > patch. Meanwhile, do you have a reproducer? I would like to see how
> > > > > I can trigger the issue in question.
> > > > This bug arises in a system-wide Android test and has been
> > > > reported by many vendors. Repeatedly mounting/unmounting an erofs
> > > > partition should be a simple reproducer. IMO, the logic defect is
> > > > obvious enough to be found by code review.
> > > >
> > > Baoquan, any objection about this v4?
> > > 
> > > Your proposal of inserting a new vmap-block into the queue it
> > > belongs to, i.e. not per-this-CPU, should fix the issue. The problem
> > > is that such an approach does __not__ pre-load the current CPU, which
> > > is not good.
> > 
> > With my understanding, by the time we start inserting the vb into
> > vbq->xa and vbq->free, the vmap_area allocation has already been done;
> > which CPU's vbq->free it gets added to doesn't impact the CPU
> > preloading, does it?
> > 
> > Not sure if I'm missing anything about the CPU preloading.
> > 
> Like explained below in this email-thread:
> 
> vb_alloc() inserts a new block _not_ on this CPU. When this CPU tries
> to allocate again, its free_list is still empty (because in the previous
> step the block was inserted into another CPU's block queue), so it
> allocates yet another new block, which is most likely inserted on the
> next zone/CPU. And so on.

Thanks for the detailed explanation, got it now.

It's a pity we can't unify the xa and the list into one vbq structure
based on one principle.

> 
> See:
> 
> <snip vb_alloc>
> ...
> 	rcu_read_lock();
> 	vbq = raw_cpu_ptr(&vmap_block_queue); <- Here it is correctly accessing this CPU
> 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
> 		unsigned long pages_off;
> ...
> <snip vb_alloc>
> 
> <snip new_vmap_block>
> ...
> 	vbq = addr_to_vbq(va->va_start); <- Here we insert based on hashing, i.e. not to this CPU-block-queue
> 	spin_lock(&vbq->lock);
> 	list_add_tail_rcu(&vb->free_list, &vbq->free);
> 	spin_unlock(&vbq->lock);
> ...
> <snip new_vmap_block>
> 
> Thanks!
> 
> --
> Uladzislau Rezki
> 




