RE: [PATCH v7 00/16] Intel IPU3 ImgU patchset

Hi Tomasz, Laurent,

> Subject: Re: [PATCH v7 00/16] Intel IPU3 ImgU patchset
> 
> Hello Raj,
> 
> On Mon, Jan 21, 2019 at 02:41:03PM +0900, Tomasz Figa wrote:
> >  On Wed, Jan 16, 2019 at 11:16 AM Mani, Rajmohan
> > <rajmohan.mani@xxxxxxxxx> wrote:
> > >> Subject: Re: [PATCH v7 00/16] Intel IPU3 ImgU patchset
> > >>
> > >> On Saturday, 12 January 2019 04:30:49 EET Mani, Rajmohan wrote:
> > >>
> > >> [snip]
> > >>
> > >>> I finally managed to reproduce the issue with 4.20-rc6, with KASAN
> > >>> enabled and with CONFIG_SLUB_DEBUG_ON with SLAB_STORE_USER.
> > >>
> > >> Nice! Thank you for your work.
> > >>
> > >>> The following line indicates the crash happens when yavta PID
> > >>> 10289 tries to free the memory.
> > >>>
> > >>> [  452.437844] BUG: KASAN: use-after-free in ipu3_dmamap_free+0x50/0x9c [ipu3_imgu]
> > >>> [  452.446123] Read of size 8 at addr ffff8881503481a0 by task yavta/10289
> > >>>
> > >>> The above looks to be normal, since it's the same task that
> > >>> allocated this memory.
> > >>> [  452.685731] Allocated by task 10289:
> > >>>
> > >>> Before the above happened, yavta/10187 came in and freed this
> > >>> memory per KASAN.
> > >>> [  452.787656] Freed by task 10187:
> > >>>
> > >>> Is this (one instance of yavta freeing the memory allocated by
> > >>> another instance of yavta) expected? Or does it indicate that mmap is
> > >>> giving the same address across these 2 instances of yavta? I need to
> > >>> debug / confirm the latter case.
> > >>
> > >> KASAN prints the task name (and process ID) to help you debug
> > >> the problem, but this doesn't mean that yavta is freeing the
> > >> memory. yavta exercises the V4L2 API exposed by the driver, and
> > >> internally, down the call stack, ipu3_dmamap_free() is called by
> > >> the driver. According to the backtraces you posted, this is in
> > >> response to a VIDIOC_STREAMOFF call from yavta. I would expect
> > >> VIDIOC_STREAMOFF to free DMA mappings created for the buffers on
> > >> the corresponding video nodes, and thus allocated by the same task.
> > >
> > > Ack.
> > >
> > >> The fact
> > >> that memory is allocated in one task and freed in another seems
> > >> weird to me in this case.
> > >>
> > >
> > > I have instrumented the ipu3 dma map code, with a change to skip dma
> > > free operations if the current->pid is not the same as the pid that
> > > originally did the dma alloc.
> > >
> > > There are no crashes in this case, as expected.
> > >
> > > I also confirmed that it is STREAM_ON/OFF that results in this crash.
> > > I need to spend more time on the alloc / free operations done by the
> > > yavta instances to see where the problem could be.
> > >
> > > The line below doesn't make sense, as the free call for pid 12986
> > > occurs first, before the alloc calls. The yavta application logs
> > > indicate the dma alloc has been done for pid 12986, although I don't
> > > see corresponding dma alloc calls from pid 12986.
> >
> >  I wonder if that doesn't mean that for some reason some V4L2 ioctls
> > done from a context other than the owner (the one that first allocated
> >  vb2 buffers) end up triggering some buffer freeing/re-allocation. For
> >  VB2 buffers that's normally prevented by the core, but possibly we do
> > some internal buffer management in non-buffer related V4L2 ioctls in
> > the driver?
> 
> I had a quick look at the driver, and found the following code in the
> VIDIOC_STREAMOFF handler ipu3_vb2_stop_streaming():
> 
>         /* Was this the first node with streaming disabled? */
>         if (imgu->streaming && ipu3_all_nodes_streaming(imgu, node)) {
>                 /* Yes, really stop streaming now */
>                 dev_dbg(dev, "IMGU streaming is ready to stop");
>                 r = imgu_s_stream(imgu, false);
>                 if (!r)
>                         imgu->streaming = false;
>         }
> 
> The queue is initialized in ipu3_v4l2_node_setup() with
> 
>         vbq->lock = &node->lock;
> 
> which means that concurrent VIDIOC_STREAMOFF operations on different
> nodes can race each other. Could you enable dynamic debugging to get the
> "IMGU streaming is ready to stop" message printed to the kernel log, and see if
> this could explain the double-free problem?
> 
> In any case this race condition should be handled by proper locking.
> Both the imgu->streaming and the ipu3_all_nodes_streaming() tests are very
> racy, and can lead to many different problems (failure at processing start also
> comes to mind).
> 

Thanks for your inputs.

So far I have confirmed that ipu3_vb2_stop_streaming() is called once for each
of the 4 yavta instances on exit, while ipu3_vb2_start_streaming() is called
for just one instance. This results in 4 free operations for a single alloc,
which causes this failure.
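
A quick way to double-check how many times the hardware stop path actually
runs (on top of the dynamic debug message Laurent mentioned) would be a
temporary counter next to the existing check. This is only a rough sketch --
the atomic counter is my addition, everything else mirrors the snippet quoted
above:

        /* Temporary debug aid: count how often the "really stop
         * streaming" branch is taken for a single start.
         */
        static atomic_t imgu_stop_count = ATOMIC_INIT(0);

        /* inside ipu3_vb2_stop_streaming(), around the existing check */
        if (imgu->streaming && ipu3_all_nodes_streaming(imgu, node)) {
                dev_info(dev, "IMGU stop path entered %d time(s)\n",
                         atomic_inc_return(&imgu_stop_count));
                r = imgu_s_stream(imgu, false);
                if (!r)
                        imgu->streaming = false;
        }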

The actual hardware stop in ipu3_vb2_stop_streaming() should happen just once,
just like the start in ipu3_vb2_start_streaming(). ipu3_vb2_stop_streaming()
should also be improved a bit to handle the case where multiple applications
each drive some of the nodes.

Laurent's findings point to the same issue.
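
As a first idea for the fix, the streaming state transitions could be
serialized with a device-level mutex, so that the hardware stop (and the DMA
unmapping it triggers) runs exactly once no matter how many nodes issue
VIDIOC_STREAMOFF. Just a sketch of the direction -- the streaming_lock mutex
below does not exist in the driver today:

        /*
         * Sketch: imgu->streaming_lock would be a new mutex in struct
         * imgu_device, also taken around the start path in
         * ipu3_vb2_start_streaming(). Only the first VIDIOC_STREAMOFF
         * that sees streaming == true performs the actual stop; later
         * callers find streaming == false and skip it.
         */
        mutex_lock(&imgu->streaming_lock);
        if (imgu->streaming && ipu3_all_nodes_streaming(imgu, node)) {
                dev_dbg(dev, "IMGU streaming is ready to stop");
                r = imgu_s_stream(imgu, false);
                if (!r)
                        imgu->streaming = false;
        }
        mutex_unlock(&imgu->streaming_lock);

This would also close the window Laurent pointed out between the
imgu->streaming check and the ipu3_all_nodes_streaming() test.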

Let me get back soon with more details.

[snip]



