On Tue, May 22, 2018 at 11:50:29AM +0800, Jason Wang wrote:
>
>
> On May 21, 2018 at 22:42, Michael S. Tsirkin wrote:
> > On Mon, May 21, 2018 at 10:38:10AM +0800, Jason Wang wrote:
> > > On May 18, 2018 at 17:24, Jason Wang wrote:
> > > > On May 17, 2018 at 21:45, DaeRyong Jeong wrote:
> > > > > We report the crash: KASAN: use-after-free Read in vhost_chr_write_iter
> > > > >
> > > > > This crash has been found in v4.17-rc1 using RaceFuzzer (a modified
> > > > > version of Syzkaller), which we describe more at the end of this
> > > > > report. Our analysis shows that the race occurs when invoking two
> > > > > syscalls concurrently, write$vnet and ioctl$VHOST_RESET_OWNER.
> > > > >
> > > > >
> > > > > Analysis:
> > > > > We think the concurrent execution of vhost_process_iotlb_msg() and
> > > > > vhost_dev_cleanup() causes the crash.
> > > > > Both functions can run concurrently (please see the call sequence
> > > > > below), and possibly, there is a race on dev->iotlb.
> > > > > If the switch occurs right after vhost_dev_cleanup() frees
> > > > > dev->iotlb, vhost_process_iotlb_msg() still sees the non-null value
> > > > > and keeps executing without returning -EFAULT. Consequently, a
> > > > > use-after-free occurs.
> > > > >
> > > > >
> > > > > Thread interleaving:
> > > > > CPU0 (vhost_process_iotlb_msg)       CPU1 (vhost_dev_cleanup)
> > > > > (In the case of both VHOST_IOTLB_UPDATE and
> > > > > VHOST_IOTLB_INVALIDATE)
> > > > > =====                                =====
> > > > >                                      vhost_umem_clean(dev->iotlb);
> > > > > if (!dev->iotlb) {
> > > > >         ret = -EFAULT;
> > > > >         break;
> > > > > }
> > > > >                                      dev->iotlb = NULL;
> > > > >
> > > > >
> > > > > Call sequence:
> > > > > CPU0
> > > > > =====
> > > > > vhost_net_chr_write_iter
> > > > >     vhost_chr_write_iter
> > > > >         vhost_process_iotlb_msg
> > > > >
> > > > > CPU1
> > > > > =====
> > > > > vhost_net_ioctl
> > > > >     vhost_net_reset_owner
> > > > >         vhost_dev_reset_owner
> > > > >             vhost_dev_cleanup
> > > > Thanks a lot for the analysis.
> > > >
> > > > This could be addressed by simply protecting it with the dev mutex.
> > > >
> > > > Will post a patch.
> > > >
> > > Could you please help to test the attached patch? I've done some smoke
> > > test.
> > >
> > > Thanks
> > >
> > > >From 88328386f3f652e684ee33dc4cf63dcaed871aea Mon Sep 17 00:00:00 2001
> > > From: Jason Wang <jasowang@xxxxxxxxxx>
> > > Date: Fri, 18 May 2018 17:33:27 +0800
> > > Subject: [PATCH] vhost: synchronize IOTLB message with dev cleanup
> > >
> > > DaeRyong Jeong reports a race between vhost_dev_cleanup() and
> > > vhost_process_iotlb_msg():
> > >
> > > Thread interleaving:
> > > CPU0 (vhost_process_iotlb_msg)       CPU1 (vhost_dev_cleanup)
> > > (In the case of both VHOST_IOTLB_UPDATE and
> > > VHOST_IOTLB_INVALIDATE)
> > > =====                                =====
> > >                                      vhost_umem_clean(dev->iotlb);
> > > if (!dev->iotlb) {
> > >         ret = -EFAULT;
> > >         break;
> > > }
> > >                                      dev->iotlb = NULL;
> > >
> > > The reason is that we don't synchronize between them; fix this by
> > > protecting vhost_process_iotlb_msg() with the dev mutex.
> > >
> > > Reported-by: DaeRyong Jeong <threeearcat@xxxxxxxxx>
> > > Fixes: 6b1e6cc7855b0 ("vhost: new device IOTLB API")
> > Long term we might want to move the iotlb into the vqs
> > so that messages can be processed in parallel.
> > Not sure how to do it yet.
> >
>
> Then we probably need to extend the IOTLB msg to have a queue idx. But I
> think it would probably only help if we split tx/rx into separate
> processes.
>
> Thanks

3 mutex locks on each access isn't pretty even if done by a single
process, but yes - it might be more important for scsi.

-- 
MST
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization