On Thu, Mar 07, 2019 at 10:47:22AM -0500, Michael S. Tsirkin wrote:
> On Wed, Mar 06, 2019 at 02:18:12AM -0500, Jason Wang wrote:
> > +static const struct mmu_notifier_ops vhost_mmu_notifier_ops = {
> > +	.invalidate_range = vhost_invalidate_range,
> > +};
> > +
> >  void vhost_dev_init(struct vhost_dev *dev,
> > 		    struct vhost_virtqueue **vqs, int nvqs, int iov_limit)
> >  {
> 
> I also wonder here: when page is write protected then
> it does not look like .invalidate_range is invoked.
> 
> E.g. mm/ksm.c calls
> 
> mmu_notifier_invalidate_range_start and
> mmu_notifier_invalidate_range_end but not mmu_notifier_invalidate_range.
> 
> Similarly, rmap in page_mkclean_one will not call
> mmu_notifier_invalidate_range.
> 
> If I'm right vhost won't get notified when page is write-protected since you
> didn't install start/end notifiers. Note that end notifier can be called
> with page locked, so it's not as straightforward as just adding a call.
> Writing into a write-protected page isn't a good idea.
> 
> Note that documentation says:
> 	it is fine to delay the mmu_notifier_invalidate_range
> 	call to mmu_notifier_invalidate_range_end() outside the page table lock.
> implying it's called just later.

OK I missed the fact that _end actually calls
mmu_notifier_invalidate_range internally. So that part is fine, but the
fact that you are trying to take the page lock under the VQ mutex and
take the same mutex within the notifier probably means it's broken for
ksm and rmap at least, since these call invalidate with the lock taken.

And generally, Andrea told me offline one cannot take a mutex under
the notifier callback. I CC'd Andrea for why.

That's a separate issue from set_page_dirty when memory is file backed.

It's because of all these issues that I preferred just accessing
userspace memory and handling faults. Unfortunately there does not
appear to exist an API that whitelists a specific driver along the lines
of "I checked this code for speculative info leaks, don't add barriers
on data path please".
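For reference, a non-compilable sketch of what installing start/end
callbacks alongside .invalidate_range could look like. The callback names
and bodies here are hypothetical, not from Jason's patch; the ops struct
layout and the range->blockable field follow include/linux/mmu_notifier.h
circa the v5.0/v5.1 kernels this thread targets, though exact field and
helper names vary across versions:

```
/* Hypothetical sketch; only mmu_notifier_ops and mmu_notifier_range
 * are real kernel types here. */
static int vhost_invalidate_range_start(struct mmu_notifier *mn,
					const struct mmu_notifier_range *range)
{
	/* May be called in contexts that forbid sleeping; per the thread,
	 * taking the VQ mutex here is exactly what is not allowed. */
	if (!range->blockable)
		return -EAGAIN;
	/* ... mark affected mappings stale without blocking locks ... */
	return 0;
}

static void vhost_invalidate_range_end(struct mmu_notifier *mn,
				       const struct mmu_notifier_range *range)
{
	/* ... allow mappings to be re-established; note this can run
	 * with the page locked, as pointed out above ... */
}

static const struct mmu_notifier_ops vhost_mmu_notifier_ops = {
	.invalidate_range_start	= vhost_invalidate_range_start,
	.invalidate_range_end	= vhost_invalidate_range_end,
	.invalidate_range	= vhost_invalidate_range,
};
```

This only illustrates the registration shape; it does not solve the
locking problem discussed above, which is the actual sticking point.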
> -- 
> MST