Re: WARNING in __mmdrop

On 2019/7/29 4:59 PM, Michael S. Tsirkin wrote:
On Mon, Jul 29, 2019 at 01:54:49PM +0800, Jason Wang wrote:
On 2019/7/26 9:49 PM, Michael S. Tsirkin wrote:
Ok, let me retry if necessary (but I do remember I ended up with deadlocks
last time I tried).
Ok, I played with this a little, and it works so far. Will do more testing
tomorrow.

One reason could be that I switched from get_user_pages_fast() to
__get_user_pages_fast(), which doesn't need mmap_sem.
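
For reference, this is roughly the difference (an illustrative helper only, not code from the patch; the atomic variant never takes mmap_sem and never sleeps, at the cost of failing where the sleeping variant would succeed):

/* Illustrative only: pin one userspace page without touching mmap_sem. */
static int pin_one_page_atomic(unsigned long uaddr, struct page **page)
{
        /*
         * __get_user_pages_fast(start, nr_pages, write, pages) does a
         * lockless page-table walk and returns the number of pages pinned.
         */
        if (__get_user_pages_fast(uaddr, 1, 1, page) != 1)
                return -EFAULT;

        return 0;
}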

Thanks
OK, that sounds good. If we also set a flag to make
vhost_exceeds_weight() exit, then I think it will all be good.

After some experiments, I came up with two methods:

1) Switch to vq->mutex; then we must take the vq lock during range
checking, as in the sketch below (but I don't see an obvious slowdown for
16 vcpus + 16 queues). Setting a flag during the weight check should work,
but it still can't address the worst case: waiting for a page to be swapped
in. Is this acceptable?

2) Keep the current RCU, but replace synchronize_rcu() with
vhost_work_flush(). The worst case is the same as in 1), but we can check
the range without holding any locks.
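
For option 1, the range check in the MMU notifier would look roughly like this (a sketch only: vhost_map_overlaps() and vhost_reset_vq_maps() are placeholder names, and dev->mmu_notifier is assumed from the metadata-acceleration series):

/* Sketch only: serialize the notifier's range check on vq->mutex. */
static int vhost_invalidate_range_start(struct mmu_notifier *mn,
                                        const struct mmu_notifier_range *range)
{
        struct vhost_dev *dev = container_of(mn, struct vhost_dev,
                                             mmu_notifier);
        int i;

        /* A sleeping lock is only allowed if the notifier may block. */
        if (!mmu_notifier_range_blockable(range))
                return -EAGAIN;

        for (i = 0; i < dev->nvqs; i++) {
                struct vhost_virtqueue *vq = dev->vqs[i];

                mutex_lock(&vq->mutex);
                /* Does the invalidated range overlap the cached metadata? */
                if (vhost_map_overlaps(vq, range->start, range->end))
                        vhost_reset_vq_maps(vq);  /* placeholder teardown */
                mutex_unlock(&vq->mutex);
        }

        return 0;
}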

Which one do you prefer?

Thanks
I would rather we start with 1 and switch to 2 after we
can show some gain.

But the worst case needs to be addressed.


Yes.


How about sending a signal to
the vhost thread?  We will need to fix up error handling (I think that
at the moment it will error out in that case, handling this as EFAULT -
and we don't want to drop packets if we can help it, and we surely don't
want to enter any error states.  In particular, it might be especially
tricky if we wrote into userspace memory and are now trying to log the
write.  I guess we can disable the optimization if the log is enabled?).


This may work, but it requires a lot of changes; and that is really the price of using the vq mutex. The critical section should be rather small, e.g. just inside the memory accessors.

I wonder whether we could just do the synchronization ourselves, like:

static inline void vhost_inc_vq_ref(struct vhost_virtqueue *vq)
{
        int ref = READ_ONCE(vq->ref);

        /* Mark the vq busy before touching guest memory. */
        WRITE_ONCE(vq->ref, ref + 1);
        /* Intended to order the ref update before the accesses that follow. */
        smp_rmb();
}

static inline void vhost_dec_vq_ref(struct vhost_virtqueue *vq)
{
        int ref = READ_ONCE(vq->ref);

        /* Order the preceding accesses to guest memory before dropping ref. */
        smp_wmb();
        WRITE_ONCE(vq->ref, ref - 1);
}

static inline void vhost_wait_for_ref(struct vhost_virtqueue *vq)
{
        /* Busy-wait until the worker has left its critical section. */
        while (READ_ONCE(vq->ref))
                ;
        mb();
}
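
For illustration, the intended usage would be something like the following (only the three helpers above are from the proposal; vq->avail_map and the function names here are made up):

/* vhost worker data path: mark the vq busy around a metadata access. */
static u16 vhost_read_avail_idx(struct vhost_virtqueue *vq)
{
        u16 idx = 0;

        vhost_inc_vq_ref(vq);
        /* vq->avail_map stands in for the direct-mapped avail ring. */
        if (likely(vq->avail_map))
                idx = READ_ONCE(vq->avail_map->idx);
        vhost_dec_vq_ref(vq);

        return idx;
}

/* Invalidate path: stop new users, then drain the in-flight ones. */
static void vhost_invalidate_avail_map(struct vhost_virtqueue *vq)
{
        WRITE_ONCE(vq->avail_map, NULL);
        vhost_wait_for_ref(vq);
        /* Now it is safe to unmap the pages backing vq->avail_map. */
}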


Or should we use smp_load_acquire()/smp_store_release() instead?
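
Something like this, perhaps (a sketch only; note that release/acquire alone is probably not enough on the increment side, since a release store does not order the ref update against the loads that follow it, so a full barrier is kept there):

static inline void vhost_inc_vq_ref(struct vhost_virtqueue *vq)
{
        WRITE_ONCE(vq->ref, vq->ref + 1);
        /* Order the ref store before the following accesses to guest memory. */
        smp_mb();
}

static inline void vhost_dec_vq_ref(struct vhost_virtqueue *vq)
{
        /* Publish the preceding guest-memory accesses before dropping ref. */
        smp_store_release(&vq->ref, vq->ref - 1);
}

static inline void vhost_wait_for_ref(struct vhost_virtqueue *vq)
{
        /* Pairs with the release store in vhost_dec_vq_ref(). */
        while (smp_load_acquire(&vq->ref))
                cpu_relax();
}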

Thanks




