On 2023/12/6 8:27 PM, Paolo Abeni wrote:
On Tue, 2023-12-05 at 19:05 +0800, Heng Qi wrote:
On 2023/12/5 4:35 PM, Jason Wang wrote:
On Tue, Dec 5, 2023 at 4:02 PM Heng Qi <hengqi@xxxxxxxxxxxxxxxxx> wrote:
Currently, access to the ctrl cmd is globally protected via rtnl_lock and
works fine. But if the dim work's access to the ctrl cmd also holds
rtnl_lock, a deadlock may occur due to cancel_work_sync() for the dim work.
Can you explain why?
For example, during the bus unbind operation, the following call chain
occurs:

virtnet_remove -> unregister_netdev -> rtnl_lock[1] -> virtnet_close ->
cancel_work_sync -> virtnet_rx_dim_work -> rtnl_lock[2] (deadlock occurs).
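In skeleton form (an illustrative sketch only, not the actual driver code;
dim_work here stands in for rq->dim.work):

#include <linux/rtnetlink.h>
#include <linux/workqueue.h>

static struct work_struct dim_work;

static void dim_work_fn(struct work_struct *work)
{
	rtnl_lock();	/* [2] blocks forever: rtnl is already held below */
	/* ... push coalescing parameters via the ctrl vq ... */
	rtnl_unlock();
}

static void remove_path(void)	/* what virtnet_remove() effectively does */
{
	rtnl_lock();			/* [1] taken inside unregister_netdev() */
	cancel_work_sync(&dim_work);	/* waits for dim_work_fn() to finish,
					 * which itself waits for rtnl_lock:
					 * neither side can make progress */
	rtnl_unlock();
}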
Therefore, treating the ctrl cmd as a separate object protected by its own
lock is the solution, and the basis for the next patch.
Let's not do that. Reasons are:
1) virtnet_send_command() may wait for cvq commands for an indefinite time
Yes, I took that into consideration. But ndo_set_rx_mode needs an atomic
environment, which rules out a mutex.
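To illustrate the constraint (a sketch simplified from how the core invokes
the callback, cf. dev_set_rx_mode() in net/core/dev.c):

#include <linux/netdevice.h>

/* BHs are disabled across the callback, so ndo_set_rx_mode runs in atomic
 * context and may not take a mutex, since mutex_lock() can sleep.
 */
void dev_set_rx_mode_sketch(struct net_device *dev)
{
	netif_addr_lock_bh(dev);		/* atomic context begins */
	dev->netdev_ops->ndo_set_rx_mode(dev);	/* must not sleep here */
	netif_addr_unlock_bh(dev);
}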
2) holding locks may complicate the future hardening work around cvq
Agreed, but I haven't been able to think of a better way besides passing
the lock.
Do you have any better ideas or suggestions?
What about:
- using the rtnl lock only
- virtnet_close() invokes cancel_work(), without flushing the work
- virtnet_remove() calls flush_work() after unregister_netdev(), outside
the rtnl lock
Should prevent both the deadlock and the UaF.
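Something along these lines (a rough, untested sketch; the loop bounds and
field names are assumptions):

static int virtnet_close(struct net_device *dev)
{
	struct virtnet_info *vi = netdev_priv(dev);
	int i;

	/* ... existing per-queue teardown ... */
	for (i = 0; i < vi->max_queue_pairs; i++)
		cancel_work(&vi->rq[i].dim.work); /* no flushing under rtnl */

	return 0;
}

static void virtnet_remove(struct virtio_device *vdev)
{
	struct virtnet_info *vi = vdev->priv;
	int i;

	unregister_netdev(vi->dev);	/* takes and releases rtnl internally */

	for (i = 0; i < vi->max_queue_pairs; i++)
		flush_work(&vi->rq[i].dim.work); /* outside rtnl: no deadlock,
						  * and the work finishes before
						  * the device is freed (no UaF) */

	/* ... existing teardown ... */
}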
Hi, Paolo and Jason!
Thank you very much for your effective suggestions, but I found another
solution [1], based on the ideas of rtnl_trylock and refill_work, which
works very well:
[1]
+static void virtnet_rx_dim_work(struct work_struct *work)
+{
+	struct dim *dim = container_of(work, struct dim, work);
+	struct receive_queue *rq = container_of(dim,
+			struct receive_queue, dim);
+	struct virtnet_info *vi = rq->vq->vdev->priv;
+	struct net_device *dev = vi->dev;
+	struct dim_cq_moder update_moder;
+	int i, qnum, err;
+
+	/* Give up if rtnl is contended, e.g. while cancel_work_sync() is
+	 * waiting for us under rtnl during unregister: breaks the deadlock.
+	 */
+	if (!rtnl_trylock())
+		return;
+
+	for (i = 0; i < vi->curr_queue_pairs; i++) {
+		rq = &vi->rq[i];
+		dim = &rq->dim;
+		qnum = rq - vi->rq;
+
+		if (!rq->dim_enabled)
+			continue;
+
+		update_moder = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
+		if (update_moder.usec != rq->intr_coal.max_usecs ||
+		    update_moder.pkts != rq->intr_coal.max_packets) {
+			err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, qnum,
+							       update_moder.usec,
+							       update_moder.pkts);
+			if (err)
+				pr_debug("%s: Failed to send dim parameters on rxq%d\n",
+					 dev->name, qnum);
+			dim->state = DIM_START_MEASURE;
+		}
+	}
+
+	rtnl_unlock();
+}
In addition, another optimization [2] was tried, but, probably because the
work is scheduled sparsely, the retry condition was always satisfied, which
hurt performance, so [1] is the final solution:
[2]
+static void virtnet_rx_dim_work(struct work_struct *work)
+{
+	struct dim *dim = container_of(work, struct dim, work);
+	struct receive_queue *rq = container_of(dim,
+			struct receive_queue, dim);
+	struct virtnet_info *vi = rq->vq->vdev->priv;
+	struct net_device *dev = vi->dev;
+	struct dim_cq_moder update_moder;
+	int i, qnum, err, count;
+
+	if (!rtnl_trylock())
+		return;
+retry:
+	count = vi->curr_queue_pairs;
+	for (i = 0; i < vi->curr_queue_pairs; i++) {
+		rq = &vi->rq[i];
+		dim = &rq->dim;
+		qnum = rq - vi->rq;
+		update_moder = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
+		if (update_moder.usec != rq->intr_coal.max_usecs ||
+		    update_moder.pkts != rq->intr_coal.max_packets) {
+			--count;
+			err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, qnum,
+							       update_moder.usec,
+							       update_moder.pkts);
+			if (err)
+				pr_debug("%s: Failed to send dim parameters on rxq%d\n",
+					 dev->name, qnum);
+			dim->state = DIM_START_MEASURE;
+		}
+	}
+
+	if (need_resched()) {
+		rtnl_unlock();
+		schedule();
+	}
+
+	if (count)
+		goto retry;
+
+	rtnl_unlock();
+}
Thanks a lot!
Side note: for this specific case, any functional test with a
CONFIG_LOCKDEP-enabled build should suffice to catch the deadlock scenario
above.
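For reference, a .config fragment along these lines should do it
(illustrative; the exact option set may vary by tree):

# PROVE_LOCKING selects LOCKDEP and reports such lock-ordering cycles
CONFIG_PROVE_LOCKING=y
# also catches sleeping calls made from atomic context
CONFIG_DEBUG_ATOMIC_SLEEP=y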
Cheers,
Paolo