On Thu, Oct 12, 2023 at 9:43 AM Xianting Tian <xianting.tian@xxxxxxxxxxxxxxxxx> wrote:
>
> The cgroup attach work and the dev flush work are both added to the dev
> work list in vhost_attach_cgroups() when setting the dev owner:
>
>     static int vhost_attach_cgroups(struct vhost_dev *dev)
>     {
>         struct vhost_attach_cgroups_struct attach;
>
>         attach.owner = current;
>         vhost_work_init(&attach.work, vhost_attach_cgroups_work);
>         vhost_work_queue(dev, &attach.work);   // add cgroup attach work
>         vhost_work_dev_flush(dev);             // add dev flush work
>         return attach.ret;
>     }
>
> And the dev kworker is woken up to handle the two works in
> vhost_worker():
>
>     node = llist_del_all(&dev->work_list);
>     node = llist_reverse_order(node);
>     llist_for_each_entry_safe {
>         work->fn(work);
>     }
>
> Because the list is reversed before processing in vhost_worker(), it is
> possible that the dev flush work is processed before the cgroup attach
> work.

This sounds weird.

It's an llist, not a list, so a new entry is added at the head; that is
why we need llist_reverse_order() to recover the submission order.

Have you ever reproduced these issues?

Thanks

> If so, vhost_attach_cgroups() may return "attach.ret" before the cgroup
> attach work has been handled, but "attach.ret" is then a random value
> because it lives on the stack.
>
> A possible fix may be:
>
>     static int vhost_attach_cgroups(struct vhost_dev *dev)
>     {
>         struct vhost_attach_cgroups_struct attach;
>
>         attach.ret = 0;
>         attach.owner = current;
>         vhost_work_init(&attach.work, vhost_attach_cgroups_work);
>         vhost_work_queue(dev, &attach.work);
>         vhost_work_dev_flush(dev);
>         return attach.ret;
>     }
>
> So this fix just initializes attach.ret to 0; it may not be the final
> fix. We just want you experts to know this issue exists; we hit it
> recently in our testing.
>
> The issue also exists in many stable branches.
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
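
[Editor's note: for readers unfamiliar with the llist behaviour discussed
above, below is a minimal userspace sketch of the ordering argument. It is
not the kernel code: node, push_head() and reverse() are hypothetical
stand-ins for llist_node, llist_add() and llist_reverse_order(), used only
to illustrate why works queued attach-then-flush come back in that same
order after the reversal in vhost_worker().]

    /*
     * Userspace model of the llist semantics assumed above:
     * entries are pushed at the head (LIFO), then the list is
     * reversed once before processing to restore FIFO order.
     */
    #include <stdio.h>
    #include <stddef.h>

    struct node {
        const char *name;
        struct node *next;
    };

    /* Stand-in for llist_add(): push the new entry at the head. */
    static void push_head(struct node **head, struct node *n)
    {
        n->next = *head;
        *head = n;
    }

    /* Stand-in for llist_reverse_order(): reverse the detached list. */
    static struct node *reverse(struct node *head)
    {
        struct node *prev = NULL;

        while (head) {
            struct node *next = head->next;
            head->next = prev;
            prev = head;
            head = next;
        }
        return prev;
    }

    int main(void)
    {
        struct node attach = { .name = "cgroup attach work" };
        struct node flush  = { .name = "dev flush work" };
        struct node *list = NULL;

        /* Queueing order in vhost_attach_cgroups(): attach, then flush. */
        push_head(&list, &attach);   /* list: attach */
        push_head(&list, &flush);    /* list: flush -> attach (LIFO) */

        /* vhost_worker() reverses before processing, restoring FIFO. */
        for (struct node *n = reverse(list); n; n = n->next)
            printf("process: %s\n", n->name);
        /* Prints "cgroup attach work" first, then "dev flush work". */

        return 0;
    }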