On 1/10/2022 10:26 PM, Parav Pandit wrote:
From: Jason Wang <jasowang@xxxxxxxxxx>
Sent: Tuesday, January 11, 2022 10:17 AM
I guess in this situation it would be better to defer adding such locking or a wrapper
to a future patch, although right now there are just two additional calls taking the
lock needlessly when vhost_vdpa_get_status is called. Maybe it's just me, but I'm
worried some future patch may call the locked API wrapper thousands of times
unintentionally, and then the overhead becomes quite obvious.
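
To make the overhead concern concrete, here is a minimal sketch of what such a locked
wrapper might look like. The helper name vdpa_get_status_locked is hypothetical and the
cf_mutex field is assumed from this discussion; this is not meant as the final API.

#include <linux/mutex.h>
#include <linux/vdpa.h>

/*
 * Hypothetical locked wrapper: every caller pays a lock/unlock pair
 * even when it only wants a quick status read.
 */
static u8 vdpa_get_status_locked(struct vdpa_device *vdev)
{
        const struct vdpa_config_ops *ops = vdev->config;
        u8 status;

        mutex_lock(&vdev->cf_mutex);    /* serialize with the netlink path */
        status = ops->get_status(vdev);
        mutex_unlock(&vdev->cf_mutex);

        return status;
}

If a future patch called such a wrapper in a hot path instead of the bare op, the extra
lock traffic would add up, which is the worry above.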
I'm fine to remove this if others agree on this.
Ok, but IMHO it might be better to add a comment here, otherwise
it's quite confusing why the lock needs to be held. vhost_vdpa has
already done its own serialization in vhost_vdpa_unlocked_ioctl() through
vhost_dev->mutex.
Right, but they are done at different levels: one is for the vhost_dev,
the other is for the vdpa_dev.
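
To spell out the two levels, here is a rough sketch (simplified pseudologic, not the
actual upstream functions; the cf_mutex field is assumed from this series):

#include <linux/mutex.h>
#include <linux/vdpa.h>
#include "vhost.h"      /* drivers/vhost local header for struct vhost_dev */

/* Level 1: vhost_dev->mutex serializes userspace ioctls on the vhost side. */
static long vhost_level_ioctl_sketch(struct vhost_dev *dev, unsigned int cmd)
{
        long r = 0;

        mutex_lock(&dev->mutex);
        /* dispatch VHOST_* commands, possibly reaching into the vdpa device */
        mutex_unlock(&dev->mutex);
        return r;
}

/* Level 2: the vdpa device's cf_mutex serializes against the netlink path. */
static void vdpa_level_set_config_sketch(struct vdpa_device *vdev,
                                         unsigned int offset,
                                         const void *buf, unsigned int len)
{
        mutex_lock(&vdev->cf_mutex);
        vdev->config->set_config(vdev, offset, buf, len);
        mutex_unlock(&vdev->cf_mutex);
}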
The cf_mutex is introduced to serialize the command line configuration
thread (via netlink) and the upstream driver calls from either the
vhost_dev or the virtio_dev. I don't see a case, even in the future, where the
netlink thread needs to update the virtio status on the fly. Can you
enlighten me on how that would be possible at all?
Sorry for my late response.
Netlink reads the whole config space while it is not being modified by the vhost/virtio driver side.
A better approach is to convert cf_mutex to an rw_sem, which clarifies the code and still ensures that netlink doesn't read bytes while they are being modified.
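
A minimal sketch of what that rw_sem conversion could look like, assuming a hypothetical
cf_rwsem field in struct vdpa_device in place of cf_mutex (names are illustrative only):

#include <linux/rwsem.h>
#include <linux/vdpa.h>

/* Netlink readers can run concurrently while the config is stable. */
static void vdpa_get_config_sketch(struct vdpa_device *vdev, unsigned int offset,
                                   void *buf, unsigned int len)
{
        down_read(&vdev->cf_rwsem);
        vdev->config->get_config(vdev, offset, buf, len);
        up_read(&vdev->cf_rwsem);
}

/* Writers take the lock exclusively so readers never see a partial update. */
static void vdpa_set_config_sketch(struct vdpa_device *vdev, unsigned int offset,
                                   const void *buf, unsigned int len)
{
        down_write(&vdev->cf_rwsem);
        vdev->config->set_config(vdev, offset, buf, len);
        up_write(&vdev->cf_rwsem);
}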
I missed the best timing to ask why it wasn't an rw_sem in the first
place, but I don't mind the multi-reader case too much, so I'm not in a
hurry to make the conversion.
-Siwei
Given that config bytes can be updated at any time and not at field boundaries, reads are not atomic anyway.
The mutex was added when we wanted to modify the fields post creation, but we eventually dropped that idea.
So yes, cf_mutex removal is fine too.
After some thought, I don't see a case. I can think of NEEDS_RESET, but it should
come with a config interrupt, so we're probably fine without the mutex here.