On Mon, Mar 6, 2023 at 7:33 PM Eugenio Perez Martin <eperezma@xxxxxxxxxx> wrote:
>
> On Mon, Mar 6, 2023 at 4:42 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> >
> > On Fri, Mar 3, 2023 at 4:58 PM Eugenio Perez Martin <eperezma@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Mar 3, 2023 at 4:48 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > >
> > > >
> > > > On 2023/3/2 03:32, Eugenio Perez Martin wrote:
> > > > > On Mon, Feb 27, 2023 at 9:20 AM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > > >> On Mon, Feb 27, 2023 at 4:15 PM Jason Wang <jasowang@xxxxxxxxxx> wrote:
> > > > >>>
> > > > >>> On 2023/2/24 23:54, Eugenio Pérez wrote:
> > > > >>>> At this moment a vdpa net device must be initialized with SVQ in
> > > > >>>> order to be migratable, and the initialization code verifies some
> > > > >>>> conditions. If the device is not initialized with the x-svq
> > > > >>>> parameter, it will not expose _F_LOG, so the vhost subsystem will
> > > > >>>> block VM migration from its initialization onwards.
> > > > >>>>
> > > > >>>> The next patches change this, so we need to verify the migration
> > > > >>>> conditions differently.
> > > > >>>>
> > > > >>>> QEMU only supports a subset of net features in SVQ, and it cannot
> > > > >>>> migrate state that it cannot track or restore in the destination.
> > > > >>>> Add a migration blocker if the device offers an unsupported
> > > > >>>> feature.
> > > > >>>>
> > > > >>>> Signed-off-by: Eugenio Pérez <eperezma@xxxxxxxxxx>
> > > > >>>> ---
> > > > >>>> v3: add migration blocker properly so vhost_dev can handle it.
> > > > >>>> ---
> > > > >>>>  net/vhost-vdpa.c | 12 ++++++++----
> > > > >>>>  1 file changed, 8 insertions(+), 4 deletions(-)
> > > > >>>>
> > > > >>>> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> > > > >>>> index 4f983df000..094dc1c2d0 100644
> > > > >>>> --- a/net/vhost-vdpa.c
> > > > >>>> +++ b/net/vhost-vdpa.c
> > > > >>>> @@ -795,7 +795,8 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> > > > >>>>                                            int nvqs,
> > > > >>>>                                            bool is_datapath,
> > > > >>>>                                            bool svq,
> > > > >>>> -                                          struct vhost_vdpa_iova_range iova_range)
> > > > >>>> +                                          struct vhost_vdpa_iova_range iova_range,
> > > > >>>> +                                          uint64_t features)
> > > > >>>>  {
> > > > >>>>      NetClientState *nc = NULL;
> > > > >>>>      VhostVDPAState *s;
> > > > >>>> @@ -818,7 +819,10 @@ static NetClientState *net_vhost_vdpa_init(NetClientState *peer,
> > > > >>>>      s->vhost_vdpa.shadow_vqs_enabled = svq;
> > > > >>>>      s->vhost_vdpa.iova_range = iova_range;
> > > > >>>>      s->vhost_vdpa.shadow_data = svq;
> > > > >>>> -    if (!is_datapath) {
> > > > >>>> +    if (queue_pair_index == 0) {
> > > > >>>> +        vhost_vdpa_net_valid_svq_features(features,
> > > > >>>> +                                          &s->vhost_vdpa.migration_blocker);
> > > > >>>
> > > > >>> Since we do validation at initialization, is it necessary to
> > > > >>> validate once again in other places?
> > > > >> Ok, after reading patch 13, I think the question is:
> > > > >>
> > > > >> The validation seems to be independent of net; can we validate it
> > > > >> once during vhost_vdpa_init()?
> > > > >>
> > > > > vhost_vdpa_net_valid_svq_features also checks for net features. In
> > > > > particular, all the non-transport features must be in
> > > > > vdpa_svq_device_features.
> > > > >
> > > > > This is how we guarantee that the device / guest will never negotiate
> > > > > things like VLAN filtering support, as SVQ still does not know how to
> > > > > restore it at the destination.
> > > > >
> > > > > In the VLAN filtering case CVQ is needed to restore the VLAN state,
> > > > > so it is covered by patch 11/15. But other future features may need
> > > > > support for restoring them in the destination.
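Just to check my understanding of the check you describe: it masks out
the transport bits and then requires every remaining device feature bit
to be in the whitelist, installing a migration blocker otherwise. A
minimal self-contained sketch of that idea (the bit positions and names
below are made up for illustration; the real ones live in the VirtIO
headers and in net/vhost-vdpa.c):

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative bit positions only; transport features occupy the
 * range [TRANSPORT_F_START, TRANSPORT_F_END). */
#define TRANSPORT_F_START 28
#define TRANSPORT_F_END   38

/* Whitelist of device features SVQ knows how to track and restore. */
static const uint64_t svq_device_features =
    (1ULL << 0) |   /* stand-in for VIRTIO_NET_F_CSUM */
    (1ULL << 5);    /* stand-in for VIRTIO_NET_F_MAC */

/* Return true if every non-transport feature the device offers is in
 * the whitelist; otherwise report the offending bits and return false
 * (the caller would install the migration blocker at this point). */
static bool valid_svq_features(uint64_t features)
{
    uint64_t transport_mask = ((1ULL << TRANSPORT_F_END) - 1) &
                              ~((1ULL << TRANSPORT_F_START) - 1);
    uint64_t invalid = features & ~transport_mask & ~svq_device_features;

    if (invalid) {
        fprintf(stderr, "SVQ cannot migrate features 0x%" PRIx64 "\n",
                invalid);
        return false;
    }
    return true;
}

int main(void)
{
    /* A device offering a whitelisted bit plus an unknown bit 7. */
    return valid_svq_features((1ULL << 0) | (1ULL << 7)) ? 0 : 1;
}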
> > > >
> > > > I wonder how hard it would be to have general validation code and
> > > > let the net-specific code advertise a blacklist, to avoid code
> > > > duplication.
> > > >
> > >
> > > A blacklist does not work here, because I don't know if SVQ needs
> > > changes for future feature bits that are not yet in the standard, or
> > > only proposed to it.
> >
> > Could you give me an example of this?
> >
>
> Maybe I'm not understanding your blacklist proposal. I'm going to
> explain my thoughts on it with a few examples.
>
> SVQ was merged in QEMU before VIRTIO_F_RING_RESET, and there are some
> proposals like VIRTIO_NET_F_NOTF_COAL or VIRTIO_F_PARTIAL_ORDER in the
> virtio-comment list.
>
> If we had gone with the blacklist approach back then, the blacklist
> would have contained all the features of the VirtIO standard except
> the ones we do support in SVQ, wouldn't it? Then VIRTIO_F_RING_RESET
> would get merged, and SVQ would claim to support it, but that would
> not be true.

To solve this, could the general SVQ code have a whitelist for
transport features?

>
> The same can happen with the other two features.
> VIRTIO_NET_F_NOTF_COAL will be required to migrate coalescing
> parameters, but it is not supported at the moment. _F_PARTIAL_ORDER
> will also require changes to the hw/virtio/vhost-shadow-virtqueue.c
> code, since SVQ is the "driver" in charge of the SVQ vring.
>
> Most features will only require small changes to support sending the
> CVQ message at the destination and to track the state change by
> parsing CVQ, or no changes at all (like supporting
> VIRTIO_NET_F_SPEED_DUPLEX). But SVQ cannot claim it supports them
> anyway.
>
> The only alternative I can think of is to actually block new proposals
> (like VIRTIO_F_RING_RESET in the past) until they either make the
> changes in SVQ too or add a blacklist item. I think that is too
> intrusive, especially at this stage of SVQ, where not even all QEMU
> features are supported. Maybe we can reevaluate it in Q3 or Q4, for
> example?

Yes, the change is not a must; I just wanted to see if we can simplify
anything.

> >
> > >
> > > Regarding the code duplication, do you mean to validate transport
> > > features and net-specific features in one shot, instead of having a
> > > dedicated function for SVQ transport?
> >
> > Nope.
> >
>
> Please expand, maybe I can do something to solve it :).

Sure, I just want to make sure we are talking about the same thing
before I can expand :)

Thanks

>
> Thanks!
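P.S. To make the transport whitelist idea above concrete, the split I
have in mind is roughly the following: the generic SVQ code owns a
whitelist of transport bits, and each device type (net, block, ...)
passes in its own device-feature whitelist, so only the whitelists are
per-device while the check itself is shared. All names and bit values
here are made up; this is only a sketch of the shape, not real QEMU
code:

#include <stdbool.h>
#include <stdint.h>

/* Generic SVQ code owns the transport whitelist... */
static const uint64_t svq_transport_features =
    (1ULL << 32) |  /* stand-in for VIRTIO_F_VERSION_1 */
    (1ULL << 35);   /* stand-in for VIRTIO_F_IN_ORDER */

/* ...and each device type supplies its own device-feature whitelist,
 * e.g. net would pass something like vdpa_svq_device_features. */
static bool svq_valid_features(uint64_t offered, uint64_t device_allowed)
{
    return (offered & ~(svq_transport_features | device_allowed)) == 0;
}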