On Fri, May 14, 2021 at 11:55:29AM -0700, Jiang Wang . wrote:
>On Fri, May 14, 2021 at 8:17 AM Stefano Garzarella <sgarzare@xxxxxxxxxx> wrote:
>>On Thu, May 13, 2021 at 04:26:03PM -0700, Jiang Wang . wrote:
>>[...]
>>>I see. I will add some limit to dgram packets. Also, when the virtqueues
>>>are shared between stream and dgram, both of them need to grab a lock
>>>before using the virtqueue, so one will not completely block another one.
>>
>>I'm not worried about the concurrent access, which we definitely need
>>to handle with a lock, but more about the uncontrolled packet sending
>>that dgram might have, flooding the queues and preventing others from
>>communicating.
>
>That is a valid concern. Let me explain how I would handle it if we
>don't add two new virtqueues. For dgram, I also add a dgram_send_pkt_list,
>which is similar to the send_pkt_list used for stream (and seqpacket), but
>with one difference: dgram_send_pkt_list has a maximum size, and we keep
>track of how many pkts are on the list. The counter
>(dgram_send_pkt_list_size) is incremented when a packet is added to the
>list and decremented when a packet is removed from the list and added to
>the virtqueue. In virtio_transport_send_pkt, if the current
>dgram_send_pkt_list_size is equal to the maximum (let's say 128), we do
>not add the packet to dgram_send_pkt_list and return an error to the
>application.
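
If I understand correctly, the accounting would look roughly like this
(a minimal sketch; the struct, lock, and error code here are my
assumptions, not the actual patch):

#include <linux/errno.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/virtio_vsock.h>

#define DGRAM_SEND_PKT_LIST_MAX 128        /* "let's say 128" */

struct dgram_tx_state {
        struct list_head dgram_send_pkt_list;   /* pending dgram pkts */
        int dgram_send_pkt_list_size;           /* current list length */
        spinlock_t lock;                        /* protects both fields */
};

static int dgram_queue_pkt(struct dgram_tx_state *tx,
                           struct virtio_vsock_pkt *pkt)
{
        int ret = 0;

        spin_lock_bh(&tx->lock);
        if (tx->dgram_send_pkt_list_size >= DGRAM_SEND_PKT_LIST_MAX) {
                /* List is full: return an error to the application. */
                ret = -ENOBUFS;
        } else {
                list_add_tail(&pkt->list, &tx->dgram_send_pkt_list);
                tx->dgram_send_pkt_list_size++;
        }
        spin_unlock_bh(&tx->lock);

        return ret;
}

/* The send worker does the reverse: it decrements
 * dgram_send_pkt_list_size when it moves a pkt from the list to the
 * virtqueue. */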

For stream sockets we have the send_pkt_list and the send worker because
the virtqueue can be full and the transmitter needs to wait for available
slots, since we can't discard packets.

For dgram I think we don't need this, so we can avoid the
dgram_send_pkt_list and directly enqueue packets in the virtqueue.
If there are no slots available, we can simply drop the packet.
In this way we don't have to impose any limit other than the available
space in the virtqueue.
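
Something like this (a rough sketch; the function name is illustrative,
and error/zero-length handling is omitted):

#include <linux/scatterlist.h>
#include <linux/virtio.h>
#include <linux/virtio_vsock.h>

static void dgram_send_or_drop(struct virtqueue *vq,
                               struct virtio_vsock_pkt *pkt)
{
        struct scatterlist hdr_sg, buf_sg, *sgs[2];

        sg_init_one(&hdr_sg, &pkt->hdr, sizeof(pkt->hdr));
        sg_init_one(&buf_sg, pkt->buf, pkt->len);
        sgs[0] = &hdr_sg;
        sgs[1] = &buf_sg;

        /* Both sgs are device-readable (TX). */
        if (virtqueue_add_sgs(vq, sgs, 2, 0, pkt, GFP_ATOMIC) < 0) {
                /* No slot available: connection-less, so just drop. */
                virtio_transport_free_pkt(pkt);
                return;
        }

        virtqueue_kick(vq);
}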

>In this way, the number of pending dgram pkts to be sent is limited.
>Then both stream and dgram sockets will compete to hold the lock
>for the tx virtqueue. Depending on the Linux scheduler, this competition
>will be somewhat fair. As a result, dgram will not block stream
>completely; it will compete and share the virtqueue with stream, but
>stream can still send some pkts.
>
>Basically, the virtqueue becomes a first-come, first-served resource for
>stream and dgram. When both stream and dgram applications have lots of
>data to send, dgram_send_pkt_list and send_pkt_list will still be of
>limited size, and each will have some chance to send out its data via
>the virtqueue. Does this address your concern?
>
>>So having 2 dedicated queues could avoid a credit mechanism at all for
>>connection-less sockets, and the receiver simply discards packets that
>>it can't handle.
>
>With 2 dedicated queues, we still need some kind of accounting for
>dgram, like the dgram_send_pkt_list_size I mentioned above. Otherwise,
>it will cause OOM. It is not a real credit mechanism, but the code is
>similar with or without 2 dedicated queues in my current prototype.

I think in both cases we don't need any accounting, since we can simply
drop packets when the virtqueue is full.

It's still not clear to me where the OOM may occur. Can you elaborate on
this?

>For the receiver discarding packets part, could you explain more? I
>think the code for the receiver discarding pkts is also the same with
>or without 2 dedicated queues.

Yep.

>Both will use can_enqueue = virtio_transport_inc_rx_pkt(vvs, pkt);
>to check if a packet should be discarded or not.

I don't think we can use virtio_transport_inc_rx_pkt() for dgram.
It is based on the credit mechanism (e.g. buf_alloc) that works when
you have a connection-oriented socket.
With dgram you can have multiple transmitters for a single socket, so how
can we exchange that information?
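
For reference, virtio_transport_inc_rx_pkt() is currently just a
per-socket credit check, roughly:

static bool virtio_transport_inc_rx_pkt(struct virtio_vsock_sock *vvs,
                                        struct virtio_vsock_pkt *pkt)
{
        /* buf_alloc is negotiated with a single connected peer; with
         * multiple dgram transmitters there is no such shared state. */
        if (vvs->rx_bytes + pkt->len > vvs->buf_alloc)
                return false;

        vvs->rx_bytes += pkt->len;
        return true;
}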

If we reuse the VMCI implementation with skbuff, sk_receive_skb()
already checks whether sk_rcvbuf is full and discards the packet in
that case, so we don't need an internal rx_queue.
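
A rough sketch of that receive path, modeled on the VMCI transport's
vmci_transport_recv_dgram_cb() (the function name and buffer handling
here are illustrative):

#include <linux/skbuff.h>
#include <net/sock.h>

static int dgram_deliver(struct sock *sk, const void *buf, size_t len)
{
        struct sk_buff *skb;

        skb = alloc_skb(len, GFP_ATOMIC);
        if (!skb)
                return -ENOMEM;

        skb_put_data(skb, buf, len);

        /* sk_receive_skb() frees the skb itself when sk_rcvbuf would
         * be exceeded, so no transport-private rx_queue is needed. */
        return sk_receive_skb(sk, skb, 0);
}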

Maybe someday we should convert stream to skbuff too; it would make our
lives easier.

Thanks,
Stefano