> From: Michal Kubecek [mailto:mkubecek@xxxxxxx]
>
> > ......
> > +static struct sock *hvsock_find_connected_socket_by_channel(
> > +	const struct vmbus_channel *channel)
> > +{
> > +	struct hvsock_sock *hvsk;
> > +
> > +	list_for_each_entry(hvsk, &hvsock_connected_list, connected_list) {
> > +		if (hvsk->channel == channel)
> > +			return hvsock_to_sk(hvsk);
> > +	}
> > +	return NULL;
> > +}
>
> How does this work from a performance point of view if there are many
> connected sockets and/or a high frequency of new connections? AFAICS most
> other families use a hash table for socket lookup.

Hi Michal,

With the host's current design of the feature, there is an inherent limit on
the number of per-guest connections: a guest can't have more than 2048
connections, because each connection takes a VMBus channel ID and at most
2048 channel IDs are supported per guest.

And I don't think the lookup function is a bottleneck, because the whole
process of creating or closing a connection requires several extra rounds of
interaction between the host and the guest, which take far more cycles than
the lookup here.

> > +static void get_ringbuffer_rw_status(struct vmbus_channel *channel,
> > +				     bool *can_read, bool *can_write)
> > ......
> > +	if (can_write) {
> > +		hv_get_ringbuffer_availbytes(&channel->outbound,
> > +					     &dummy,
> > +					     &avl_write_bytes);
> > +
> > +		/* We only write if there is enough space */
> > +		*can_write = avl_write_bytes > HVSOCK_PKT_LEN(PAGE_SIZE);
>
> I'm not sure where this comes from, but is this really supposed to be
> PAGE_SIZE (not the fixed 4KB PAGE_SIZE_4K)?

Thanks for pointing this out! I'll replace it with PAGE_SIZE_4K.

> > +	/* see get_ringbuffer_rw_status() */
> > +	set_channel_pending_send_size(channel, HVSOCK_PKT_LEN(PAGE_SIZE) + 1);
>
> Same question.

I'll replace it with PAGE_SIZE_4K too.

> > +static int hvsock_create_sock(struct net *net, struct socket *sock,
> > +			      int protocol, int kern)
> > +{
> > +	struct sock *sk;
> > +
> > +	if (!capable(CAP_SYS_ADMIN) && !capable(CAP_NET_ADMIN))
> > +		return -EPERM;
>
> Looks like any application wanting to use hyper-v sockets will need
> rather high privileges. It would make sense if these sockets were
> reserved for privileged tasks like VM management. But according to the
> commit message, hv_sock is supposed to be used for regular application
> to application communication. Requiring CAP_{SYS,NET}_ADMIN looks like
> overkill to me.

I agree with you. Let me remove this check.

BTW, the check was supposed to prevent regular apps from using the socket,
because the host's current design has a drawback: a connection consumes at
least 40KB of unswappable memory for the host<->guest shared ring, and we
don't want malicious regular apps to be able to consume all the memory.

Later I realized the per-guest number of connections can't exceed 2048, so
the host<->guest rings consume at most 2K * 40KB = 80MB of memory, and this
isn't a big concern to me.

Thanks,
-- Dexuan

_______________________________________________
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxx
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel
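
Just to make the alternative concrete, below is a minimal, purely
illustrative sketch of what a channel-keyed hash-table lookup could look
like using the kernel's <linux/hashtable.h> helpers, in case the linear
list ever did become a problem. The struct layout and all of the
hvsock_ht_* names are made up for this example (they are not from the
patch), and the locking that protects the connected list is omitted:

#include <linux/hashtable.h>
#include <linux/hyperv.h>	/* struct vmbus_channel */

#define HVSOCK_HASH_BITS	8	/* 256 buckets for at most 2048 sockets */

static DEFINE_HASHTABLE(hvsock_connected_ht, HVSOCK_HASH_BITS);

/* Hypothetical stand-in for the real struct hvsock_sock. */
struct hvsock_sock_sketch {
	struct vmbus_channel *channel;
	struct hlist_node ht_node;	/* hashtable linkage */
};

static void hvsock_ht_add(struct hvsock_sock_sketch *hvsk)
{
	/* Key the entry on the channel pointer value. */
	hash_add(hvsock_connected_ht, &hvsk->ht_node,
		 (unsigned long)hvsk->channel);
}

static struct hvsock_sock_sketch *
hvsock_ht_find(const struct vmbus_channel *channel)
{
	struct hvsock_sock_sketch *hvsk;

	/* Walk only the bucket the channel pointer hashes to. */
	hash_for_each_possible(hvsock_connected_ht, hvsk, ht_node,
			       (unsigned long)channel) {
		if (hvsk->channel == channel)
			return hvsk;
	}
	return NULL;
}

static void hvsock_ht_del(struct hvsock_sock_sketch *hvsk)
{
	hash_del(&hvsk->ht_node);
}

With the 2048-connection cap the chains stay short either way, so, as said
above, I don't expect the lookup to matter in practice.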
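
And for the two PAGE_SIZE spots, the intended change is just swapping in the
fixed 4KB constant, roughly like this (assuming PAGE_SIZE_4K is the 4KB
define Michal refers to; this shows the shape of the fix, not the final
patch):

	/* get_ringbuffer_rw_status(): only write if there is room for a
	 * 4KB payload, independent of the guest's PAGE_SIZE */
	*can_write = avl_write_bytes > HVSOCK_PKT_LEN(PAGE_SIZE_4K);

	/* and the matching write-notification threshold */
	set_channel_pending_send_size(channel,
				      HVSOCK_PKT_LEN(PAGE_SIZE_4K) + 1);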