Re: [PATCH net-next v11 06/23] ovpn: introduce the ovpn_peer object

On 21/11/2024 00:22, Sergey Ryazanov wrote:
On 13.11.2024 12:03, Sabrina Dubroca wrote:
2024-11-13, 03:37:13 +0200, Sergey Ryazanov wrote:
On 12.11.2024 19:31, Sabrina Dubroca wrote:
2024-11-10, 15:38:27 +0200, Sergey Ryazanov wrote:
On 29.10.2024 12:47, Antonio Quartulli wrote:
An ovpn_peer object holds the whole status of a remote peer
(regardless of whether it is a server or a client).

This includes status for crypto, tx/rx buffers, napi, etc.

Only support for one peer is introduced (P2P mode).
Multi-peer support is introduced with a later patch.

Reviewing the peer creation/destruction code, I came to a general question.
Did you consider keeping a single P2P peer in the peers table as well?

Looks like such an approach could greatly simplify the code by dropping all these
'switch (ovpn->mode)' checks and implementing unified peer management. The 'peer'
field in the main private data structure can be kept to accelerate lookups, while
still using the peers table for management tasks like removing all the peers on
interface teardown.

It would save a few 'switch(mode)', but force every client to allocate
the hashtable for no reason at all. That tradeoff doesn't look very
beneficial to me, the P2P-specific code is really simple. And if you
keep ovpn->peer to make lookups faster, you're not removing that many
'switch(mode)'.

Looking back at the review I have done, I can retrospectively conclude that I
personally do not like short 'switch' statements and special handlers :)

Seriously, this module has the highest density of switches per KLOC I have seen so
far, and a major part of them is dedicated to handling the special case of a P2P
connection.

I think it's fine. Either way there will be two implementations of
whatever mode-dependent operation needs to be done. switch doesn't
make it more complex than an ops structure.

If you're reading the current version and find ovpn_peer_add, you see
directly that it'll do either ovpn_peer_add_mp or
ovpn_peer_add_p2p. With an ops structure, you'd have a call to
ovpn->ops->peer_add, and you'd have to look up all possible ops
structures to know that it can be either ovpn_peer_add_mp or
ovpn_peer_add_p2p. If there's an undefined number of implementations
living in different modules (like net_device_ops, or L4 protocols),
you don't have a choice.
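
For illustration, a minimal sketch of the two shapes being compared; only
ovpn_peer_add(), ovpn_peer_add_mp() and ovpn_peer_add_p2p() come from the patch,
while the private struct layout and the ops names below are hypothetical (and the
two variants are alternatives, not meant to coexist):

/* switch-based dispatch: both possible callees are visible right at the
 * call site
 */
static int ovpn_peer_add(struct ovpn_priv *ovpn, struct ovpn_peer *peer)
{
        switch (ovpn->mode) {
        case OVPN_MODE_MP:
                return ovpn_peer_add_mp(ovpn, peer);
        case OVPN_MODE_P2P:
                return ovpn_peer_add_p2p(ovpn, peer);
        }

        return -EOPNOTSUPP;
}

/* hypothetical ops-based dispatch: the reader first has to find out which
 * ops table ovpn->ops points to before knowing what actually runs
 */
struct ovpn_mode_ops {
        int (*peer_add)(struct ovpn_priv *ovpn, struct ovpn_peer *peer);
};

static int ovpn_peer_add(struct ovpn_priv *ovpn, struct ovpn_peer *peer)
{
        return ovpn->ops->peer_add(ovpn, peer);
}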

xfrm went the opposite way to what you're proposing a few years ago
(see commit 0c620e97b349 ("xfrm: remove output indirection from
xfrm_mode") and others), and it made the code simpler.

I checked this. Florian did a nice rework, and the way it was implemented looks reasonable, since there are more than two encapsulation modes and the handling is more complex than just selecting a function to call.

What I don't like about switches is that they require extra lines of code and push an author to introduce a default case with error handling. It was mentioned that the module is unlikely to ever support more than two modes. In this context, shall we consider using the ternary operator? E.g.:

The default case can actually be dropped. That way we can have the compiler warn when one of the enum values is not handled in the switch (should a new one be added at some point). However, the default is just a sanity check against future code changes which may introduce a bug.


next_run = ovpn->mode == OVPN_MODE_P2P ?
            ovpn_peer_keepalive_work_p2p(...) :
            ovpn_peer_keepalive_work_mp(...);

I find this ugly to read :-)
The switch is much more elegant and straightforward.
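
To make the comparison concrete, here is the same dispatch written as a switch
without a default case; the wrapper name and the reduced argument lists are
simplifications, not the actual code:

static time64_t ovpn_peer_keepalive_next_run(struct ovpn_priv *ovpn)
{
        switch (ovpn->mode) {
        case OVPN_MODE_P2P:
                return ovpn_peer_keepalive_work_p2p(ovpn);
        case OVPN_MODE_MP:
                return ovpn_peer_keepalive_work_mp(ovpn);
        /* no default case: with -Wswitch the compiler flags any new
         * OVPN_MODE_* value that is not handled here
         */
        }

        return 0;
}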

Do you agree this is getting more into a bike shed coloring discussion? :-D

Since there is not much gain in changing the approach, I think it is better if the maintainer picks the style that he finds more suitable (or simply likes more), no?


And back to the hashtable(s) size for the MP mode. An 8k-bin table looks like a
good choice for a normal server with a 1-2Gb uplink serving up to 1k
connections. But it is still unclear how this choice affects installations
with a bigger number of connections. And is this module applicable to
embedded solutions? E.g. running a couple of VPN servers on a home router
with a few actual connections looks like a waste of RAM. I was about to
suggest using rhashtable due to its dynamic sizing feature, but the module
needs three tables. Any better idea?

For this initial implementation I think it's fine. Sure, converting to
rhashtable (or some other type of dynamically-sized hashtable, if
rhashtable doesn't fit) in the future would make sense. But I don't
think it's necessary to get the patches into net-next.

Agreed. It's in the pipeline (along with other features that I have already implemented), but it will come later.
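
Just to illustrate what such a conversion could look like when it happens (the
struct members, params and helper names below are assumptions, not the actual
ovpn layout), one of the fixed-size tables could become a self-sizing rhashtable
keyed by peer id:

#include <linux/rhashtable.h>

struct ovpn_peer {
        u32                     id;
        struct rhash_head       id_node;        /* linkage into the by-id table */
        /* ... */
};

static const struct rhashtable_params ovpn_peer_id_params = {
        .key_len                = sizeof_field(struct ovpn_peer, id),
        .key_offset             = offsetof(struct ovpn_peer, id),
        .head_offset            = offsetof(struct ovpn_peer, id_node),
        .automatic_shrinking    = true, /* table shrinks again when peers leave */
};

/* at interface setup */
static int ovpn_peers_by_id_init(struct rhashtable *tbl)
{
        return rhashtable_init(tbl, &ovpn_peer_id_params);
}

static int ovpn_peer_id_insert(struct rhashtable *tbl, struct ovpn_peer *peer)
{
        return rhashtable_insert_fast(tbl, &peer->id_node, ovpn_peer_id_params);
}

static struct ovpn_peer *ovpn_peer_id_lookup(struct rhashtable *tbl, u32 peer_id)
{
        return rhashtable_lookup_fast(tbl, &peer_id, ovpn_peer_id_params);
}

Each of the tables would need its own rhashtable_params (different key offset and
length), but they could all rely on the automatic resizing instead of a fixed
8k-bin allocation per netdev.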

Regards,


Makes sense. Thanks for sharing these thoughts.

--
Sergey

--
Antonio Quartulli
OpenVPN Inc.
