On 2/24/23 2:32 PM, Stanislav Fomichev wrote:
+	unsigned int cur_sk;
+	unsigned int end_sk;
+	unsigned int max_sk;
+	struct sock **batch;
+	bool st_bucket_done;
Any chance we can generalize some of those across tcp & udp? I haven't
looked too deeply, but a lot of this looks like a plain copy-paste
from the tcp batching. Or is it not worth it?
The batching has some small but subtle differences between tcp and udp, so I am
not sure it would end up sharing enough code.
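For context, a minimal sketch of how batch state like this is typically
consumed (the struct name follows the patch; the helper name and the
refill/resume details here are hypothetical):

	/* Hypothetical helper: hand out batched sockets one at a time.
	 * cur_sk/end_sk index into batch[] (capacity max_sk), and
	 * st_bucket_done records whether the whole hash bucket fit in
	 * the batch, i.e. whether the bucket must be revisited.
	 */
	static struct sock *iter_batch_next(struct bpf_udp_iter_state *iter)
	{
		if (iter->cur_sk == iter->end_sk)
			return NULL;	/* batch drained: refill or move on */

		return iter->batch[iter->cur_sk++];
	}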
static int udp_prog_seq_show(struct bpf_prog *prog, struct bpf_iter_meta *meta,
			     struct udp_sock *udp_sk, uid_t uid, int bucket)
{
@@ -3172,18 +3307,34 @@ static int bpf_iter_udp_seq_show(struct seq_file *seq, void *v)
	struct bpf_prog *prog;
	struct sock *sk = v;
	uid_t uid;
+	bool slow;
+	int rc;

	if (v == SEQ_START_TOKEN)
		return 0;

+	slow = lock_sock_fast(sk);
Hm, I missed the fact that we're already using the fast lock in the tcp batching
as well. Should we not use fast locks here? On a loaded system it's
probably fair to pay some backlog processing in the path that goes
over every socket (here)? Martin, WDYT?
hmm... not sure if that is needed. The lock_sock_fast() was borrowed from
tcp_get_info(), which is also used in the inet_diag iteration. A bpf iter prog
should be doing something pretty fast as well. In the future, we could allow the
bpf-iter program to acquire the lock by itself only when necessary, if the
current always-lock strategy turns out to be too expensive.
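For reference, the locking pattern being discussed looks roughly like this (a
sketch only: the sk_unhashed() check from the patch and the derivation of
bucket and meta are omitted):

	/* Run the bpf-iter prog with the socket lock held.  With
	 * lock_sock_fast(), the unlock side processes the backlog
	 * only when the slow path was taken.
	 */
	slow = lock_sock_fast(sk);
	uid = sock_i_uid(sk);
	meta.seq = seq;
	prog = bpf_iter_get_info(&meta, false);
	rc = udp_prog_seq_show(prog, &meta, udp_sk(sk), uid, bucket);
	unlock_sock_fast(sk, slow);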