Hi!

I'm currently facing a problem whose analysis requires understanding how long a DATA chunk stays in the socket receive queue. Basically, I'm trying to figure out the time between 1) the kernel SCTP stack making data available to the process (the socket becoming readable) and 2) the process actually performing an sctp_recvmsg() to retrieve the data.

Ideally I'd like to see a histogram of those latencies, to understand whether anything happening in the application process causes delayed reads. If anyone has encountered the same situation and/or is familiar with a solution, I'd appreciate any pointers.

I suppose I could attach a kprobe+kretprobe to sctp_poll() in order to determine when the socket becomes readable (return value & EPOLLIN), and then store the timestamp in a per-socket map containing a per-skb map of timestamps? But then a kprobe/kretprobe on sctp_recvmsg() won't be sufficient, as I cannot access the skb at either the function entry or the function exit, only somewhere inside the function. So it looks like a dead end?

Thanks in advance.

-- 
- Harald Welte <laforge@xxxxxxxxxxxx>          https://laforge.gnumonks.org/
============================================================================
"Privacy in residential applications is a desirable marketing option."
                                                 (ETSI EN 300 175-7 Ch. A6)
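
P.S.: To make a per-socket variant of the above idea a bit more concrete, below is a rough, untested BCC sketch of what I had in mind. It deliberately sidesteps the per-skb problem by keying everything on the struct sock only, so it measures from the moment the process's poll reports EPOLLIN until the next sctp_recvmsg() on that socket, not from when each individual skb was queued. The function signatures assumed here (sctp_poll() taking the struct socket as second argument, sctp_recvmsg() taking the struct sock as first argument) are what I see in the mainline sources I looked at and may differ on other kernel versions.

#!/usr/bin/env python3
# Untested sketch: log2 histogram of the time between sctp_poll()
# reporting EPOLLIN on a socket and the next sctp_recvmsg() on that
# same socket. Per-socket granularity only, no per-skb tracking.
from time import sleep
from bcc import BPF

bpf_text = r"""
#include <uapi/linux/ptrace.h>
#include <uapi/linux/eventpoll.h>
#include <net/sock.h>

BPF_HASH(polling, u64, struct sock *);      /* pid_tgid -> socket being polled */
BPF_HASH(readable_ts, struct sock *, u64);  /* socket -> ns when EPOLLIN seen  */
BPF_HISTOGRAM(dist);                        /* log2 histogram, microseconds    */

/* assumed: __poll_t sctp_poll(struct file *, struct socket *, poll_table *) */
int kprobe__sctp_poll(struct pt_regs *ctx, struct file *file,
                      struct socket *sock)
{
    u64 id = bpf_get_current_pid_tgid();
    struct sock *sk = sock->sk;

    /* remember which socket this task is polling, for the kretprobe */
    polling.update(&id, &sk);
    return 0;
}

int kretprobe__sctp_poll(struct pt_regs *ctx)
{
    u64 id = bpf_get_current_pid_tgid();
    struct sock **skp = polling.lookup(&id);
    if (!skp)
        return 0;
    struct sock *sk = *skp;
    polling.delete(&id);

    if (!(PT_REGS_RC(ctx) & (EPOLLIN | EPOLLRDNORM)))
        return 0;

    /* insert() keeps the oldest timestamp if the socket is polled
     * repeatedly while the data is still unread */
    u64 ts = bpf_ktime_get_ns();
    readable_ts.insert(&sk, &ts);
    return 0;
}

/* assumed: first argument of sctp_recvmsg() is struct sock *sk;
 * the rest of the signature differs between kernel versions */
int kprobe__sctp_recvmsg(struct pt_regs *ctx, struct sock *sk)
{
    u64 *tsp = readable_ts.lookup(&sk);
    if (!tsp)
        return 0;

    u64 delta_us = (bpf_ktime_get_ns() - *tsp) / 1000;
    dist.increment(bpf_log2l(delta_us));
    readable_ts.delete(&sk);
    return 0;
}
"""

b = BPF(text=bpf_text)
print("Tracing sctp_poll() -> sctp_recvmsg() latency... hit Ctrl-C to dump")
try:
    while True:
        sleep(1)
except KeyboardInterrupt:
    pass
b["dist"].print_log2_hist("usecs")

The obvious limitations: it only starts the clock when the process actually polls the socket (a purely blocking reader never shows up), and it approximates "kernel made data available" by "the process's poll saw EPOLLIN", which is good enough for spotting delayed reads but not for per-chunk queueing delay.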