Re: [PATCH RFC net-next v1 1/6] ethtool: add interface to read Tx hardware timestamping statistics

On 2/23/2024 11:24 AM, Rahul Rameshbabu wrote:
> +/**
> + * struct ethtool_ts_stats - HW timestamping statistics
> + * @layer: input field denoting whether stats should be queried from the DMA or
> + *        PHY timestamping layer. Defaults to the active layer for packet
> + *        timestamping.
> + * @tx_stats: struct group for TX HW timestamping
> + *	@pkts: Number of packets successfully timestamped by the queried
> + *	      layer.
> + *	@lost: Number of packet timestamps that failed to get applied on a
> + *	      packet by the queried layer.
> + *	@late: Number of packet timestamps that were delivered by the
> + *	      hardware but were lost due to arriving too late.
> + *	@err: Number of timestamping errors that occurred on the queried
> + *	     layer.
> + */
> +struct ethtool_ts_stats {
> +	enum ethtool_ts_stats_layer layer;
> +	struct_group(tx_stats,
> +		u64 pkts;
> +		u64 lost;
> +		u64 late;
> +		u64 err;
> +	);
> +};

The Intel ice driver has the following Tx timestamp statistics:

tx_hwtstamp_skipped - indicates we got a Tx timestamp request but were
unable to fulfill it.
tx_hwtstamp_timeouts - indicates a Tx timestamp skb was waiting for a
timestamp from hardware but the timestamp was not received within some
internal time limit.
tx_hwtstamp_flushed - indicates that we flushed an outstanding timestamp
before it completed, such as when the link resets or similar.
tx_hwtstamp_discarded - indicates that we obtained a timestamp from
hardware but were unable to complete it due to invalid cached data used
for timestamp extension.
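
For reference, these live as plain counters in struct ice_ptp today,
roughly like the below (field types and placement are from memory, so
treat this as a sketch rather than the exact layout):

struct ice_ptp {
	...
	/* Tx timestamp statistics the driver already keeps */
	u32 tx_hwtstamp_skipped;
	u32 tx_hwtstamp_timeouts;
	u32 tx_hwtstamp_flushed;
	u32 tx_hwtstamp_discarded;
	...
};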

I think these could be translated roughly to one of the lost, late, or
err stats. I am a bit confused as to how drivers could distinguish
between lost and late, but I guess that depends on the specific hardware
design.

In theory we could keep some of these more detailed stats but I don't
think we strictly need to be as detailed as the ice driver is.

The only major addition I think is the skipped stat, which I would
prefer to have. Perhaps that could be tracked in the netdev layer by
checking the skb flags to see whether or not the driver actually set the
appropriate flag?
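
Something like the below is what I have in mind; the hook point and the
per-netdev counter are made up here, I just want to illustrate the flag
check (SKBTX_HW_TSTAMP was requested but the driver never set
SKBTX_IN_PROGRESS):

/* Illustration only: where exactly this runs, and how the skb is kept
 * around long enough to inspect, is hand-waved. ts_stats_skipped is a
 * hypothetical counter that does not exist in struct net_device today.
 */
static void netdev_count_skipped_hwtstamp(struct net_device *dev,
					  struct sk_buff *skb)
{
	u8 tx_flags = skb_shinfo(skb)->tx_flags;

	/* Timestamp was requested but the driver never started one */
	if ((tx_flags & SKBTX_HW_TSTAMP) && !(tx_flags & SKBTX_IN_PROGRESS))
		dev->ts_stats_skipped++;
}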

I think I can otherwise translate the flushed stat to the lost category,
the timeout to the late category, and everything else to the error
category. I can easily add a counter to track completed timestamps as
well.
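
To make that concrete, the ice translation could look roughly like the
below. The callback name and signature are only a guess at what the new
ethtool op ends up being (and I am ignoring the layer selection here),
and tx_hwtstamp_good is the new counter for completed timestamps I
mention above:

static void ice_get_ts_stats(struct net_device *netdev,
			     struct ethtool_ts_stats *ts_stats)
{
	struct ice_pf *pf = ice_netdev_to_pf(netdev);

	ts_stats->pkts = pf->ptp.tx_hwtstamp_good;	/* new counter */
	ts_stats->late = pf->ptp.tx_hwtstamp_timeouts;	/* timeout -> late */
	ts_stats->lost = pf->ptp.tx_hwtstamp_flushed;	/* flushed -> lost */
	/* everything else -> err; skipped would ideally be its own stat */
	ts_stats->err = pf->ptp.tx_hwtstamp_discarded;
}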

TL;DR: I would like to see a "skipped" category, since I think that
should be distinguished from general errors.

Thanks!



