Re: Fwd: nvsp_rndis_pkt_complete error status and net_ratelimit: callbacks suppressed messages on 6.4.0rc4

Hi, Thorsten here, the Linux kernel's regression tracker.

On 30.05.23 14:25, Bagas Sanjaya wrote:
> 
> I notice a regression report on Bugzilla [1]. Quoting from it:

Hmmm, nobody replied to this yet (or am I missing something?). It doesn't
seem like anything urgent, but nevertheless:

Michael, Bagas didn't make it obvious at all, hence please allow me to
ask: did you notice that this is a regression that is apparently caused
by a commit of yours?

Ciao, Thorsten (wearing his 'the Linux kernel's regression tracker' hat)
--
Everything you wanna know about Linux kernel regression tracking:
https://linux-regtracking.leemhuis.info/about/#tldr
If I did something stupid, please tell me, as explained on that page.

#regzbot poke

>> After building 6.4.0rc4 for my VM running on a Windows 10 Hyper-V host, I see the following:
>>
>> [  756.697753] net_ratelimit: 34 callbacks suppressed
>> [  756.697806] hv_netvsc cd9dd876-2fa9-4764-baa7-b44482f85f9f eth0: nvsp_rndis_pkt_complete error status: 2
>> (snipped repeated messages)
>>
>> *but*, I'm only able to reliably reproduce this if I am generating garbage on another terminal, e.g. sudo strings /dev/sda
>>
>>
>> This doesn't appear to affect latency or bandwidth a huge amount; I ran an iperf3 test between the guest and the host while trying to cause these messages.
>> Although, if you take 17-18 gigabit as the "base" speed, you can see it drop a bit to 16 gigabit while the errors happen and "catch up" when I stop spamming the console.
>>
>> [  5]  99.00-100.00 sec  1.89 GBytes  16.2 Gbits/sec
>> [  5] 100.00-101.00 sec  1.91 GBytes  16.4 Gbits/sec
>> [  5] 101.00-102.00 sec  1.91 GBytes  16.4 Gbits/sec
>> [  5] 102.00-103.00 sec  1.91 GBytes  16.4 Gbits/sec
>> [  5] 103.00-104.00 sec  1.92 GBytes  16.5 Gbits/sec
>> [  5] 104.00-105.00 sec  1.94 GBytes  16.6 Gbits/sec
>> [  5] 105.00-106.00 sec  1.89 GBytes  16.2 Gbits/sec
>> [  5] 106.00-107.00 sec  1.90 GBytes  16.3 Gbits/sec
>> [  5] 107.00-108.00 sec  2.23 GBytes  19.2 Gbits/sec
>> [  5] 108.00-109.00 sec  2.57 GBytes  22.0 Gbits/sec
>> [  5] 109.00-110.00 sec  2.66 GBytes  22.9 Gbits/sec
>> [  5] 110.00-111.00 sec  2.64 GBytes  22.7 Gbits/sec
>> [  5] 111.00-112.00 sec  2.65 GBytes  22.7 Gbits/sec
>> [  5] 112.00-113.00 sec  2.65 GBytes  22.8 Gbits/sec
>> [  5] 113.00-114.00 sec  2.65 GBytes  22.8 Gbits/sec
>> [  5] 114.00-115.00 sec  2.65 GBytes  22.8 Gbits/sec
>> [  5] 115.00-116.00 sec  2.66 GBytes  22.9 Gbits/sec
>> [  5] 116.00-117.00 sec  2.63 GBytes  22.6 Gbits/sec
>> [  5] 117.00-118.00 sec  2.69 GBytes  23.1 Gbits/sec
>> [  5] 118.00-119.00 sec  2.66 GBytes  22.9 Gbits/sec
>> [  5] 119.00-120.00 sec  2.67 GBytes  22.9 Gbits/sec
>> [  5] 120.00-121.00 sec  2.66 GBytes  22.9 Gbits/sec
>> [  5] 121.00-122.00 sec  2.49 GBytes  21.4 Gbits/sec
>> [  5] 122.00-123.00 sec  2.15 GBytes  18.5 Gbits/sec
>> [  5] 123.00-124.00 sec  2.16 GBytes  18.6 Gbits/sec
>> [  5] 124.00-125.00 sec  2.16 GBytes  18.6 Gbits/sec
>>
> 
> See bugzilla for the full thread.
> 
> Anyway, I'm adding it to regzbot:
> 
> #regzbot introduced: dca5161f9bd052 https://bugzilla.kernel.org/show_bug.cgi?id=217503
> #regzbot title: net_ratelimit and nvsp_rndis_pkt_complete error due to SEND_RNDIS_PKT status check
> 
> Thanks.
> 
> [1]: https://bugzilla.kernel.org/show_bug.cgi?id=217503
> 
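The regzbot title above attributes these messages to the status check on
SEND_RNDIS_PKT completions introduced by commit dca5161f9bd052. Purely as an
illustration of that general pattern, and not the hv_netvsc driver's actual
code, here is a small self-contained userspace C sketch of a rate-limited
error log on a non-success completion status. All names, the "success" value,
and the rate-limit parameters below are hypothetical stand-ins:

/*
 * Illustrative sketch only: log an error for a non-success completion
 * status, but no more often than a crude rate limit allows, roughly
 * mimicking the kind of behaviour described in the report above.
 * Everything here is hypothetical; it is not the driver's code.
 */
#include <stdio.h>
#include <time.h>

#define FAKE_NVSP_STAT_SUCCESS 1   /* hypothetical "success" status code */
#define LOG_BURST              10  /* max messages per interval          */
#define LOG_INTERVAL_SEC       5   /* interval length in seconds         */

static int    printed_in_interval;
static int    suppressed_in_interval;
static time_t interval_start;

/* Rough stand-in for a net_ratelimit()-style helper: allow LOG_BURST
 * messages per LOG_INTERVAL_SEC and report how many were suppressed
 * once a new interval starts. */
static int fake_ratelimit(void)
{
	time_t now = time(NULL);

	if (now - interval_start >= LOG_INTERVAL_SEC) {
		if (suppressed_in_interval)
			printf("fake_ratelimit: %d callbacks suppressed\n",
			       suppressed_in_interval);
		interval_start = now;
		printed_in_interval = 0;
		suppressed_in_interval = 0;
	}
	if (printed_in_interval < LOG_BURST) {
		printed_in_interval++;
		return 1;           /* caller may log this message */
	}
	suppressed_in_interval++;
	return 0;                   /* suppress this message       */
}

/* Hypothetical completion handler: log only when the reported status
 * is not "success", and only as often as the rate limit permits. */
static void handle_pkt_complete(unsigned int status)
{
	if (status != FAKE_NVSP_STAT_SUCCESS && fake_ratelimit())
		printf("eth0: nvsp_rndis_pkt_complete error status: %u\n",
		       status);
}

int main(void)
{
	interval_start = time(NULL);

	/* Simulate a burst of failed completions, as under heavy load;
	 * "2" matches the status value seen in the reported logs. */
	for (int i = 0; i < 100; i++)
		handle_pkt_complete(2);

	return 0;
}

Under these assumptions, a burst of failed completions produces a limited
number of error lines followed by a "callbacks suppressed" summary, which is
the same shape as the dmesg output quoted in the report.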



