Re: output discards (queue drops) on switchport

Hi,

On 08.09.2017 16:25, Burkhard Linke wrote:
>>> Regarding the drops (and without any experience with either 25GBit
>>> Ethernet or the Arista switches):
>>> Do you have corresponding input drops on the server's network ports?
>> No input drops, just output drops
> Output drops on the switch are related to input drops on the server side. If
> the link uses flow control and the server signals the switch that its internal
> buffers are full, the switch has to drop further packets if the port buffer is
> also full. If there is no flow control and the network card is not able to
> store the packet (full buffers...), it should be noted as an overrun in the
> interface statistics (and if this is not correct, please correct me, I'm not a
> network guy...).

I cannot see any errors, drops or overruns in the network statistics. Flow
control is off.
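
(For reference, this can be checked with something like the following;
eth0 is a placeholder for the actual interface name:)

# pause frame / flow control settings
ethtool -a eth0
# NIC-level drop/discard/overrun counters
ethtool -S eth0 | grep -iE 'drop|discard|err|overrun'
# kernel-level rx/tx statistics, incl. dropped and overrun
ip -s link show eth0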

I moved the traffic to a 10 GBit/s link via an Intel NIC with flow control
enabled. No errors on the server, but still output drops on the switch side.
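
(In case someone wants to reproduce the setup: flow control was toggled
roughly like this, assuming the Intel NIC shows up as eth0:)

# enable rx/tx pause frames on the NIC
ethtool -A eth0 rx on tx on
# verify the (negotiated) pause settings afterwards
ethtool -a eth0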

>>> Did you tune the network settings on server side for high throughput, e.g.
>>> net.ipv4.tcp_rmem, wmem, ...?
>> sysctl tuning is disabled at the moment. I tried the sysctl examples from
>> https://fatmin.com/2015/08/19/ceph-tcp-performance-tuning/, but there is
>> still the same number of output drops.
>>
>>> And are the CPUs fast enough to handle the network traffic?
>> Xeon(R) CPU E5-1660 v4 @ 3.20GHz should be fast enough. But I'm unsure. It's
>> my first Ceph cluster.
> The CPU has 6 cores, and you are driving 2x 10 GBit, 2x 25 GBit, the RAID
> controller and 8 SSD-based OSDs with it. You can use tools like atop or ntop
> to watch certain aspects of the system during the tests (network, CPU, disk).
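
(For context, the sysctl tuning I tried is along these lines; the exact
values are illustrative, see the link above for the full set:)

# larger socket buffers for high-throughput TCP links
sysctl -w net.core.rmem_max=56623104
sysctl -w net.core.wmem_max=56623104
sysctl -w net.ipv4.tcp_rmem="4096 87380 56623104"
sysctl -w net.ipv4.tcp_wmem="4096 65536 56623104"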

It's an 8-core system. Looking at top, iotop & co. while writing a 10 GB file
inside a VM shows no real load.
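
(Watched with something like the following; mpstat and iostat are from
the sysstat package:)

# per-core CPU utilisation, incl. softirq time, refreshed every second
mpstat -P ALL 1
# per-device disk utilisation alongside it
iostat -x 1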

root@testvm:/home# dd if=/dev/urandom of=urandom.0 bs=10M count=1024
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 52.7775 s, 203 MB/s
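
(Side note on the test itself: /dev/urandom can cap the rate at a few
hundred MB/s on its own, so a variant like this may stress the storage
path harder; zero.0 is just an example file name:)

# write zeroes with direct I/O, bypassing both the RNG and the page cache
dd if=/dev/zero of=zero.0 bs=10M count=1024 oflag=direct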

The 25G link is also not saturated:

                           rx         |       tx
--------------------------------------+------------------
  bytes                     9.51 GiB  |        3.80 GiB
--------------------------------------+------------------
          max            8.10 Gbit/s  |     2.73 Gbit/s
      average            1.48 Gbit/s  |   590.35 Mbit/s
          min            2.64 Mbit/s  |     2.52 Mbit/s
--------------------------------------+------------------
  packets                    1227630  |          646260
--------------------------------------+------------------
          max             121360 p/s  |       53645 p/s
      average              22733 p/s  |       11967 p/s
          min                278 p/s  |         190 p/s
--------------------------------------+------------------

The drop rate at the switch port is around 1%.
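
(That is output discards divided by output packets on the port; on the
Arista side the per-port numbers should be visible with something like
the following, though the exact command may vary by EOS version:)

show interfaces counters discards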


Regards,
Andreas
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


