Re: Massive performance issues

> 
> ----- On 14 Mar 25, at 8:40, joachim kraftmayer joachim.kraftmayer@xxxxxxxxx wrote:
> 
>> Hi Thomas & Anthony,
>> 
>> Anthony provided great recommendations.

Thanks!

>> ssd read performance:
>> I find the total number of PGs per SSD OSD too low; it could be twice as high.

They’re short on RAM, though, so I hesitate to suggest that.  YMMV.
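For context, a quick way to see how many PGs each OSD is actually carrying (stock ceph CLI; the autoscaler command assumes the pg_autoscaler module is enabled, which it is by default on recent releases):

    ceph osd df tree                   # PGS column = placement groups per OSD
    ceph osd pool autoscale-status     # current pg_num vs. what the autoscaler would suggest

That gives a concrete number to weigh against the RAM headroom before bumping pg_num anywhere.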

>> 
>> hdd read performance
>> What makes me a little suspicious is that the maximum throughput of about
>> 120 MB/s is exactly the maximum of a 1 Gbit/s connection.

Modulo framing bits and collisions, but good observation: 1 Gbit/s works out to roughly 117 MB/s of payload once Ethernet/IP/TCP overhead is accounted for, so topping out around 120 MB/s is suspicious.  I’ve seen a K8s deployment where some workers anomalously had a /32 prefix in their interface configuration, which sent intra-subnet traffic up to the router for no good reason.
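A quick way to spot that kind of misconfiguration (the address below is just a placeholder for a same-subnet peer, e.g. another OSD host):

    ip -4 addr show              # look for unexpected /32 masks on the public interface
    ip route get 10.0.0.42       # an on-subnet peer should show "dev <iface>", not "via <router>"

If the second command prints a "via" hop for a host that should be directly reachable, traffic is taking the scenic route through the router.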

>> (I have seen this in the past if the routing is not correct and if you use
>> VMs for testing the network could be limited.)

Now that I’m not typing on my phone … this suggests checking the networking on all nodes:

Assuming that your links are bonded for only a public network (example commands follow the list):

* Inspect /proc/net/bonding/bond0 on every host, and maybe run ethtool on each physical interface as well.
* Are both links active?
* Are their link speeds as expected? 
* Use iftop or another tool to see whether both links are carrying very roughly comparable traffic.
* Does netstat -i show a high number of errors?
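
Roughly the commands I’d use for the above (bond0/eth0/eth1 are just example names; substitute your actual bond and slave interfaces):

    cat /proc/net/bonding/bond0                  # bond mode, and "MII Status: up" for every slave
    ethtool eth0 | grep -E 'Speed|Duplex|Link'   # negotiated speed/duplex per physical link
    ethtool eth1 | grep -E 'Speed|Duplex|Link'
    iftop -i eth0                                # repeat on eth1; both should carry broadly similar traffic
    netstat -i                                   # RX-ERR / TX-ERR / DRP counters per interface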
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



