Re: Ceph dashboard reports CephNodeNetworkPacketErrors

Hi Dominique,

Data consistency should not be at risk from this: Ceph traffic runs over
TCP, so frames with errors are dropped by the NIC and retransmitted, which
costs latency rather than correctness. That said, it's still better to
find and fix the network problem.
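
If you want to double-check, Ceph's deep scrubs compare checksums across
replicas, so any real damage would surface in the cluster health. A quick
look (a generic check, nothing specific to your setup):

ceph status            # overall cluster health
ceph health detail     # would list inconsistent PGs, if any existed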

Perhaps look at the state of bond0:
cat /proc/net/bonding/bond0
As well as the usual network checks, for example:
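
Something like this, roughly (counter names vary by NIC driver; eno5 is
taken from your output):

ethtool eno5                                # negotiated speed/duplex, link detected
ethtool -S eno5 | grep -iE 'err|crc|drop'   # NIC-level error counters
dmesg | grep -i eno5                        # link flaps, driver complaints

A climbing rx_crc_errors there usually points to a bad cable, a bad
SFP/DAC module, or the switch port.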
________________________________________________________

Regards,

*David CASIER*
________________________________________________________




On Tue, Nov 7, 2023 at 11:20 AM Dominique Ramaekers <
dominique.ramaekers@xxxxxxxxxx> wrote:

> Hi,
>
> I've been using Ceph on a 4-host cluster for a year now. I recently
> discovered the Ceph Dashboard :-)
>
> Now I see that the Dashboard reports CephNodeNetworkPacketErrors >0.01%
> or >10 packets/s...
>
> Although all systems work great, I'm worried.
>
> 'ip -s link show eno5' results:
> 2: eno5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master
> bond0 state UP mode DEFAULT group default qlen 1000
>     link/ether 7a:3b:79:9c:f6:d1 brd ff:ff:ff:ff:ff:ff permaddr
> 5c:ba:2c:08:b3:90
>     RX:     bytes   packets errors dropped  missed   mcast
>      734153938129 645770129  20160       0       0  342301
>     TX:     bytes   packets errors dropped carrier collsns
>     1085134190597 923843839      0       0       0       0
>     altname enp178s0f0
>
> So on average about 0.003% (20160 / 645770129) of RX packets have errors!
>
> All four hosts use the same 10Gb HP switch. The hosts themselves are
> HP ProLiant Gen10 servers. I would expect 0% packet loss...
>
> Anyway. Should I be worried about data consistency? Or can Ceph handle
> this amount of packet errors?
>
> Greetings,
>
> Dominique.
>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



