Re: small cluster HW upgrade

You could of course optimize ceph-osd for this. It would benefit people 
who like to use 1Gbit connections. I can understand that putting time 
into it now does not make sense given the availability of 10Gbit, but I 
do not get why this was not optimized 5 or 10 years ago.
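
As a rough sketch of what using two links concurrently could look 
like, here is a minimal Python example (not Ceph code; the function 
names and framing are made up for illustration) that stripes one 
logical stream over several TCP connections. With a layer3+4 hash 
policy each connection gets its own ephemeral source port, so 
different connections can land on different slaves, while a single 
connection can never exceed one link:

    import socket

    def open_striped_connections(host, port, n):
        # Open n parallel TCP connections to the same endpoint.
        return [socket.create_connection((host, port)) for _ in range(n)]

    def send_striped(conns, data, chunk=65536):
        # Round-robin fixed-size chunks across the connections. A real
        # protocol would also frame each chunk with a sequence number
        # so the receiver can reassemble the stream in order.
        for i in range(0, len(data), chunk):
            conns[(i // chunk) % len(conns)].sendall(data[i:i + chunk])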


-----Original Message-----
Cc: ceph-users; mrxlazuardin
Subject: Re: small cluster HW upgrade

This is a natural condition of bonding; it has little to do with 
ceph-osd.

Make sure your hash policy is set appropriately, so that you even have 
a chance of using both links.

https://support.packet.com/kb/articles/lacp-bonding

The larger the set of destinations, the more likely you are to spread 
traffic across both links.
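
To see why the number of destinations matters, here is a small Python 
sketch of the layer3+4 transmit hash as described in the kernel's 
bonding documentation (simplified to the IPv4, non-fragmented TCP 
case; the addresses and ports below are made-up examples, with 6800 
standing in for a typical OSD port). One flow always maps to the same 
slave; many flows spread out statistically:

    import ipaddress
    import random

    def layer3_4_hash(src_ip, dst_ip, src_port, dst_port):
        # Simplified layer3+4 policy from
        # Documentation/networking/bonding.rst: XOR the ports, XOR in
        # both IP addresses, then fold the result.
        h = src_port ^ dst_port
        h ^= int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
        h ^= h >> 16
        h ^= h >> 8
        return h

    NUM_SLAVES = 2  # two bonded 1GbE links

    # A single TCP connection hashes to the same slave every time, so
    # bonding cannot make one flow faster than one link.
    print(layer3_4_hash("10.0.0.1", "10.0.0.2", 45678, 6800) % NUM_SLAVES)

    # Many flows to many destinations spread across both links.
    random.seed(1)
    counts = [0, 0]
    for _ in range(1000):
        dst = "10.0.0.%d" % random.randint(2, 254)
        sport = random.randint(32768, 60999)  # ephemeral port range
        counts[layer3_4_hash("10.0.0.1", dst, sport, 6800) % NUM_SLAVES] += 1
    print(counts)  # roughly even split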



> OSDs do not even use bonding efficiently. If they were to use two 
> links concurrently it would be a lot better.
> 
> https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg35474.html
> 
> 
> 
> -----Original Message-----
> To: ceph-users@xxxxxxx
> Subject: Re: small cluster HW upgrade
> 
> Hi Philipp,
> 
> More nodes are better: more availability, more CPU, and more RAM. But 
> I agree that your 1GbE links will be the most limiting factor, 
> especially if there are some SSDs. I suggest you upgrade your 
> networking to 10GbE (or 25GbE, since it will cost you nearly the same 
> as 10GbE). Upgrading your networking is better than using bonding, 
> since bonding cannot give you 100% of the total link bandwidth.
> 
> Best regards,


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


