Re: Setting up a small experimental CEPH network

On 2020-09-20 12:19, Marc Roos wrote:
> 
> 
> - Pat yourself on the back for choosing Ceph; there are a lot of 
> experts (not including me :)) here willing to help (during office hours).
> - Decide what you want to use Ceph for, and how much storage you need.
> - Running just an OSD on a server doesn't have that many implications, 
> so you could rethink your test environment.
> - Read up on when you need high-frequency CPUs, how many cores, and how 
> many GB of RAM per OSD/TB.
> - Don't think a few 1 Gbit links can replace a >10 Gbit link; Ceph 
> doesn't use such bonds optimally. I already asked about this years ago. 
> Needing 10 GbE might make an SBC solution more costly than estimated.

My experience with bonding and Ceph is pretty good (Open vSwitch). Ceph
uses lots of TCP connections, and those can get shifted (balanced)
between interfaces depending on load.
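
For what it's worth, the kind of bond I mean is roughly the following
(bridge and NIC names are just placeholders, not my actual setup);
balance-tcp hashes on L3/L4, so the many Ceph TCP flows get spread over
the bond members:

  # bridge/NIC names are placeholders; the switch side needs LACP too
  ovs-vsctl add-br br-ceph
  ovs-vsctl add-bond br-ceph bond0 eth0 eth1 \
      bond_mode=balance-tcp lacp=active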

The latency improvement of 10 Gb/s (and faster) is the main advantage,
IMHO. Sure, Ceph wants plenty of bandwidth when you want to rebalance /
backfill / recover your cluster. But it really depends on your
expectations and what you want to achieve. A three-node SolidRun
Honeycomb LX2K cluster [1] should be able to run anything you want and
provide decent performance.
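
If that recovery bandwidth is a concern and you do have a faster link,
one common way to keep backfill/recovery traffic off the client-facing
network is a separate cluster network; a minimal ceph.conf sketch
(subnets are just placeholders):

  [global]
  # OSD replication, backfill and recovery traffic uses the cluster network
  public_network  = 192.168.1.0/24
  cluster_network = 10.10.10.0/24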

But Ceph works best when you scale horizontally ... you might be
surprised what aggregate throughput you can get from a lot of small nodes.
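
If you want to put a number on that, rados bench against a scratch pool
is a quick way to get a feel for aggregate throughput (pool name,
runtime and thread count below are just examples):

  rados bench -p testpool 60 write -t 16 --no-cleanup
  rados bench -p testpool 60 seq -t 16
  rados -p testpool cleanup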

Gr. Stefan

[1]: https://www.solid-run.com/nxp-lx2160a-family/honeycomb-workstation/
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


