Re: Newbie question re: ceph performance


On 04/01/2013 06:07 AM, Papaspyrou, Alexander wrote:
Folks,

we are in the process of setting up a ceph cluster with about 40 OSDs
spread over 25 or so machines within our hosting provider's infrastructure.

Unfortunately, we have certain limitations from the provider side that
we cannot really overcome:

1. We only have one public network, no cluster network, and would like
to host OpenStack Glance and Cinder on Ceph RBD. Are we going to
experience obvious performance problems (i.e., is it not even worth
bothering to set the whole thing up)? Network bandwidth is up to 1 Gbit/s.

Latency spikes can be an issue depending on how the network is set up. You may just need to test it and see how it goes. What kind of performance would you like to see?
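As a rough back-of-envelope for the shared-network question (assuming the common 3x replication pool size and ignoring duplex effects, neither of which is stated in the original post): with no separate cluster network, replication traffic shares the same 1 Gbit/s link as client traffic, so each byte a client writes crosses the wire roughly once per replica.

```python
# Back-of-envelope: on a shared 1 Gbit/s link with no cluster network,
# each client write also generates replication traffic over the same
# NIC, so the usable client write bandwidth shrinks accordingly.

LINK_GBIT = 1.0                    # advertised network bandwidth
WIRE_MB_S = LINK_GBIT * 1000 / 8   # ~125 MB/s theoretical maximum
REPLICAS = 3                       # assumed pool size (common default)

# The primary OSD receives the write once and forwards it to the other
# replicas, so roughly REPLICAS copies of the data traverse the network
# per client byte written.
sustained_write_mb_s = WIRE_MB_S / REPLICAS

print(f"wire speed:        {WIRE_MB_S:.0f} MB/s")
print(f"write ceiling:    ~{sustained_write_mb_s:.0f} MB/s sustained")
```

The exact ceiling depends on duplex, topology, and how PGs spread across hosts, but it gives a sense of what "obvious performance problems" would look like at 1 Gbit/s.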


2. Networking is "100% switched, no collision domains", as our provider
says. They won't really tell us what that means (for security reasons,
whoohooo…), but I guess that they isolate hosts from each other within
the same subnet to ensure that you cannot sniff another tenant's
traffic. Is this going to be a problem with ceph?

I don't think it should be, so long as all of the ports Ceph uses are open between the hosts.
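One quick way to verify that on a "100% switched" network is a plain TCP reachability probe between hosts. A minimal sketch (the hostnames in the comments are hypothetical; by default monitors listen on 6789/tcp and OSD daemons bind to ports from 6800 upward):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, timed out, DNS failure
        return False

# Hypothetical usage against your own hosts, e.g.:
#   tcp_port_open("mon1.example.com", 6789)   # monitor port (default)
#   tcp_port_open("osd1.example.com", 6800)   # OSDs bind from 6800 upward
```

Running this from each host toward every other host's mon/OSD ports would confirm the provider's isolation scheme doesn't block intra-cluster traffic.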


3. How many MONs do we really need for such a setup? The
machines running the MONs are quite powerful (16 GB RAM and eight
cores), and we are planning to use them for Ceph (and maybe messaging
with RabbitMQ). Is this realistic, oversized, or less than we need?

1 should be fine for testing, but I'd suggest you run with 3 in production for extra redundancy.


I don't really look for precise statistics (I couldn't provide all
parameters anyway at the moment), just for the gut feeling of the more
experienced users here…

Ceph will run on almost any combination of hardware, but from a performance perspective it loves the OSD nodes to be homogeneous with stable throughput and latency (both disk and network). The more consistently the OSDs perform, the better the overall cluster throughput will be.
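The reason heterogeneity hurts can be sketched simply (the latency numbers below are hypothetical, purely for illustration): a replicated write is acknowledged only after every replica has committed it, so the slowest member of each replica set sets the pace.

```python
# Sketch: a replicated write completes only when all replicas have
# committed, so the slowest OSD in the replica set dictates latency.

def write_latency_ms(replica_latencies_ms):
    """Latency of one replicated write: the slowest replica wins."""
    return max(replica_latencies_ms)

uniform = [10, 10, 10]   # homogeneous OSDs (hypothetical numbers)
skewed  = [10, 10, 80]   # one slow disk or NIC in the replica set

print(write_latency_ms(uniform))
print(write_latency_ms(skewed))
```

One laggard OSD here makes the whole write 8x slower, even though two thirds of the set is fast, which is why consistent per-OSD throughput and latency matter more than raw peak numbers.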


Thanks,
Alexander

--
*adesso mobile solutions GmbH*
Alexander Papaspyrou
System Architect
IT Operations

Stockholmer Allee 24 | 44269 Dortmund
T +49 231 930 66480 | F +49 231 930 9317
Mail: papaspyrou@xxxxxxxxxxxxxxxx <mailto:papaspyrou@xxxxxxxxxxxxxxxx> |
Web: www.adesso-mobile.de <http://www.adesso-mobile.de/> | Mobile web:
mobil.adesso-mobile.de <http://mobil.adesso-mobile.de/>
Authorized managing directors: Dr. Josef Brewing, Frank Dobelmann
Court of registration: Amtsgericht Dortmund
Registration number: HRB 13763
VAT identification number per § 27 a of the German VAT Act:
DE201541832



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com






