Re: Optimal configuration to validate Ceph

If you create a pool with size 1 (no replication), (2) should be
somewhere around 3x the speed of (1) assuming the client workload has
enough parallelism and is well distributed over objects (so a random
rbd workload with a large queue depth rather than a small sequential
workload with a small queue depth).  If you have 3x replication on
(2), but 1x on (1), you should expect (2) to be pretty close to (1),
perhaps a bit slower due to replication latency.
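For reference, a minimal sketch of how to set up the size-1 pool described above and drive it with a parallel workload. This assumes a running test cluster; the pool name and parameters are illustrative, and `rados bench -t` is what provides the queue depth / parallelism mentioned above:

```shell
# Create a test pool with no replication (size 1) -- for benchmarking
# only, never for real data. 128 is an illustrative PG count.
ceph osd pool create bench-test 128 128
ceph osd pool set bench-test size 1
ceph osd pool set bench-test min_size 1

# 60 seconds of parallel object writes with 16 concurrent operations
# (-t sets the number of in-flight ops, i.e. the queue depth).
# --no-cleanup keeps the objects so they can be read back.
rados bench -p bench-test 60 write -t 16 --no-cleanup

# Random reads against the same objects, same concurrency.
rados bench -p bench-test 60 rand -t 16

# Remove the benchmark objects and the pool.
rados -p bench-test cleanup
ceph osd pool delete bench-test bench-test --yes-i-really-really-mean-it
```

Comparing the same run on setup (1) and setup (2), with size set to 1 and then 3 on the three-node pool, separates the scaling effect from the replication cost.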

The details will actually depend a lot on the workload.  Do you intend
to use rbd?
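If rbd is the intended workload, it can be exercised directly instead of going through `rados bench`. A hedged sketch, assuming a test pool (here `bench-test`) already exists; the image name and sizes are illustrative:

```shell
# Create a 10 GB test image (size is in MB; image format/feature
# defaults vary by release).
rbd create bench-img --pool bench-test --size 10240

# Random-write benchmark with a deep queue: 16 concurrent 4 KB IOs,
# 1 GB total -- approximates the "random rbd workload with a large
# queue depth" case.
rbd bench-write bench-img --pool bench-test \
    --io-size 4096 --io-threads 16 --io-total 1073741824 --io-pattern rand

# Clean up the test image.
rbd rm bench-img --pool bench-test
```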
-Sam

On Fri, Aug 23, 2013 at 2:41 PM, Sushma R <gsushma@xxxxxxxxx> wrote:
> Hi,
>
> I understand that Ceph is a scalable distributed storage architecture.
> However, I'd like to understand whether performance on a single-node
> cluster is better or worse than on a three-node cluster.
> Let's say I have the following 2 setups:
> 1. Single node cluster with one OSD.
> 2. Three node cluster with one OSD on each node.
>
> Would the performance of Setup 2 be approximately 3x that of Setup 1? (OR)
> Would Setup 2 perform better than 3x Setup 1, because of more parallelism?
> (OR)
> Would Setup 2 perform worse than 3x Setup 1, because of replication
> overhead, etc.?
>
> In other words, I'm trying to understand whether we definitely need three
> or more nodes to validate benchmark results, or whether a single- or
> two-node setup gives a reasonable idea of performance at larger scale.
>
> Thanks,
> Sushma
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>