Re: Optimal configuration to validate Ceph

Up to the point that you saturate the network, sure.  Note that rados
bench defaults to 16 concurrent writes in flight, so I would not expect
a single rados bench client with 16 concurrent writes to show linear
scaling past 16 OSDs (perhaps 32 if you have replication enabled).  For
larger numbers of OSDs, you'll need more concurrent writes (either more
clients, or a larger number of outstanding writes on each client).
-Sam

On Mon, Aug 26, 2013 at 3:51 PM, Sushma R <gsushma@xxxxxxxxx> wrote:
> Thanks for the response.
> Yes, we intend to use rbd and radosgw eventually.
> However, for evaluation we are using rados bench, and we are seeing
> ~50 MB/sec with a single OSD (on SSDs). We added more OSDs to the same
> server and performance scaled linearly.
> Can we assume that performance with multiple OSDs on a "single" server
> (without saturating CPU) would be at least as good as the same number of
> OSDs spread across multiple servers, since no network latency is
> involved?
>
>
>
> On Mon, Aug 26, 2013 at 1:47 PM, Samuel Just <sam.just@xxxxxxxxxxx> wrote:
>>
>> If you create a pool with size 1 (no replication), (2) should be
>> somewhere around 3x the speed of (1) assuming the client workload has
>> enough parallelism and is well distributed over objects (so a random
>> rbd workload with a large queue depth rather than a small sequential
>> workload with a small queue depth).  If you have 3x replication on
>> (2), but 1x on (1), you should expect (2) to be pretty close to (1),
>> perhaps a bit slower due to replication latency.
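>>
>> As a concrete sketch (not from the original message; the pool name and PG
>> count are just placeholders, using the command syntax of the Ceph releases
>> current at the time), a 1x pool for such a test could be created with:
>>
>>   ceph osd pool create bench 128 128
>>   ceph osd pool set bench size 1
>>
>> and the replicated case is the same pool with "size 3" instead.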
>>
>> The details will actually depend a lot on the workload.  Do you intend
>> to use rbd?
>> -Sam
>>
>> On Fri, Aug 23, 2013 at 2:41 PM, Sushma R <gsushma@xxxxxxxxx> wrote:
>> > Hi,
>> >
>> > I understand that Ceph is a scalable distributed storage architecture.
>> > However, I'd like to understand whether performance on a single-node
>> > cluster is better or worse than on a 3-node cluster.
>> > Let's say I have the following 2 setups:
>> > 1. Single node cluster with one OSD.
>> > 2. Three node cluster with one OSD on each node.
>> >
>> > Would the performance of Setup 2 be approximately 3x that of Setup 1?
>> > Or would Setup 2 perform better than 3x Setup 1, because of more
>> > parallelism?
>> > Or would Setup 2 perform worse than 3x Setup 1, because of replication,
>> > etc.?
>> >
>> > In other words, I'm trying to understand whether we definitely need more
>> > than three nodes to validate the benchmark results, or whether a single
>> > node or two should give an idea of behavior at larger scale.
>> >
>> > Thanks,
>> > Sushma
>> >
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



