RE: EC backend benchmark

Thanks Loic. My replies are inline below.

Regards
Somnath
-----Original Message-----
From: Loic Dachary [mailto:loic@xxxxxxxxxxx]
Sent: Monday, May 11, 2015 3:02 PM
To: Somnath Roy
Cc: ceph-users@xxxxxxxxxxxxxx; Ceph Development
Subject: Re: EC backend benchmark

Hi,
[Sorry, I missed the body of your questions; here are my answers ;-]

On 11/05/2015 23:13, Somnath Roy wrote:
> Summary:
>
> -------------
>
>
>
> 1. Reads are doing pretty well: 4 rados bench clients are saturating the 40 GbE network. With more physical servers it scales almost linearly, saturating 40 GbE on both hosts.
>
>
>
> 2. As suspected with Ceph, the problem is again with writes. Throughput-wise the EC pool beats replicated pools by a significant margin, but it does not scale with multiple clients and is not saturating anything.
>
>
>
>
>
> So, my question is the following.
>
>
>
> 1. This probably has nothing to do with the EC backend; we are suffering from filestore inefficiencies. Do you think any tunable such as the EC stripe size (or anything else) would help here?

I think Mark Nelson would be in a better position than me to answer, as he has conducted many experiments with erasure coded pools.

[Somnath] Sure. Mark, any insight? :-)
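For reference, here is roughly how the stripe size can be experimented with when setting up the pool. This is only a sketch: the profile name, k/m values, and PG count are placeholders, `ruleset-failure-domain` is the hammer-era spelling of the option, and the stripe-width config option name should be checked against your Ceph version.

```shell
# Create an erasure-code profile; k/m values here are examples only.
ceph osd erasure-code-profile set bench-profile \
    k=4 m=2 ruleset-failure-domain=osd

# Stripe width (bytes) is a global config option in hammer-era releases;
# larger values mean fewer, bigger chunk writes per object. Set it in
# ceph.conf before creating the pool, e.g.:
#   osd pool erasure code stripe width = 65536

# Create the pool using the profile (128 PGs as an example).
ceph osd pool create ecbench 128 128 erasure bench-profile
```

Rerunning rados bench against pools created with different stripe widths would show whether it moves the write numbers at all.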

> 2. I couldn't set the failure domain to 'host' because of a HW limitation. Do you think that will affect performance for bigger k values?

I don't see a reason why there would be a direct relationship between the failure domain and the value of k. Do you have a specific example in mind?

[Somnath] Nope, other than more network hops. If the failure domain is OSD, more than one chunk of an object could land on the same host. But since I have 40 GbE and am not saturating the network bandwidth (and for a bigger cluster that probability is lower), IMO it shouldn't matter. I just wanted to check with you.
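To illustrate the placement point, the two failure domains differ only in the profile (names hypothetical; `ruleset-failure-domain` is the hammer-era spelling, later renamed `crush-failure-domain`):

```shell
# Failure domain 'osd': chunks of one object may share a host, so fewer
# network hops on average, but a host failure can take out several chunks.
ceph osd erasure-code-profile set ec-osd k=6 m=2 ruleset-failure-domain=osd

# Failure domain 'host': every chunk lands on a distinct host, which
# requires at least k+m hosts -- the HW limitation mentioned above.
ceph osd erasure-code-profile set ec-host k=6 m=2 ruleset-failure-domain=host
```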

> 3. Even though writes are not saturating the 40 GbE link, do you think separating the public and cluster networks would help performance?

I don't think so. What is the bottleneck: CPU or disk I/O?

[Somnath] For writes, no resource (CPU/network/disk) is saturated.
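For completeness, splitting the networks is just a ceph.conf change, so it is cheap to test (the subnets below are placeholders):

```ini
[global]
# Client traffic (rados bench, librados) goes over the public network.
public network = 10.0.0.0/24
# Replication and EC chunk traffic between OSDs goes over the cluster network.
cluster network = 10.0.1.0/24
```

But if nothing is saturated, moving traffic to a second link is unlikely to change the write numbers; it mostly helps when the single link itself is the bottleneck.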

Cheers

--
Loïc Dachary, Artisan Logiciel Libre







