Re: rados block on SSD - performance - how to tune and get insight?

> On 2/7/19 8:41 AM, Brett Chancellor wrote:
>> This seems right. You are doing a single benchmark from a single client.
>> Your limiting factor will be the network latency. For most networks this
>> is between 0.2 and 0.3 ms. If you're trying to test the potential of
>> your cluster, you'll need multiple workers and clients.
>>
>
> Indeed. To add to this, you will need fast (high clock speed!) CPUs in
> order to get the latency down. The CPUs will need tuning as well, such
> as their power profiles and C-states.
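
On the multiple-workers point: to take my single client out of the
equation, I plan to drive several rados bench processes in parallel,
roughly like the sketch below. Pool name, runtime and counts are
placeholders for my setup, not anything blessed:

#!/usr/bin/env python3
# Sketch: run several rados bench clients in parallel so the test
# is not limited by a single client process. Adjust before use.
import subprocess

POOL = "benchpool"    # placeholder scratch pool, create it beforehand
SECONDS = 60
THREADS = 16          # concurrent ops per client (-t)
CLIENTS = 4           # parallel bench processes

procs = []
for i in range(CLIENTS):
    procs.append(subprocess.Popen([
        "rados", "bench", "-p", POOL, str(SECONDS), "write",
        "-t", str(THREADS),
        "--run-name", f"client-{i}",  # keep object names distinct
        "--no-cleanup",               # leave objects for seq/rand reads
    ]))

for p in procs:
    p.wait()
# clean up afterwards with: rados -p benchpool cleanup

Ideally those processes would run on separate client machines; on one
box they at least rule out a single bench process as the bottleneck.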

Thanks for the insight on the CPU side. I'm aware, and my current CPUs
are pretty old - but I'm also in the process of learning how to make
the right decisions when expanding. If all my time ends up being spent
on the client end, then buying NVMe drives does not help me at all, nor
do better CPUs in the OSDs.
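
To see where my hosts currently sit, a small sketch like this reads the
standard Linux cpufreq/cpuidle sysfs paths (which may be absent on some
kernels or VMs, hence the existence checks):

#!/usr/bin/env python3
# Sketch: report the cpufreq governor and cpuidle (C-state) usage
# per CPU from the standard Linux sysfs paths.
from pathlib import Path

for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    gov = cpu / "cpufreq" / "scaling_governor"
    if gov.exists():
        print(cpu.name, "governor:", gov.read_text().strip())
    for state in sorted((cpu / "cpuidle").glob("state*")):
        name = (state / "name").read_text().strip()
        usage = (state / "usage").read_text().strip()
        print(f"  {state.name} ({name}): entered {usage} times")

If the governor isn't "performance", or the deep C-states keep racking
up usage during a benchmark, cpupower (frequency-set -g performance)
is the usual knob to try first.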

> You won't get 1:1 performance from the SSDs on your RBD block devices.

I'm fully aware of that - Ceph / RBD / etc. comes with an awesome
feature package, and that flexibility carries overhead that eats into
the raw performance. But measuring the bare SSDs still delivers "upper
bounds", and I can work my way towards good from there.
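
For that baseline I use a single-threaded sync-write latency probe
along these lines. The target path is a placeholder - point it at a
scratch file on the SSD under test, not at anything you care about:

#!/usr/bin/env python3
# Sketch: single-threaded 4k synchronous write latency, a rough
# per-client floor for the SSD itself before any Ceph overhead.
import os, time

TARGET = "/mnt/ssd/probe.bin"   # placeholder scratch file
COUNT = 1000
buf = os.urandom(4096)

fd = os.open(TARGET, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
lat = []
try:
    for _ in range(COUNT):
        t0 = time.perf_counter()
        os.pwrite(fd, buf, 0)   # rewrite the same 4k block, synced
        lat.append(time.perf_counter() - t0)
finally:
    os.close(fd)

lat.sort()
print("avg %.3f ms  p50 %.3f ms  p99 %.3f ms" % (
    sum(lat) / COUNT * 1e3,
    lat[COUNT // 2] * 1e3,
    lat[int(COUNT * 0.99)] * 1e3))

Running the same probe against a file on a mapped RBD image then shows
how much of that budget the Ceph round trips consume.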

Thanks.

Jesper


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



