Re: Ceph Performance very bad even in Memory?!

Hello Marc,

Thanks for your response. I wrote that email early in the morning, after
spending the whole night, and the last two weeks, benchmarking Ceph.

The main reason I am spending days on this is that I am getting poor
performance with about 25 NVMe disks, and I went down a long, long road of
hundreds of benchmarks, tutorials, forum entries and more before ending up here.

To be sure that the disks themselves (the hardware) are not the issue, I am
finally using this in-memory test to see what I can expect from my servers'
and network's performance.
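For context, the RAM-drive OSDs were set up roughly along these lines (a
simplified sketch only; the rd_size and device names are placeholders, not my
exact values):

    # load the kernel RAM block device driver; rd_size is in KiB (~16 GiB here)
    modprobe brd rd_nr=1 rd_size=16777216

    # deploy one OSD on the RAM device
    ceph-volume lvm create --data /dev/ram0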

> Why are you testing with one osd? You do not need ceph if you are only
> having one disk.

As I wrote, there are 3 servers, each with a single OSD on an attached RAM
drive. You are completely right, and along the way I also read about attaching
multiple OSDs to a single drive (which is, if anything, further proof that
there are huge bottlenecks in Ceph).

But:
This does not and cannot improve latency, and that is the problem and
bottleneck here.

Anyway, in my tests I also attached multiple RAM drives as well as multiple
partitions per RAM drive, roughly as sketched below. Sadly, the result is the
same for both latency and bandwidth.
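Concretely, the multi-OSD variant was along these lines (again only a sketch,
with a placeholder device name and count):

    # carve several OSDs out of the same RAM device (here 4 per device)
    ceph-volume lvm batch --osds-per-device 4 /dev/ram0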

At this stage most blog posts, forum threads and tutorials complain about
the poor hardware and the latency coming from the drives. I am not accepting
this simple excuse for everything, and to exclude exactly this noise I am
using RAM drives and opened my question about the RAM drive performance.


> You have to look at ceph differently. What kind of implementation are you
> going to use if you have 100 disks spread across 8 servers.

I am with you. For spinning disks (which I use as well) this is a beautiful
thing. But in an all-flash environment the performance is already bottlenecked
once you put more than 4-5 drives into a single server: 40 Gbit/s is roughly
5 GB/s, which today's NVMe drives saturate very quickly. That is, by the way,
the cluster I planned (5 NVMe / 40 Gbit), and I am now struggling with very
bad performance, or let's say performance so bad that I did not expect it from
the theory behind Ceph.

> Read and do research before you do anything?
> https://yourcmc.ru/wiki/Ceph_performance

That wiki was on my path as well, and I went through all of its optimisations.
But it has the same huge problem: in the end it mostly blames the disks.
Nobody ever looks at the software.

The only person I found who does is this one here:
https://chowdera.com/2021/07/20210714124517948b.html

Sadly, it seems he fixed some BlueStore performance bottlenecks, but that work
does not apply to the current version of Ceph.

Looking at your wiki reference again: with blazing-fast RAM drives, a 40 Gbit
network, servers whose CPU and network are never under load, and every Ceph
service effectively running in memory, could you point out to me why exactly
the latency of single reads/writes is that bad? And why there are only about
40k IOPS overall?
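
In case it helps to pin down what I mean by single-op latency, this is the
kind of measurement I am talking about (pool name, OSD id and sizes are only
examples):

    # queue depth 1, 4 KiB writes against a test pool
    rados bench -p testpool 30 write -t 1 -b 4096 --no-cleanup

    # OSD-side read/write latency counters
    ceph daemon osd.0 perf dump | grep -E -A 3 'op_w_latency|op_r_latency'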

It must be the OSD software itself that is performing badly...

Marc <Marc@xxxxxxxxxxxxxxxxx> wrote on Sun., 30 Jan. 2022, 12:22:

>
>
> >
> > The benchmark was monitored by using this tool here:
> > https://github.com/ceph/ceph/blob/master/src/tools/histogram_dump.py
> > also by looking at the raw data of "ceph daemon osd.7 perf dump".
>
> Why are you testing with one osd? You do not need ceph if you are only
> having one disk.
>
> You have to look at ceph differently. What kind of implementation are you
> going to use if you have 100 disks spread across 8 servers.
>
> >
> > *Result:*
> > Either there is something really going wrong or ceph has huge bottlenecks
> > inside the software which should be solved...
> >
>
> Sorry for not looking at your results, I think I can imagine what they are.
>
>
> >
> > What did I do wrong? What's going on here?!
> > Please help me out!
>
> Read and do research before you do anything?
> https://yourcmc.ru/wiki/Ceph_performance
>
> But I have to admit performance should be much more clearly advertised. I
> already suggested years ago at such a Ceph day to publish general test
> results so new users know what to expect.
>
> So now that you know what is 'bad', I promise everything else you do with
> ceph (that is relevant) will put a smile on your face. Happy cephing!!! ;)
>
>
>
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


