Re: Ceph Performance very bad even in Memory?!

Hello Marc,

> I think you misread this. If you look at the illustration it is quite
> clear, going from 3x100.000 iops to 500 with ceph. That should be a
> 'warning'.


In my case it's dropping from 5,000,000 to ~5,000 IOPS per server.
At that rate I could just as well use SD cards for my Ceph cluster. The
bottleneck is definitely the OSD software.
It's impressive how badly it performs.
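
For reference, a minimal sketch of the kind of QD=1, 4K write probe I mean
(python-rados; the pool name "bench" and the object count are placeholders,
not my actual benchmark setup):

    import time
    import rados

    # Connect using the local ceph.conf; "bench" is a placeholder pool name.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('bench')

    payload = b'\0' * 4096
    n = 1000
    start = time.monotonic()
    for i in range(n):
        # One synchronous 4K write per object, queue depth 1.
        ioctx.write_full('bench_obj_%d' % i, payload)
    elapsed = time.monotonic() - start

    print('avg latency: %.3f ms, ~%d IOPS at QD=1'
          % (elapsed / n * 1000, int(n / elapsed)))

    ioctx.close()
    cluster.shutdown()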


> Ceph overhead. If I am not mistaken, I was even seeing some advice on the
> mailing list that it really does not make much sense going for all NVMe
> because it hardly outperforms all SSD.


If it's performing THAT badly, I wonder whether it makes sense to use Ceph
at all at this stage. I also wonder how OpenStack can live with such bad
performance.


> I do not know if you can call it bad. I don't think coding the osd
> application is as trivial as it may seem. However there is an obvious
> difference compared to native performance. Good news is they are working on
> optimizing this osd code.
> You should approach this differently. You should investigate what your
> requirements are, and then see if ceph can meet those. SDS are always bad
> compared to native or raid performance. The nice stuff starts when a node
> fails and things just keep running etc.


I'm aware that there's plenty of overhead in any SDS.
But the current overhead can't be explained by that, especially given the
very low RTT - just a couple of ns - and the all-in-memory setup I'm
currently benchmarking against.
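
To make that concrete, the back-of-the-envelope arithmetic (assuming queue
depth 1 for simplicity; the RTT figure is just the order of magnitude stated
above):

    # Assuming QD=1 for simplicity: IOPS ~= 1 / per-op latency.
    observed_iops = 5_000          # per server, as measured above
    net_rtt_s = 2e-9               # "a couple of ns" RTT, as stated above
    per_op_latency_s = 1.0 / observed_iops          # ~200 microseconds
    non_network_share = 1.0 - net_rtt_s / per_op_latency_s
    print('per-op latency: %.0f us, spent outside the network: %.3f%%'
          % (per_op_latency_s * 1e6, non_network_share * 100))

Practically all of those ~200 microseconds per I/O are spent in the
software stack, not on the wire or in memory.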

I was expecting far more from Ceph's performance, especially because it's
used in so many projects.
In theory, sharding should bring plenty of performance (even while keeping
full consistency), and I'm pretty sure it's just a matter of tweaking and
optimizing the code to get a huge performance gain.
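
To illustrate what I mean by sharding: different object names hash to
different PGs, which CRUSH spreads across the OSDs, so independent writes
should be able to proceed in parallel. A rough sketch with python-rados
async writes (pool name "bench" again a placeholder):

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('bench')   # placeholder pool name

    payload = b'\0' * 4096
    # Each object name hashes to its own PG, so these writes fan out
    # across the OSDs instead of queueing behind one another.
    completions = [ioctx.aio_write_full('shard_obj_%d' % i, payload)
                   for i in range(256)]
    for c in completions:
        c.wait_for_complete()
    print('256 parallel 4K writes completed')

    ioctx.close()
    cluster.shutdown()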

> Good news is they are working on optimizing this osd code.

I also saw that they are working on Seastar, and on top of that I saw
benchmarks where it performs about as badly as BlueStore.

Sadly, that's exactly what I was expecting.
The only way to get this tuned is to invest plenty more time into it.

On Sun, Jan 30, 2022 at 1:57 PM Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:

> Hi Sascha,
>
> > Thanks for your response. Wrote this email early in the morning, spending
> > the whole night and the last two weeks on benchmarking ceph.
>
> Yes, it is really bad that this is not advertised; lots of people waste
> time on this.
>
> > Most blog entries, forum research and tutorials at this stage now
> > complain about the poor hardware and latency which comes from the
> > drives.
>
> Can't believe that. Hardware is hardware and latency/performance does not
> change.
>
> >
> > > Read and do research before you do anything?
> > https://yourcmc.ru/wiki/Ceph_performance
> >
> > I came across this wiki as well. I went through all of its optimisations.
> > But this wiki has a huge problem as well - in the end it complains mostly
> > about the disks. Nobody ever looks at the software.
> >
>
> I think you misread this. If you look at the illustration it is quite
> clear, going from 3x100.000 iops to 500 with ceph. That should be a
> 'warning'.
>
> >
> > Could you point me to why exactly the latency of single reads/writes
> > could be that bad? Also why there's only 40k IO/s overall?
>
> Ceph overhead. If I am not mistaken, I was even seeing some advice on the
> mailing list that it really does not make much sense going for all NVMe
> because it hardly outperforms all SSD.
>
> > It must be very badly performing OSD software.
>
> I do not know if you can call it bad. I don't think coding the osd
> application is as trivial as it may seem. However there is an obvious
> difference compared to native performance. Good news is they are working on
> optimizing this osd code.
>
> You should approach this differently. You should investigate what your
> requirements are, and then see if ceph can meet those. SDS are always bad
> compared to native or raid performance. The nice stuff starts when a node
> fails and things just keep running etc.
>
>
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


