Re: Ceph Performance very bad even in Memory?!


 



Hi Sascha,

> Thanks for your response. I wrote this email early in the morning, having spent
> the whole night, and the last two weeks, benchmarking Ceph.

Yes, it is really bad that this is not advertised; lots of people waste time on this.
 
> Most blog entries, forum research and tutorials at this stage complain about
> the poor hardware and the latency that comes from the drives.

I can't believe that. Hardware is hardware; its latency and performance do not change.

> 
> > Read and do research before you do anything?
> > https://yourcmc.ru/wiki/Ceph_performance
> 
> I came across this wiki as well and went through all of its optimisations. But
> this wiki has a huge problem too - in the end it complains mostly
> about the disks. Nobody ever looks at the software.
> 

I think you misread this. If you look at the illustration it is quite clear: going from 3x100,000 IOPS to 500 with Ceph. That should be a 'warning'.
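
Just to make the arithmetic behind that drop explicit (the latency figures below are my own assumptions for illustration, not the wiki's measurements): at queue depth 1, IOPS is simply the inverse of per-operation latency, so a couple of milliseconds per replicated write already caps you at roughly 500 IOPS.

# Rough sketch with assumed latencies, not measurements.
def iops_at_qd1(latency_seconds):
    """At queue depth 1, IOPS is the inverse of per-operation latency."""
    return 1.0 / latency_seconds

raw_nvme = iops_at_qd1(100e-6)   # ~100 us device write latency -> ~10,000 IOPS
ceph_write = iops_at_qd1(2e-3)   # ~2 ms Ceph replicated write  -> ~500 IOPS

print(f"raw device, qd=1: {raw_nvme:,.0f} IOPS")
print(f"ceph write, qd=1: {ceph_write:,.0f} IOPS")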

> 
> Could you point me to why exactly the latency of single reads/writes could be
> that bad? Also, why is there only about 40k IO/s overall?

Ceph overhead. If I am not mistaken, I even saw advice on the mailing list that it really does not make much sense to go all-NVMe, because it hardly outperforms all-SSD.
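
A back-of-the-envelope way to see why (the overhead and device latencies below are assumptions, not measurements): once the software path adds a millisecond or more per operation, the device's own latency is only a small fraction of the total, so a faster device barely moves the result.

# Assumed numbers for illustration only.
software_overhead = 1.5e-3   # assumed Ceph/OSD/network path per write, in seconds

devices = {
    "SATA SSD": 80e-6,   # assumed device write latency
    "NVMe":     20e-6,
}

for name, dev_latency in devices.items():
    total = dev_latency + software_overhead
    print(f"{name:9s} ~{total * 1e3:.2f} ms per write -> ~{1 / total:,.0f} IOPS at qd=1")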

> It must be very badly performing OSD software.

I do not know if you can call it bad. I don't think coding the OSD application is as trivial as it may seem. However, there is an obvious gap compared to native performance. The good news is that they are working on optimizing the OSD code.

You should approach this differently: investigate what your requirements are, and then see whether Ceph can meet them. Software-defined storage always looks bad compared to native or RAID performance. The nice part starts when a node fails and things just keep running, etc.
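
A minimal sketch of that "requirements first" approach; all numbers are placeholders you would replace with your own targets and your own benchmark results:

# All numbers below are placeholders; fill in your own targets and benchmark results.
requirements = {
    "write_latency_ms_p99": 10.0,    # e.g. commits must finish within 10 ms
    "sustained_write_iops": 5000,
}

measured = {
    "write_latency_ms_p99": 4.2,     # from your own fio / rados bench runs
    "sustained_write_iops": 38000,
}

meets = (measured["write_latency_ms_p99"] <= requirements["write_latency_ms_p99"]
         and measured["sustained_write_iops"] >= requirements["sustained_write_iops"])

print("Ceph meets the stated requirements" if meets else "Ceph falls short; rethink the design")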



 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


