Re: Ceph Performance very bad even in Memory?!


 




> 
> The benchmark was monitored by using this tool here:
> https://github.com/ceph/ceph/blob/master/src/tools/histogram_dump.py also
> by looking at the raw data of  "ceph daemon osd.7 perf dump".

Why are you testing with one OSD? You do not need Ceph if you only have one disk.

You have to look at Ceph differently: what kind of setup are you going to run when you have 100 disks spread across 8 servers?
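To get numbers that reflect the cluster rather than one daemon, the usual approach is to benchmark through a pool with `rados bench` instead of reading a single OSD's perf counters. A minimal sketch, assuming a test cluster where you can create and delete pools (the pool name "testbench" and the PG count of 128 are arbitrary examples, not values from the original post):

```shell
# Hedged sketch: benchmark the cluster as a whole, not a single OSD.
# "testbench" and the PG count 128 are example values.
ceph osd pool create testbench 128

# 30-second write benchmark; keep the objects for the read test.
rados bench -p testbench 30 write --no-cleanup

# Sequential read of the objects just written.
rados bench -p testbench 30 seq

# Remove the benchmark objects and the test pool.
rados -p testbench cleanup
ceph osd pool delete testbench testbench --yes-i-really-really-mean-it
```

This exercises the full client-to-OSD path (network, replication, journaling), which is what matters once the data is spread across many disks and servers.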

> 
> *Result:*
> Either there is something really going wrong or ceph has huge bottlenecks
> inside the software which should be solved...
> 

Sorry for not looking at your results; I think I can imagine what they are.


> 
> What did I do wrong? What's going on here?!
> Please help me out!

Read and do research before you do anything? 
https://yourcmc.ru/wiki/Ceph_performance

But I have to admit, performance expectations should be advertised much more clearly. Years ago at a Ceph Day I already suggested publishing general test results so new users know what to expect.

So now that you know what is 'bad', I promise everything else you do with Ceph (that is relevant) will put a smile on your face. Happy cephing!!! ;)





_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


