Sorry, Enter pressed :) Continuing: no, it is not the only way to test, but it
depends on what you want to use Ceph for.

2014-08-26 15:22 GMT+04:00 Irek Fasikhov <malmyzh at gmail.com>:

> For me, the bottleneck is single-threaded operation. Writes are more or
> less solved by enabling the rbd cache, but there are still problems with
> reads. I think those can be addressed with a cache pool, but I have not
> tested it.
>
> It follows that the more threads, the higher the read and write speed.
> In reality, though, it varies.
>
> Throughput and the number of operations depend on many factors, such as
> network latency.
>
> Example tests; pay special attention to the charts:
>
> https://software.intel.com/en-us/blogs/2013/10/25/measure-ceph-rbd-performance-in-a-quantitative-way-part-i
> and
> https://software.intel.com/en-us/blogs/2013/11/20/measure-ceph-rbd-performance-in-a-quantitative-way-part-ii
>
>
> 2014-08-26 15:11 GMT+04:00 yuelongguang <fastsync at 163.com>:
>
>> Thanks, Irek Fasikhov.
>> Is it the only way to test ceph-rbd? An important aim of the test is to
>> find where the bottleneck is: qemu, librbd, or ceph.
>> Could you share your test results with me?
>>
>> Thanks
>>
>>
>> On 2014-08-26 04:22:22, "Irek Fasikhov" <malmyzh at gmail.com> wrote:
>>
>> Hi.
>> I and many other people use fio.
>> For Ceph RBD it has a special engine:
>> https://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
>>
>>
>> 2014-08-26 12:15 GMT+04:00 yuelongguang <fastsync at 163.com>:
>>
>>> Hi all,
>>>
>>> I am planning to run a test on Ceph covering performance, throughput,
>>> scalability, and availability.
>>> In order to get a complete set of results, I hope you can all give me
>>> some advice. In return, I can send the results to you if you like.
>>> For each test category (performance, throughput, scalability,
>>> availability), do you have test ideas and test tools?
>>> Basically, I already know some tools for testing throughput and IOPS,
>>> but please tell me which tools you prefer and what results you expect.
>>>
>>> Thanks very much
>>>
>>>
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users at lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
>>
>> --
>> With respect, Fasikhov Irek Nurgayazovich
>> Tel.: +79229045757
>>
>
>
> --
> With respect, Fasikhov Irek Nurgayazovich
> Tel.: +79229045757
>

--
With respect, Fasikhov Irek Nurgayazovich
Tel.: +79229045757
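For reference, a minimal fio job file for the rbd engine mentioned above might
look like the following. This is a sketch, not a tested recipe: it assumes fio
was built with rbd support (--enable-rbd), and the pool name `rbd` and image
name `fio_test` are placeholders you would create beforehand (e.g.
`rbd create fio_test --size 2048 --pool rbd`).

```ini
; Sketch of an fio job file using the rbd ioengine.
; Assumed names: pool "rbd", image "fio_test", client "admin".
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio_test
invalidate=0
rw=randwrite
bs=4k
runtime=60

; One job at queue depth 32; vary iodepth/numjobs to probe the
; thread-scaling behaviour discussed in this thread.
[rbd_iodepth32]
iodepth=32
```

Run it with `fio rbd.fio`; repeating the run with `rw=randread` and different
`iodepth` values is one way to compare read vs. write behaviour.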
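Since the rbd cache comes up above: it is enabled per client in ceph.conf. A
minimal sketch, assuming the defaults around this era of Ceph (exact defaults
vary by release, so treat the sizes as illustrative):

```ini
; ceph.conf client section enabling the RBD writeback cache.
[client]
rbd cache = true
; Illustrative sizes; check your release's documented defaults.
rbd cache size = 33554432        ; 32 MiB cache per client
rbd cache max dirty = 25165824   ; writes block once this much is dirty
; Stay in writethrough mode until the guest issues a flush,
; so guests without flush support are not at risk.
rbd cache writethrough until flush = true
```

Note that the fio rbd engine talks to librbd directly, so this cache setting
affects those results too, not just qemu guests.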