Re: rados_read versus rados_aio_read performance

On 2017-10-01 16:47, Alexander Kushnirenko wrote:

Hi, Gregory!
 
Thanks for the comment.  I compiled a simple program (from the librados examples) to play with write speed measurements. The underlying "write" functions are:
rados_write(io, "hw", read_res, 1048576, i*1048576);
rados_aio_write(io, "foo", comp, read_res, 1048576, i*1048576);
 
So I write 1MB blocks to CEPH consecutively.  What I measured is that rados_aio_write gives me about 5 times the speed of rados_write.  I make 128 consecutive writes in a for loop to create an object of the maximum allowed size of 132MB. Roughly, the aio version of that loop looks like the sketch below.
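 
A minimal sketch of the aio write loop (error handling omitted; the pool name "testpool" and the fill buffer are placeholders, not taken from the original program; all completions are queued first and waited on at the end):

#include <rados/librados.h>
#include <string.h>

#define BLOCK   1048576
#define NBLOCKS 128

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;
    rados_completion_t comps[NBLOCKS];
    static char buf[BLOCK];
    int i;

    memset(buf, 'x', sizeof(buf));

    rados_create(&cluster, NULL);          /* connect as client.admin        */
    rados_conf_read_file(cluster, NULL);   /* read ceph.conf from defaults   */
    rados_connect(cluster);
    rados_ioctx_create(cluster, "testpool", &io);

    /* queue all 128 writes without waiting for each one to commit */
    for (i = 0; i < NBLOCKS; i++) {
        rados_aio_create_completion(NULL, NULL, NULL, &comps[i]);
        rados_aio_write(io, "foo", comps[i], buf, BLOCK, (uint64_t)i * BLOCK);
    }

    /* then wait for all of them to be safe on disk */
    for (i = 0; i < NBLOCKS; i++) {
        rados_aio_wait_for_safe(comps[i]);
        rados_aio_release(comps[i]);
    }

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}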
 
Now, if I do consecutive writes from a client into CEPH storage, what is the recommended buffer size? (I'm trying to debug a very poor Bareos write speed of just 3MB/s to CEPH.)
 
Thank you,
Alexander

On Fri, Sep 29, 2017 at 5:18 PM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
It sounds like you are doing synchronous reads of small objects here. In that case you are dominated by the per-op latency rather than the throughput of your cluster. Using aio or multiple threads will let you parallelize requests.
-Greg
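 
For example, instead of issuing one blocking rados_read at a time, several aio reads can be kept in flight. A minimal sketch, assuming an already-open io context named io and 1MB objects with made-up names obj-0 .. obj-15:

/* fragment only: assumes <rados/librados.h>, <stdio.h>, and an open io ctx */
enum { NREADS = 16, OBJ_SIZE = 1048576 };
static char bufs[NREADS][OBJ_SIZE];       /* must stay valid until complete */
rados_completion_t comps[NREADS];
char oid[32];
int i;

/* issue all reads up front so they run in parallel across OSDs */
for (i = 0; i < NREADS; i++) {
    snprintf(oid, sizeof(oid), "obj-%d", i);
    rados_aio_create_completion(NULL, NULL, NULL, &comps[i]);
    rados_aio_read(io, oid, comps[i], bufs[i], OBJ_SIZE, 0);
}

/* collect the results */
for (i = 0; i < NREADS; i++) {
    rados_aio_wait_for_complete(comps[i]);   /* data is now in bufs[i] */
    rados_aio_release(comps[i]);
}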
On Fri, Sep 29, 2017 at 3:33 AM Alexander Kushnirenko <kushnirenko@xxxxxxxxx> wrote:
Hello,
 
We see very poor performance when reading/writing rados objects.  The speed is only 3-4MB/s, compared to 95MB/s in rados benchmarking.
 
When you look at the underlying code, it uses the librados and libradosstriper libraries (both show poor performance), and it calls the rados_read and rados_write functions.  The examples, however, recommend rados_aio_read/write.
 
Could this be the reason for poor performance?
 
Thank you,
Alexander.

 

Even the 95MB/s rados benchmark may still be indicative of a problem: by default it runs 16 concurrent operations, so it can be writing to 16 different OSDs simultaneously.  To get a value closer to what you are doing, try rados bench with 1 thread and a 1M block size (the default is 4M), such as

rados bench -p testpool -b 1048576 30 write -t 1 --no-cleanup

 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
