Re: Strange performance drop and low oss performance

I didn't see such a drop when performance testing 'rados bench 360 write -p 
rbd' on a 3x replicated (slow) hdd pool. Results stay near the average, 
with occasional dips to 90. But I guess in those cases the test hits an 
osd that is scrubbing or being used by other processes.
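
One way to check that theory (a sketch, assuming you can tolerate 
postponed scrubs for the duration of a bench run): see whether scrubs 
coincide with the dips, then disable scrubbing and rerun.

  # see if any pgs are scrubbing while the bench runs
  ceph -s
  # flag the cluster to skip scrubs and deep-scrubs, rerun the bench,
  # then unset the flags again
  ceph osd set noscrub
  ceph osd set nodeep-scrub
  rados bench 360 write -p rbd
  ceph osd unset noscrub
  ceph osd unset nodeep-scrub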


-----Original Message-----
Sent: 05 February 2020 16:34
To: quexian da
Cc: ceph-users
Subject: Re: Strange performance drop and low oss performance

On Wed 5 Feb 2020 at 16:19, quexian da <daquexian566@xxxxxxxxx> wrote:

> Thanks for your valuable answer!
> Is the write cache specific to ceph? Could you please provide some 
> links to the documentation about the write cache? Thanks!
>
>
It is all the possible caches used by ceph, by the device driver, the 
filesystem (in filestore+xfs), the controllers (emulated or real) and 
the hard disk electronics, i.e. anything between the benchmark software 
and the spinning disk write head (or its not-so-spinning equivalent on 
ssds).
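
For the lowest of those layers, the disk's own volatile write cache, you 
can at least inspect and toggle it from the OS. A sketch, assuming SATA 
drives; /dev/sdX is a placeholder for your device:

  # show whether the drive's write cache is enabled
  hdparm -W /dev/sdX
  # the kernel's view of the same setting ("write back" vs "write through")
  cat /sys/block/sdX/queue/write_cache
  # disable the drive cache for a test run (hdparm -W1 re-enables it)
  hdparm -W0 /dev/sdX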


> Do you have any idea about the slow oss speed? Is it normal that the 
> write performance of the object gateway is slower than that of the 
> rados cluster? Thanks in advance!
>
>
The object gateway (be it Swift or S3) goes over something that looks 
like http, so it will almost certainly have longer turnaround times and 
hence lower speed for single streams.
You may get past part of that overhead by running many parallel streams 
and counting the sum of the transfers, but it is no big surprise that 
individual writes get slower when they have to pass through an external 
box (the radosgw) using https instead of writing directly to the 
storage.
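
A rough way to see that effect is to time one stream against several 
parallel ones through the gateway. A sketch, assuming s3cmd is already 
configured for your radosgw and a bucket named "bench" exists:

  # make a 64 MB test object
  dd if=/dev/urandom of=/tmp/obj.bin bs=4M count=16
  # single stream
  time s3cmd put /tmp/obj.bin s3://bench/obj-single
  # eight parallel streams; compare the summed throughput
  time ( for i in $(seq 1 8); do
           s3cmd put /tmp/obj.bin s3://bench/obj-$i &
         done; wait )

The per-stream speed will still be lower than a direct rados write, but 
the aggregate usually gets much closer.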

--
May the most significant bit of your life be positive.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


