SSD IO performance

On 05/26/2015 08:53 AM, lixuehui555 at 126.com wrote:
>
> Hi ALL:

Hi!

>      I've built a ceph 0.8 cluster with 2 nodes, each containing 5
> OSDs (SSD), on a 100MB/s network. Testing an RBD device with the
> default configuration, the result is not ideal. Apart from the random
> r/w capability of the SSDs, what should be changed to get better
> performance?
>      2 nodes, 5 OSDs (SSD) each, 1 mon, 32GB RAM
>      100MB/s network
> Right now the whole cluster does just 500 IOPS. Should we change the
> filestore or journal part? Thanks for any help!
> ------------------------------------------------------------------------
> lixuehui555 at 126.com


For writes, it's important that the SSD be able to process O_DSYNC 
writes quickly if it's being used for journal writes.  The best ones 
have power-loss-protection that allows them to ignore flush requests and 
continue writing without pause.  Be careful though because not all SSDs 
do this properly.
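
If you just want a quick sanity check of raw O_DSYNC write behaviour on 
a candidate journal device, a small fio run like the one below is a 
common approach (a rough sketch; /dev/sdX is only a placeholder, and the 
test will destroy any data on that device):

fio --name=dsync-test --filename=/dev/sdX \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 \
    --direct=1 --sync=1 --runtime=60 --time_based

Drives with proper power-loss protection tend to sustain high IOPS here, 
while many consumer SSDs drop to a few hundred.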

You can attempt to simulate how the OSD journal performs writes using 
fio.  Keep in mind that each write carries a small amount of header 
information, is padded out to the nearest 4K boundary, and the journal 
may batch several client writes into one larger write.  For 4K client 
writes on SSDs, for instance, the writes that actually hit the journal 
device can be considerably larger.  Keeping that in mind, in the 4K 
case, something like this might be a good approximation:

fio --name=journal-sim --filename=$device_name \
    --rw=write --ioengine=libaio --numjobs=1 --direct=1 --sync=1 \
    --runtime=300 --bssplit=84k/20:88k/20:92k/20:96k/20:100k/20 \
    --iodepth=$((9 * journals_per_device))

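For example, with two OSD journals sharing one SSD (hypothetical values; 
again, running fio against the raw device destroys any data on it):

journals_per_device=2
device_name=/dev/sdX    # placeholder for the journal SSD

which makes the effective iodepth 18 for the run above.
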

Beyond that, we also spent some time looking at performance on a couple 
of different SSDs during the hammer development process:

http://nhm.ceph.com/Ceph_SSD_OSD_Performance.pdf
http://nhm.ceph.com/Ceph_Hammer_OSD_Shard_Tuning_PCIe.pdf

Those are just single-OSD tests.  Once the network is involved things 
can get more complicated.  Still, there's some useful data there.  Look 
in the appendices for the configuration used during testing.  The basic 
gist of it is that these things may help (see the rough ceph.conf sketch 
after this list):

1) Disabling in-memory logging
2) Disabling authentication
3) Tweaking the OSD sharding (likely most relevant for very fast SSDs)
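
As a very rough sketch of what that looks like in ceph.conf (the option 
names are the hammer-era ones; the values are only examples, not the 
exact tested configuration, so check the appendices in the PDFs above 
for the real settings):

[global]
    # 2) disable authentication
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none

    # 1) turn off in-memory logging for the chattier subsystems
    #    (the second number is the in-memory log level)
    debug_ms = 0/0
    debug_osd = 0/0
    debug_filestore = 0/0
    debug_journal = 0/0

[osd]
    # 3) tweak the sharded op work queue; example values only
    osd_op_num_shards = 10
    osd_op_num_threads_per_shard = 2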


