Testing a node with fio - strange results to me

Hello community,

I need your help to understand a little bit more about the current MDS architecture.
I have created a single-node CephFS deployment and tried to test it with fio, using two file sizes: 3 GB and 320 GB. My question is why I see around 1k+ IOPS when performing random reads from the 3 GB file, compared to the expected ~100 IOPS from the 320 GB file. Could somebody clarify where read buffering/caching happens here and how to control it?
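
For reference, my understanding is that a fio job along these lines should take the client page cache out of the picture; direct=1 (O_DIRECT) and invalidate=1 are the relevant knobs, and the filename, size and queue depth below are just placeholders for my setup:

  [randread-test]
  ioengine=libaio        ; async I/O engine
  rw=randread            ; random reads
  bs=4k                  ; 4 KiB blocks
  direct=1               ; open with O_DIRECT, bypassing the page cache
  invalidate=1           ; drop cached pages for the file before the run
  iodepth=16
  size=3g                ; placeholder: 3g or 320g in my tests
  filename=/mnt/cephfs/testfile   ; placeholder path on the CephFS mount
  runtime=60
  time_based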

A little bit about the setup: an Ubuntu 14.04 server running Jewel, with one MON, one MDS (default parameters, except mds_log = false), and one OSD backed by a SATA drive (XFS) for data and an SSD drive for journaling. No RAID controller and no pool tiering are used.
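
If it matters, my understanding is that the kernel page cache on the client node can be cleared between runs like this (assuming the kernel CephFS client is mounted on the same node):

  sync; echo 3 | sudo tee /proc/sys/vm/drop_caches   # flush dirty pages, then drop page cache, dentries and inodes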

Thanks
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
