Hi Udo, thanks for the reply. I was already starting to think my message had missed the list.
I'm not sure I understand correctly. Do you mean "rbd cache = true"? If so, that is RBD client-side cache behavior, not something on the OSD side, isn't it?
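For context, the client-side options I am referring to look like this in ceph.conf (a sketch using the stock Jewel defaults as I understand them, not settings from my cluster):

    [client]
    rbd cache = true
    # 32 MB client-side cache (the Jewel default)
    rbd cache size = 33554432
    rbd cache writethrough until flush = true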
Regards
Ahmed
On Sun, Jan 22, 2017 at 6:45 PM, Udo Lembke <ulembke@xxxxxxxxxxxx> wrote:
Hi,
I don't use MDS, but I think it's the same as with RBD - the data that
has been read is cached on the OSD nodes.
The 4 MB chunks of the 3G file fit completely into the cache; those of
the 320G file do not.
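For a rough sanity check (my own arithmetic, assuming the default 4 MB object size): a 3G file maps to about 768 RADOS objects, roughly 3 GB of data in total, which fits easily into the page cache of a single OSD node; 320G worth of objects does not. You can confirm the page cache is responsible by dropping it on the OSD node between fio runs:

    # run on the OSD node between test runs (needs root)
    sync                               # flush dirty pages first
    echo 3 > /proc/sys/vm/drop_caches  # drop page cache, dentries and inodes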
Udo
On 18.01.2017 07:50, Ahmed Khuraidah wrote:
> Hello community,
>
> I need your help to understand the current MDS architecture a little
> better.
> I have created a single-node CephFS deployment and tried to test it with fio,
> using two file sizes: 3G and 320G. My question is why I get around
> 1k+ IOPS when performing random reads from the 3G file, compared to
> the expected ~100 IOPS from the 320G file. Could somebody clarify
> where read buffering/caching takes place here and how to control it?
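>
> For reference, the fio job looked roughly like the sketch below (the job
> name, mount point, runtime and iodepth are illustrative, not my exact
> parameters):
>
>     # minimal random-read job; paths and names are hypothetical
>     [cephfs-randread]
>     ioengine=libaio
>     direct=1
>     rw=randread
>     bs=4k
>     iodepth=16
>     # size=3g for the small file, size=320g for the large one
>     size=3g
>     directory=/mnt/cephfs
>     runtime=60
>     time_based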
>
> A little bit about the setup: an Ubuntu 14.04 server running a
> Jewel-based cluster consisting of one MON, one MDS (default
> parameters, except mds_log = false) and one OSD using a SATA drive
> (XFS) for data and an SSD drive for journaling. No RAID controller
> and no pool tiering are used.
>
> Thanks
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com