Re: Testing a node by fio - strange results to me

Hi Ahmed,

No, I mean the normal Linux cache (the page cache) on the OSD nodes.

Once a file has been read, its data stays in the cache (how long depends
on memory and activity), so the next read will be very fast.
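
If you want to rule that out while benchmarking, you can flush the page
cache on the OSD node between fio runs. A minimal sketch (assuming root
on the OSD node; drop_caches=3 drops the page cache plus dentries and
inodes):

    # flush dirty pages to disk, then drop the caches
    sync
    echo 3 > /proc/sys/vm/drop_caches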

But there can be further caching on top of that (IMHO the CephFS client
caches too).
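
To take the client-side cache out of the measurement, fio can open the
file with O_DIRECT, which should bypass the client page cache on the
CephFS mount. A sketch - the mount point and file name are just examples,
adjust to your setup:

    fio --name=randread-3g --filename=/mnt/cephfs/test3g --size=3G \
        --rw=randread --bs=4k --direct=1 --ioengine=libaio --iodepth=16 \
        --runtime=60 --time_based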


Udo
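
PS: a rough back-of-the-envelope for the quoted numbers below: a 3G file
is only about 768 of the 4MB objects, so after the first pass the whole
working set can sit in the OSD node's page cache (assuming the node has a
few GB of free RAM), while a 320G working set cannot - most random reads
from the big file have to hit the SATA disk, which gives roughly the
~100 IOPS you expected.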


On 23.01.2017 07:07, Ahmed Khuraidah wrote:
> Hi Udo, thanks for the reply; I was already thinking my message had
> missed the list.
> Not sure if I understand correctly. Do you mean "rbd cache = true"? If
> yes, then this is RBD client cache behavior, not something on the OSD
> side, isn't it?
>
>
> Regards
> Ahmed
>
>
> On Sun, Jan 22, 2017 at 6:45 PM, Udo Lembke <ulembke@xxxxxxxxxxxx> wrote:
>
>     Hi,
>
>     I don't use MDS, but I think it's the same as with RBD - the data
>     that has been read is cached on the OSD nodes.
>
>     The 4MB chunks of the 3G file fit completely in the cache; those of
>     the other file do not.
>
>
>     Udo
>
>
>     On 18.01.2017 07:50, Ahmed Khuraidah wrote:
>     > Hello community,
>     >
>     > I need your help to understand a bit more about the current MDS
>     > architecture.
>     > I have created a single-node CephFS deployment and tried to test
>     > it with fio.
>     > I used two file sizes, 3G and 320G. My question is why I get
>     > around 1k+ IOPS when performing random reads from the 3G file,
>     > compared to the expected ~100 IOPS from the 320G file. Could
>     > somebody clarify where read buffering/caching happens here and
>     > how to control it?
>     >
>     > A little bit about the setup: an Ubuntu 14.04 server running
>     > Jewel with one MON, one MDS (default parameters, except
>     > mds_log = false) and one OSD using a SATA drive (XFS) for data
>     > and an SSD drive for the journal. No RAID controller and no pool
>     > tiering are used.
>     >
>     > Thanks
>     >
>     >
>     >
>     > _______________________________________________
>     > ceph-users mailing list
>     > ceph-users@xxxxxxxxxxxxxx
>     > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


