Re: CephFS read IO caching, where is it happening?

You may want to add this to your FIO recipe:

 * exec_prerun=echo 3 > /proc/sys/vm/drop_caches
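
For reference, a minimal sketch of a full job file with that line added (the job
name and the rw/bs/size values are taken from the FIO output quoted below; the
filename path is a made-up example, adjust to your setup):

  [test]
  ioengine=libaio
  iodepth=2
  direct=1
  buffered=0
  rw=randread
  bs=64k
  size=3G
  runtime=300
  ; hypothetical path on the CephFS mount point
  filename=/mnt/cephfs/fio-testfile
  ; drop the client page cache just before the job starts (fio must run as root)
  exec_prerun=echo 3 > /proc/sys/vm/drop_caches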

Regards,

On Fri, Feb 3, 2017 at 12:36 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>
>> On 2 February 2017 at 15:35, Ahmed Khuraidah <abushihab@xxxxxxxxx> wrote:
>>
>>
>> Hi all,
>>
>> I am still confused about my CephFS sandbox.
>>
>> When I run a simple FIO random-read test against a single 3G file, I get
>> suspiciously high IOPS:
>>
>> cephnode:~ # fio payloadrandread64k3G
>> test: (g=0): rw=randread, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio,
>> iodepth=2
>> fio-2.13
>> Starting 1 process
>> test: Laying out IO file(s) (1 file(s) / 3072MB)
>> Jobs: 1 (f=1): [r(1)] [100.0% done] [277.8MB/0KB/0KB /s] [4444/0/0 iops]
>> [eta 00m:00s]
>> test: (groupid=0, jobs=1): err= 0: pid=3714: Thu Feb  2 07:07:01 2017
>>   read : io=3072.0MB, bw=181101KB/s, iops=2829, runt= 17370msec
>>     slat (usec): min=4, max=386, avg=12.49, stdev= 6.90
>>     clat (usec): min=202, max=5673.5K, avg=690.81, stdev=361
>>
>>
>> But if I change the file size to 320G, it looks like I bypass the cache:
>>
>> cephnode:~ # fio payloadrandread64k320G
>> test: (g=0): rw=randread, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio,
>> iodepth=2
>> fio-2.13
>> Starting 1 process
>> Jobs: 1 (f=1): [r(1)] [100.0% done] [4740KB/0KB/0KB /s] [74/0/0 iops] [eta
>> 00m:00s]
>> test: (groupid=0, jobs=1): err= 0: pid=3624: Thu Feb  2 06:51:09 2017
>>   read : io=3410.9MB, bw=11641KB/s, iops=181, runt=300033msec
>>     slat (usec): min=4, max=442, avg=14.43, stdev=10.07
>>     clat (usec): min=98, max=286265, avg=10976.32, stdev=14904.82
>>
>>
>> For the random write test this behavior does not occur; the results are almost
>> the same in both cases - around 100 IOPS.
>>
>> So my question: could somebody please clarify where this caching likely
>> happens and how to manage it?
>>
>
> The page cache of your kernel. The kernel will cache the file in memory and perform read operations from there.
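>
> You can watch this from the client by checking the page cache size before and
> after a run; these are generic Linux commands, nothing CephFS-specific:
>
>   grep ^Cached /proc/meminfo   # page cache size, grows by roughly the file size read
>   free -h                      # the cached / buff-cache column shows the same growth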
>
> The best way is to reboot your client between test runs. Although you can drop the kernel caches, I always reboot to make sure nothing is cached locally.
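>
> If you do drop the caches instead, a minimal sketch of what to run on the
> client between test runs (as root):
>
>   sync                                # flush dirty pages first
>   echo 3 > /proc/sys/vm/drop_caches   # evict page cache, dentries and inodes
>
> fio's exec_prerun option can run the same command automatically at the start
> of each job.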
>
> Wido
>
>> P.S.
>> This is the latest SLES/Jewel-based one-node setup, which has:
>> 1 MON, 1 MDS (both data and metadata pools on a SATA drive) and 1 OSD (XFS on
>> SATA with the journal on SSD).
>> My FIO config file:
>> direct=1
>> buffered=0
>> ioengine=libaio
>> iodepth=2
>> runtime=300
>>
>> Thanks
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


