Drop in IOPS with fsync when using NVMe as cache

Kernel version: 4.4.0-62

Backing Device: Seagate Enterprise 7.2K RPM 2TB SAS (ST2000NX0433)

Cache Device: Intel DC P3700 NVMe 1.6TB

bcache cache mode: writeback

# make-bcache --block 4k  --bucket 2M  -B /dev/sdb  -C /dev/nvme0n1p2


I created the backing and cache devices with the above command. I was
expecting a very high number of IOPS both with and without fio's fsync option.
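
For completeness, the cache mode was set and checked through sysfs, roughly
as below (illustrative commands, not an exact transcript):

# echo writeback > /sys/block/bcache0/bcache/cache_mode
# cat /sys/block/bcache0/bcache/cache_mode
# cat /sys/block/bcache0/bcache/state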


fio command without fsync:

# fio -filename=/dev/bcache0 -direct=1 -ioengine=libaio -rw=randwrite
-bs=4k -name=mytest -iodepth=1 -runtime=30 -time_based

IOPS: 35k

fio command with fsync:

# fio -filename=/dev/bcache0 -direct=1 -ioengine=libaio -rw=randwrite
-bs=4k -name=mytest -iodepth=1 -runtime=30 -time_based -fsync=1

IOPS: 8.1k
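
In case it matters: fsync() on a raw block device is translated into a cache
flush, so the write-cache settings of both devices are probably relevant here.
A rough way to query them (assumes nvme-cli is installed; the sysfs path is
illustrative and may differ on other setups):

# nvme id-ctrl /dev/nvme0n1 | grep -i vwc
# cat /sys/block/sdb/device/scsi_disk/*/cache_type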

I tried the following combinations and saw the same results:

1. Block sizes of 512 and 4k, and bucket sizes of 512k, 2M, and 4M for the bcache devices
2. fio's -rw=write option also showed similar results
3. bcache writeback_percent of 10 or 50; sequential_cutoff of 64M; read_ahead_kb of 4k (all set via sysfs, roughly as sketched after this list)
4. Captured a blktrace for a single IO (also sketched below) and it didn't show anything interesting
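
For reference, the tunables in item 3 live under sysfs and the trace in item 4
was taken with blktrace, roughly as follows (illustrative commands; exact
values varied between runs):

# echo 10 > /sys/block/bcache0/bcache/writeback_percent
# echo 64M > /sys/block/bcache0/bcache/sequential_cutoff
# echo 4096 > /sys/block/bcache0/queue/read_ahead_kb
# blktrace -d /dev/bcache0 -d /dev/nvme0n1 -d /dev/sdb -w 10
# blkparse -i bcache0 -i nvme0n1 -i sdb | less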

I'm quite surprised by the drop in IOPS with fsync turned on. Is this
expected, or am I missing some basic setting?
I'd appreciate any help!
Thanks,
Shiva
--


