Re: Drop in Iops with fsync when using NVMe as cache

On Tue, Feb 21, 2017 at 10:48:06AM -0600, shiva rkreddy wrote:

> fio command without fsync:
> 
> # fio -filename=/dev/bcache0 -direct=1 -ioengine=libaio -rw=randwrite
> -bs=4k -name=mytest -iodepth=1 -runtime=30 -time_based
> 
> iops: 35k
> 
> fio command with fsync:
> 
> # fio -filename=/dev/bcache0 -direct=1 -ioengine=libaio -rw=randwrite
> -bs=4k -name=mytest -iodepth=1 -runtime=30 -time_based -fsync=1
> 
> iops: 8.1k

> I'm quite surprised by the drop in iops with fsync turned on. Is this
> expected or am I missing some basic setting?

It's not uncommon for fsync to have a huge performance impact.
Without fsync, a write can complete as soon as the data reaches a
volatile cache (with -direct=1 that's the device's write cache rather
than the page cache); with -fsync=1, every single write is followed
by a cache flush to stable media, and fio waits for that flush before
issuing the next I/O.
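
As a quick sanity check (assuming the NVMe shows up as nvme0n1;
adjust for your system), you can see whether the kernel treats the
device as having a volatile write cache:

# cat /sys/block/nvme0n1/queue/write_cache
write back

If it reports "write back", each fsync turns into an explicit cache
flush, and that flush latency is what dominates the per-write time at
iodepth=1.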

May I suggest running the same tests against the NVMe device
directly, without bcache in the path? You're likely to observe a
similar pattern.
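
Something like this, with the device name being just a placeholder
for whatever your NVMe is called:

# fio -filename=/dev/nvme0n1 -direct=1 -ioengine=libaio -rw=randwrite
-bs=4k -name=mytest -iodepth=1 -runtime=30 -time_based -fsync=1

(Careful: this writes to the raw device and will destroy any data on
it.) If the raw NVMe shows a comparable gap between the fsync and
non-fsync runs, the slowdown is the cost of the flushes themselves,
not something bcache adds.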

-- 
Vojtech Pavlik
Director SUSE Labs