Re: [PATCH] sysctl: Add a feature to drop caches selectively

On 06/27/2014 04:55 AM, Dave Chinner wrote:
On Thu, Jun 26, 2014 at 02:10:28PM +0200, Bernd Schubert wrote:
On 06/26/2014 01:57 PM, Lukáš Czerner wrote:
On Thu, 26 Jun 2014, Artem Bityutskiy wrote:
On Thu, 2014-06-26 at 12:36 +0200, Bernd Schubert wrote:
On 06/26/2014 08:13 AM, Artem Bityutskiy wrote:
On Thu, 2014-06-26 at 11:06 +1000, Dave Chinner wrote:
Your particular use case can be handled by directing your benchmark
at a filesystem mount point and unmounting the filesystem in between
benchmark runs. There is no need to add kernel functionality for
something that can be so easily achieved by other means, especially
in benchmark environments where *everything* is tightly controlled.

If I were a benchmark writer, I would not be willing to run it as root
to be able to mount/unmount, and I would not be willing to require the
customer to create special dedicated partitions for the benchmark,
because this is too user-unfriendly. Or am I making incorrect assumptions?

But why a sysctl then? And I don't see the point of that at all; why
can't the benchmark use posix_fadvise(POSIX_FADV_DONTNEED)?
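For reference, a minimal sketch of what that call looks like (the file
name is made up, and note that POSIX_FADV_DONTNEED is advisory only,
so a zero return does not guarantee the pages were actually evicted):

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd = open("testfile", O_RDONLY);    /* hypothetical test file */
        if (fd < 0) {
                perror("open");
                return 1;
        }
        /* Flush dirty pages first; DONTNEED skips pages under writeback. */
        fsync(fd);
        /* len == 0 means "to the end of the file" */
        int ret = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
        if (ret != 0)
                fprintf(stderr, "posix_fadvise: error %d\n", ret);
        close(fd);
        return 0;
}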

The latter question was answered - people want a way to drop caches for
a file. They need a method which guarantees that the caches are dropped.
They do not need an advisory method which does not give any guarantees.

I'm not sure a benchmark really needs that so badly that
FADV_DONTNEED isn't sufficient.
Personally, I would also like to know whether FADV_DONTNEED succeeded.
For instance, 'ql-fstest' checks whether the written pattern made it
to the block device, and currently it does not know whether the data
really has been dropped from the page cache. As it reads files several
times this is not critical, it would just be nice to have - nothing
worth adding a new syscall for.
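For what it's worth, on Linux one can approximate that check with
mincore(2). A minimal sketch, with the file name and the lack of
cleanup on error paths purely illustrative:

#define _GNU_SOURCE             /* mincore() */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        struct stat st;
        int fd = open("testfile", O_RDONLY);    /* hypothetical test file */
        if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0)
                return 1;

        long pagesz = sysconf(_SC_PAGESIZE);
        size_t pages = (st.st_size + pagesz - 1) / pagesz;
        /* The mapping is never touched, so it does not fault pages in. */
        void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        unsigned char *vec = malloc(pages);
        if (map == MAP_FAILED || !vec)
                return 1;

        posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

        /* One byte per page; bit 0 set means the page is resident. */
        if (mincore(map, st.st_size, vec) == 0) {
                size_t i, resident = 0;
                for (i = 0; i < pages; i++)
                        resident += vec[i] & 1;
                printf("%zu of %zu pages still resident\n", resident, pages);
        }
        munmap(map, st.st_size);
        close(fd);
        return 0;
}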

ql-fstest is not a benchmark, it's a data integrity test. The re-read
verification problem is easily solved by using direct IO to read the
files directly without going through the page cache. Indeed, direct
IO will invalidate cached pages over the range it reads before it
does the read, so the guarantee that you are after - no cached pages
when the read is done - is also fulfilled by the direct IO read...
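A minimal sketch of such a direct IO re-read (the file name is made
up; alignment requirements vary by filesystem and device, 4096 is
just a commonly safe value):

#define _GNU_SOURCE             /* O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define ALIGN 4096

int main(void)
{
        int fd = open("testfile", O_RDONLY | O_DIRECT);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        void *buf;
        /* O_DIRECT needs an aligned buffer, offset and length. */
        if (posix_memalign(&buf, ALIGN, ALIGN)) {
                close(fd);
                return 1;
        }
        ssize_t n;
        while ((n = read(fd, buf, ALIGN)) > 0)
                ;       /* verify the written pattern here */
        if (n < 0)
                perror("read");
        free(buf);
        close(fd);
        return 0;
}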

I really don't understand why people keep trying to make cached IO
behave like uncached IO when we already have uncached IO
interfaces....


Firstly, direct IO has an entirely different IO pattern, usually much simpler than buffered IO through the page cache. Secondly, going through the page cache ensures that page cache buffering is also tested. I'm not at all opposed to opening files randomly with direct IO to also test that path, and I'm going to add that soon, but using only direct IO would limit the use case of ql-fstest.
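Something along these lines, purely as an illustration (the helper
name is hypothetical, not actual ql-fstest code):

#define _GNU_SOURCE             /* O_DIRECT */
#include <fcntl.h>
#include <stdlib.h>

/* Open a test file for verification, bypassing the page cache for
 * roughly half of the opens so both IO paths get exercised. */
int open_for_verify(const char *path)
{
        int flags = O_RDONLY;

        if (rand() & 1)
                flags |= O_DIRECT;
        return open(path, flags);
}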


Bernd




