Performance testing




Hi,

I have begun to work on a set of performance tests. I think it would
be useful to have a standard set, because as far as I know, there is
very little performance testing and each of the few tests someone
does is unique. I want to propose my ideas before I really start
writing it, to catch possible complications early.

Mixing performance tests with regression tests wouldn't be a good idea,
so I thought about creating another category at the top level of the
tests (something like xfstests/tests/performance). Or would it be better
to put it into an entirely new directory, like xfstests/performance?

From the beginning there would be some basic test cases, like sync/async
read and write. Hopefully, more natural cases, like a database server
workload, would be added later. For the IO testing, I want to use FIO for
the specific workloads and possibly iozone for the basic synthetic tests.
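
To make the idea a bit more concrete, here is a rough sketch of what one
of the basic cases might run from inside a test script. The job name and
numbers (block size, file size) are just placeholders, not a proposed
standard; SCRATCH_MNT is the usual xfstests scratch mount point, and the
rest of the xfstests boilerplate is omitted:

  # hypothetical sequential-write case against the scratch fs;
  # the parameters are only examples
  fio --name=seqwrite \
      --directory=$SCRATCH_MNT \
      --rw=write \
      --bs=1M \
      --size=1G \
      --ioengine=libaio \
      --direct=1 \
      --output-format=terse

The terse output is easy to parse, which matters for the comparison
question below.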

What I'm not sure about is how a comparison between different versions
could be done, because I don't see any infrastructure within fstests for
cross-version comparison. (What would regression tests do with it
anyway...) So I wonder whether it should be done in this set at all. In
that case the set would only print the measured values, and some other
tool (which can also be included, but is not directly part of the
performance test set) could then be used to compare them and/or plot
graphs.
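
As an illustration of the split I have in mind: assuming each run simply
prints lines of the form "<test> <metric> <value>" into a results file
(a completely made-up format, just for the example), the separate
comparison tool could start out as small as this:

  #!/bin/bash
  # hypothetical comparison of two result files, each containing
  # lines of the form: <test> <metric> <value>
  awk '
      NR == FNR { old[$1" "$2] = $3; next }   # first file: remember old values
      ($1" "$2) in old {
          o = old[$1" "$2]
          printf "%-20s %-10s %10.2f -> %10.2f (%+.1f%%)\n", \
                 $1, $2, o, $3, (o ? ($3 - o) / o * 100 : 0)
      }
  ' "$1" "$2"

Plotting graphs would then be a job for yet another tool on top of the
same output.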

Comments and questions? :-)

Jan Tulak





