Re: [Lsf-pc] [LSF/MM/BPF TOPIC] automating file system benchmarks

On Fri, Dec 13, 2019 at 07:12:03AM +0200, Amir Goldstein wrote:
> 
> Very nice :)
> You should post an [ANNOUNCE] every now and then.
> I rarely check upstream of xfstests-bld, because it just-works ;-)

Right now, the PTS support in gce-xfstests is very manual.  The VM is
launched via "gce-xfstests pts"; after a few minutes you log into the
VM with "gce-xfstests ssh pts", run "phoronix-test-suite pts/disk",
answer a few questions, afterwards run "pts-save --results", and then
kill off the pts VM.
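The manual steps could be wrapped in a small driver script.  A rough
sketch, where the gce-xfstests, phoronix-test-suite, and pts-save
invocations come from the description above, but the DRY_RUN switch,
the boot delay, the use of PTS's non-interactive batch-benchmark mode,
and the teardown step are my own assumptions:

```shell
#!/bin/sh
# Sketch of automating the manual PTS workflow described above.
# DRY_RUN defaults to on so the sketch just prints its plan;
# clear it to actually invoke the commands.
DRY_RUN=${DRY_RUN-1}

run() {
    if [ -n "$DRY_RUN" ]; then
        echo "would run: $*"
        return 0
    fi
    "$@"
}

run gce-xfstests pts                  # launch the pts VM
run sleep 300                         # give the VM a few minutes to boot
# batch-benchmark is PTS's non-interactive mode, so there are no
# questions to answer by hand.
run gce-xfstests ssh pts -- phoronix-test-suite batch-benchmark pts/disk
run gce-xfstests ssh pts -- pts-save --results
run gce-xfstests abort pts            # hypothetical teardown: kill the pts VM
```

The dry-run wrapper is just so the sequence can be inspected without
touching GCE; the real automation would fold these steps into
gce-xfstests itself.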

I want to get it to the point where "gce-xfstests pts" is sufficient,
where the benchmarks are run and the VM is automatically shut down
afterwards.  Also still to be done is to add support for kvm-xfstests.
That'll hopefully be done in the next month or so, as I have some free
time.

> I suppose you have access to a dedicated metal in the cloud for running
> your performance regression tests? Or at least a dedicated metal per execution.

I'm not currently using a dedicated VM.  I've been primarily using a
1TB PD-SSD as the storage medium and an n1-standard-16 as the VM type.
That's been fairly reliable.

Using GCE Local SSD is a little tricky because there is more than one
kind of underlying hardware, and that can result in differing results
across different VMs.  What you *can* do is use the same VM, and then
kexec into different kernels each time.  This can be done manually, by
copying a different kernel into /root/bzImage, running /root/do_kexec,
and then running the next benchmark.  Eventually my plan is to support
this with a command like

gce-xfstests --kernel gs://$B/bzImage-4.19,gs://$B/bzImage-5.3 \
	--local-ssd pts
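In the meantime, the manual kexec cycle could be scripted from outside
the VM, so the controlling loop survives each reboot.  The
/root/bzImage and /root/do_kexec paths and the $B bucket variable come
from the text above; the gcloud ssh wrapper, VM name, settle delay,
and benchmark command are illustrative assumptions:

```shell
#!/bin/sh
# Sketch of cycling one Local SSD VM through several kernels via kexec.
# DRY_RUN defaults to on so the sketch prints its plan instead of
# running it; VM name "pts-vm" is a placeholder.
DRY_RUN=${DRY_RUN-1}
VM=${VM-pts-vm}

run() {
    if [ -n "$DRY_RUN" ]; then
        echo "would run: $*"
        return 0
    fi
    "$@"
}

for k in "gs://$B/bzImage-4.19" "gs://$B/bzImage-5.3"; do
    # Copy in the next kernel and kexec into it; driving this over ssh
    # from outside means the loop isn't killed by the reboot.
    run gcloud compute ssh "$VM" \
        --command="gsutil cp $k /root/bzImage && /root/do_kexec"
    run sleep 120    # give the kexec'ed kernel time to come back up
    run gcloud compute ssh "$VM" \
        --command="phoronix-test-suite batch-benchmark pts/disk"
done
```

Because the same VM (and therefore the same physical Local SSD
hardware) is reused across kernels, the results stay comparable.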

The reason why Local SSD is interesting is that GCE's Persistent Disk
has a very different performance profile than HDDs or SSDs --- it
acts much more like a battery-backed enterprise storage array, in that
CACHE FLUSHes are super fast, as are random writes.  GCE Local SSD
acts like, well, a real high-performance SSD, and it's good to
benchmark both.

> I have not looked into GCE, so don't know how easy it is and how expensive
> to use GCE this way.

A benchmark run does take longer than "gce-xfstests -g auto", since
you generally use a larger VM and a larger amount of storage.  A 1TB
PD-SSD plus an n1-standard-16 VM is about a dollar an hour, and it
takes 3-4 hours to run the pts/disk benchmark suite.  So call it $3-4
for a single performance test run.

> Is there any chance of Google donating this sort of resource for a performance
> regression test bot?

We're not at the point where we could run gce-xfstests (either for
functional or performance testing) as a bot.  There's still some
development work that needs to happen before that could be a reality.
For now, if there is a development team that wants to use
gce-xfstests for performance and benchmarking, I'm happy to put them
in contact with the folks at Google who support open source
projects.

						- Ted


