Re: Open source SPC-1 Workload IO Pattern

Hi Michael,
I noticed the code on the fio branch (that is where I grabbed the spc1.[hc] files :-) ). Do you know why that branch has not been merged to master?

- Luis

On 11/18/2014 11:56 PM, Michael O'Sullivan wrote:
Hi Justin & Luis,

We did a branch of fio that implemented this SPC-1 trace a few years ago. I can dig up the code and the paper we wrote if that would be useful?

Cheers, Mike

On 19/11/2014, at 4:21 pm, "Justin Clift" <justin@xxxxxxxxxxx> wrote:

Nifty. :)

(Yeah, catching up on old unread email, as the wifi in this hotel is so
bad I can barely do anything else.  8-10 second ping times to
www.gluster.org. :/)

As a thought, would there be useful analysis/visualisation capabilities
if you stored the data in a time series database (e.g. InfluxDB) then
used Grafana (http://grafana.org) on it?
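
(To make that concrete, here's a purely hypothetical sketch, with
invented measurement/tag/field names, of how each generated I/O could
be stored as one time-series point:

    spc1,asu=1,op=read pos=123456i,len=4096i 1415362916000000000

i.e. the ASU and read/write direction as tags, offset and length as
fields, and the trace timestamp as the point's time. Grafana could
then graph rates and size/offset distributions per ASU.)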

+ Justin


On Fri, 07 Nov 2014 12:01:56 +0100
Luis Pabón <lpabon@xxxxxxxxxx> wrote:

Hi guys,
I created a simple test program to visualize the I/O pattern of
NetApp's open source SPC-1 workload generator. SPC-1 is an enterprise
OLTP-type workload created by the Storage Performance Council
(http://www.storageperformance.org/results).  Some of the results are
published and available here:
http://www.storageperformance.org/results/benchmark_results_spc1_active .

NetApp created an open source version of this workload and described
it in their publication "A portable, open-source implementation of
the SPC-1 workload"
(http://www3.lrgl.uqam.ca/csdl/proceedings/iiswc/2005/9461/00/01526014.pdf).

The code is available on GitHub: https://github.com/lpabon/spc1 .  All
it does at the moment is capture the pattern; no real I/O is
generated. I will be working on a command line program to enable
usage on real block storage systems.  I may either extend fio or
create a tool specifically tailored to the requirements of this
workload.

On GitHub, I have an example I/O pattern for a simulation running 50
million I/Os using HRRW_V2. The simulation ran with an ASU1 (Data
Store) size of 45 GB, an ASU2 (User Store) size of 45 GB, and an ASU3
(Log) size of 10 GB.
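
For anyone curious, the capture loop is roughly the shape below. This
is a minimal, untested sketch: spc1_init(), spc1_next_op_any(), and
the spc1_io_s fields follow the NetApp paper and headers as I remember
them, so the exact names, signatures, and units are assumptions; check
spc1.h in the repo.

    /* Sketch only: capture the generated pattern, issue no real I/O. */
    #include <stdio.h>
    #include "spc1.h"  /* spc1.[hc] from https://github.com/lpabon/spc1 */

    int main(void)
    {
        struct spc1_io_s io;   /* one generated I/O (assumed layout) */
        char err[256];
        long i;

        /* ASU sizes are passed in 4 KB blocks (assumed unit), matching
         * the example above: 45 GB = 45 * 256 * 1024 blocks. */
        if (spc1_init("capture", 1 /* BSUs */,
                      45 * 256 * 1024,   /* ASU1, Data Store */
                      45 * 256 * 1024,   /* ASU2, User Store */
                      10 * 256 * 1024,   /* ASU3, Log        */
                      2 /* HRRW_V2 (assumed flag) */,
                      err, sizeof(err)) != 0) {
            fprintf(stderr, "spc1_init failed: %s\n", err);
            return 1;
        }

        /* Generate 50 million operations and log them as text. */
        for (i = 0; i < 50000000L; i++) {
            spc1_next_op_any(&io);
            printf("%u %c asu%u pos=%u len=%u\n",
                   (unsigned)io.when,            /* 0.1 ms ticks (assumed) */
                   io.dir ? 'W' : 'R',           /* 0 = read, 1 = write    */
                   (unsigned)io.asu + 1,         /* ASU index is 0-based   */
                   (unsigned)io.pos,             /* 4 KB block units       */
                   (unsigned)io.len);
        }
        return 0;
    }

Doing it this way keeps the pattern generation separate from any real
I/O engine, which should make it straightforward to bolt onto fio
later.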

- Luis


--
GlusterFS - http://www.gluster.org

An open source, distributed file system scaling to several
petabytes, and handling thousands of clients.

My personal twitter: twitter.com/realjustinclift

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel




