Dear all,
The IO500 is now accepting and encouraging submissions for the upcoming IO500 list, to be revealed at Supercomputing 2018 in Dallas, Texas. We also announce the 10 compute node I/O challenge to encourage submission of small-scale results. The new ranked lists will be announced at our SC18 BoF on Wednesday, November 14th at 5:15pm. We hope to see you, and your results, there.
Deadline: 10 November 2018 AoE
The benchmark suite is designed to be easy to run, and the community has multiple active support channels to help with any questions. Please submit, and we look forward to seeing many of you at SC 2018! Please note that submissions of all sizes are welcome; the site has customizable sorting, so it is possible, for example, to submit from a small system and still achieve a very good per-client score. Additionally, the list is about much more than just the raw rank; all submissions help the community by collecting and publishing a wider corpus of data. More details below.
Following the success of the Top500 in collecting and analyzing historical trends in supercomputer technology and evolution, the IO500 was created in 2017 and published its first list at SC17. The need for such an initiative has long been recognized within High-Performance Computing; however, defining appropriate benchmarks has proven challenging. Despite this challenge, the community, after a long and spirited discussion, finally reached consensus on a suite of benchmarks and a metric for resolving the scores into a single ranking.
The multi-fold goals of the benchmark suite are as follows:
* Maximizing simplicity in running the benchmark suite
* Encouraging complexity in tuning for performance
* Allowing submitters to highlight their “hero run” performance numbers
* Forcing submitters to simultaneously report performance for challenging I/O patterns
Specifically, the benchmark suite includes a hero run of both IOR and mdtest, configured however possible to maximize performance and establish an upper bound. It also includes an IOR and mdtest run with highly prescribed parameters in an attempt to determine a lower bound. Finally, it includes a namespace search, as this has been determined to be a highly sought-after feature of HPC storage systems that has historically not been well measured. Submitters are encouraged to share their tuning insights for publication.
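As a rough illustration of how the single ranking metric works, the following Python sketch combines per-phase results into one score using geometric means. The phase names follow the benchmark's easy/hard structure and the sample numbers are hypothetical placeholders; the authoritative computation is the one performed by the IO500 tooling itself.

    # Illustrative sketch only (not the official scoring code): the final
    # score combines a bandwidth sub-score and a metadata sub-score, each
    # a geometric mean of the individual phase results.
    from math import prod

    def geometric_mean(values):
        # Geometric mean, so no single hero number can dominate results
        # that span orders of magnitude.
        return prod(values) ** (1.0 / len(values))

    # Bandwidth phases in GiB/s (sample numbers are made up).
    bw_gibs = {"ior-easy-write": 35.2, "ior-hard-write": 0.9,
               "ior-easy-read": 42.7, "ior-hard-read": 2.1}

    # Metadata phases in kIOPS, including the namespace search (find).
    md_kiops = {"mdtest-easy-write": 55.0, "mdtest-hard-write": 12.3,
                "mdtest-easy-stat": 120.4, "mdtest-hard-stat": 98.7,
                "mdtest-easy-delete": 40.1, "mdtest-hard-delete": 9.8,
                "mdtest-hard-read": 14.2, "find": 310.0}

    bw_score = geometric_mean(list(bw_gibs.values()))
    md_score = geometric_mean(list(md_kiops.values()))

    # Final score: geometric mean of the two sub-scores.
    score = (bw_score * md_score) ** 0.5
    print(f"BW={bw_score:.2f} GiB/s MD={md_score:.2f} kIOPS SCORE={score:.2f}")

The geometric mean rewards balanced systems: a site that excels only at the hero runs but collapses on the prescribed, challenging patterns cannot buy a high rank with one big number.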
The goals of the community are also multi-fold:
* Gather historical data for the sake of analysis and to aid predictions of storage futures
* Collect tuning information to share valuable performance optimizations across the community
* Encourage vendors and designers to optimize for workloads beyond “hero runs”
* Establish bounded expectations for users, procurers, and administrators
10 Compute Node I/O Challenge
At SC, we will announce another IO-500 award for the "10 Compute Node I/O Challenge". This challenge is conducted using the regular IO-500 benchmark, with the additional rule that exactly 10 compute nodes must be used to run the benchmark (the one exception is find, which may use a single node). You may use any shared storage with, e.g., any number of servers. When submitting to the IO-500 list, you can opt in to "Participate in the 10 compute node challenge only"; in that case we will not include your results in the ranked list, but will announce them in a separate derived list and in the full list at io500.org. All other 10 compute node submissions will be included in both the full list and the ranked list.
Birds-of-a-Feather
Once again, we encourage you to submit [1], to join our community, and to attend our BoF "The IO-500 and the Virtual Institute of I/O" at SC 2018 [2], where we will announce the second-ever IO500 list. The current list includes results from BeeGFS, DataWarp, IME, Lustre, and Spectrum Scale. We hope that the next list has even more.
We look forward to answering any questions or concerns you might have.