Call For Submissions IO500 ISC21 List


 



https://io500.org/cfs
Stabilization Period: 05 - 14 May 2021 AoE
Submission Deadline: 11 June 2021 AoE

The IO500 is now accepting and encouraging submissions for the upcoming 8th IO500 list. Once again, we are also accepting submissions to the 10 Node Challenge to encourage the submission of small scale results. The new ranked lists will be announced via live-stream at a virtual session. We hope to see many new results.

What's New
Starting with ISC'21, the IO500 follows a two-stage approach. First, there is a two-week stabilization period during which we encourage the community to verify that the benchmark runs properly. During this period the benchmark will be updated based on community feedback. The final benchmark will then be released on Monday, 17 May. Runs made during the stabilization period that comply with the rules will be accepted as final submissions unless a significant defect is found. We are also creating a more detailed schema to describe the hardware and software of the system under test, together with a first set of tools to ease capturing this information for inclusion with a submission. Further details will be released on the submission page.

Background
The benchmark suite is designed to be easy to run and the community has multiple active support channels to help with any questions. Please note that submissions of all sizes are welcome; the site has customizable sorting, so it is possible to submit on a small system and still get a very good per-client score, for example. Additionally, the list is about much more than just the raw rank; all submissions help the community by collecting and publishing a wider corpus of data. More details below.

Following the success of the Top500 in collecting and analyzing historical trends in supercomputer technology and evolution, the IO500 was created in 2017, published its first list at SC17, and has grown exponentially since then. The need for such an initiative has long been known within High-Performance Computing; however, defining appropriate benchmarks had long been challenging. Despite this challenge, the community, after long and spirited discussion, finally reached consensus on a suite of benchmarks and a metric for resolving the scores into a single ranking.

The multi-fold goals of the benchmark suite are as follows:

Maximizing simplicity in running the benchmark suite
Encouraging optimization and documentation of tuning parameters for performance
Allowing submitters to highlight their "hero run" performance numbers
Forcing submitters to simultaneously report performance for challenging I/O patterns

Specifically, the benchmark suite includes hero runs of both IOR and mdtest, configured however the submitter chooses in order to maximize performance and establish an upper bound. It also includes IOR and mdtest runs with tightly prescribed parameters intended to establish a lower bound. Finally, it includes a namespace search, since this is a highly sought-after capability in HPC storage systems that has historically not been well measured. Submitters are encouraged to share their tuning insights for publication.
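For orientation, a run of the suite is typically driven by a small configuration file. The sketch below is modeled on the layout of the public io500 repository's minimal config; the section and key names shown are assumptions, and the final stabilized release may differ:

```ini
; Hypothetical minimal IO500 configuration sketch.
; Section and key names are assumptions modeled on the io500
; repository's config-minimal.ini and may differ in the final release.
[global]
datadir = ./datadir        ; working directory on the filesystem under test
resultdir = ./results      ; where result and log files are written

[debug]
; For a valid submission each write phase normally must run for at
; least 300 seconds ("stonewall"); shorter values are for testing only.
stonewall-time = 300
```

Consult the repository and the submission page for the authoritative set of sections and keys for your benchmark version.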

The goals of the community are also multi-fold:

Gather historical data for the sake of analysis and to aid predictions of storage futures
Collect tuning information to share valuable performance optimizations across the community
Encourage vendors and designers to optimize for workloads beyond "hero runs"
Establish bounded expectations for users, procurers, and administrators

10 Node I/O Challenge
The 10 Node Challenge is conducted using the regular IO500 benchmark, with the additional rule that exactly 10 client nodes must be used. You may use any shared storage and, e.g., any number of servers. When submitting to the IO500 list, you can opt in to "Participate in the 10 compute node challenge only", in which case we will not include the results in the ranked list. Other 10-node submissions will be included in both the full list and the ranked list. The challenge results will be announced in a separate derived list and in the full list, but not on the ranked IO500 list at io500.org.
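As a concrete illustration of the node-count rule (the hostfile name, processes-per-node value, and binary path below are illustrative assumptions, not requirements), a 10-node launch might be set up like this:

```shell
#!/bin/sh
# The challenge fixes the client node count at exactly 10;
# processes per node remain a free tuning parameter.
NODES=10   # required by the 10 Node Challenge rules
PPN=16     # illustrative choice; tune for your system
NP=$((NODES * PPN))
echo "total MPI ranks: $NP"
# A real run would then launch the benchmark across those nodes, e.g.:
#   mpirun -np "$NP" --hostfile ./ten_nodes.txt ./io500 config.ini
```

Only the node count is constrained; everything else, including process count and storage configuration, is yours to optimize.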

Birds-of-a-Feather
Once again, we encourage you to submit, to join our community, and to attend our virtual BoF "The IO500 and the Virtual Institute of I/O" at ISC 2021 (time to be announced), where we will announce the new IO500 and 10 Node Challenge lists. The current list includes results from BeeGFS, CephFS, DAOS, DataWarp, GekkoFS, GFarm, IME, Lustre, MadFS, Qumulo, Spectrum Scale, Vast, WekaIO, and YRCloudFile. We hope that the upcoming list grows even more.


--
The IO500 Committee
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


