On Wed, Aug 01, 2012 at 09:06:44AM -0500, Mark Nelson wrote:
> I haven't actually used bonnie++ myself, but I've read some rather
> bad reports from various other people in the industry. Not sure how
> much it's changed since then...
>
> https://blogs.oracle.com/roch/entry/decoding_bonnie
> http://www.quora.com/What-are-some-file-system-benchmarks
> http://scalability.org/?p=1685
> http://scalability.org/?p=1688
>
> I'd say to just take extra care to make sure that it's behaving
> the way you intended it to (probably good advice no matter which
> benchmark you use!)

Thanks for these good links :). I have started to try fio too, for its
flexibility (a minimal job file is sketched below).

> > All results are good, my benchmark is clearly limited by my network
> > connection ~ 110MB/s.
>
> Gigabit Ethernet is definitely going to be a limitation with large
> block sequential IO for most modern disks. I'm concerned with your
> 6 client numbers though. I assume those numbers are per client?
> Even so, with 10 OSDs that performance is pretty bad! Are you
> getting a good distribution of writes across all OSDs? Consistent
> throughput over time on each?

This is a network issue too: the 6-client tests are not really
representative, since all clients share the same 1 gigabit link. I will
acquire more hardware soon to make the setup more realistic (and will
replace these results). Some clarifications have been added to the
benchmark page.

> > Except for the rest-api bench, the values seem really low.
> ...
> > Is my rest-bench result normal? Have I missed something?
>
> You may want to try increasing the number of concurrent rest-bench
> operations. Also I'd explicitly specify the number of PGs for the
> pool you create to make sure that you are getting a good
> distribution.

During my tests the number of PGs was 640 for 10 OSDs. I have tried
with more concurrent operations (32 and 64), but the result is almost
the same, just with more latency (the pool and PG commands are sketched
below).
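For reference, the pool/PG setup and distribution check look roughly
like this (the pool name is just a placeholder, and the rados bench run
is a raw-RADOS baseline to compare the rest-bench numbers against):

    # create the test pool with an explicit PG count (640 for 10 OSDs)
    ceph osd pool create benchpool 640

    # dump per-PG statistics to check that writes spread over all OSDs
    ceph pg dump

    # raw RADOS write benchmark on the same pool, 32 concurrent ops
    rados -p benchpool bench 60 write -t 32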
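And the kind of fio job file I have started from -- a minimal
sequential-write sketch, where the filename, sizes and iodepth are
placeholders for my setup rather than recommendations:

    # sequential 4M writes with libaio, bypassing the page cache
    [global]
    ioengine=libaio
    direct=1
    bs=4m
    size=4g
    runtime=60
    iodepth=16

    [seq-write]
    rw=write
    filename=/mnt/cephfs/fio-test.dat

It runs with "fio <jobfile>" and reports per-job bandwidth and latency.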
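As an aside, the ~110MB/s figure really is the wire limit,
back-of-the-envelope: 1 Gbit/s is 125 MB/s raw, and with Ethernet, IP
and TCP framing overhead (a 1500-byte MTU carries roughly 1448-1460
bytes of TCP payload per ~1538 bytes on the wire) the usable rate tops
out around 117 MB/s, so ~110MB/s is close to the best a single gigabit
link can do.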
Cheers,
--
Mehdi Abaakouk
for eNovance
mail: sileht@xxxxxxxxxx
irc: sileht