Re: Fedora/Infra/Gluster halp? Fwd: cloud disk benchmarks

On Mon, 8 Oct 2012, Jeff Darcy wrote:
> > http://skvidal.fedorapeople.org/misc/cloudbench.txt

> Two things jump out at me from these results.  First is that using GlusterFS
> replication for ephemeral storage seems . . . strange.  Is there some reason
> that the OpenStack setup can't use local storage like the Eucalyptus one does?
> Using remote storage for ephemeral is just always going to be sub-optimal no
> matter how well that remote storage works.



Our primary reasons for using gluster here were two:

1. to use all the space available on all of the nodes - compute and storage/networking
2. to have a common disk backend for openstack to enable their live migration capability, so we can migrate instances off of a compute node before putting it into downtime (a minimal sketch of the setup follows)
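
For anyone curious, the replicated volume and shared mount look roughly like
this - the hostnames, brick paths, and volume name here are illustrative, not
our actual layout:

    # 2-way replicated volume across two nodes (names are hypothetical)
    gluster volume create cloud-vol replica 2 \
        node01:/bricks/cloud node02:/bricks/cloud
    gluster volume start cloud-vol

    # mounted on every compute node so openstack sees one shared
    # instances directory
    mount -t glusterfs node01:/cloud-vol /var/lib/nova/instances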



> The second thing is the ratios between Euca/iSCSI and OS/GlusterFS speeds.
> Here are the numbers worked out:
>
>     sequential read: 5.2x to 6.0x
>     random read: 3.1x to 6.0x
>     sequential write: 5.7x to 8.2x
>     random write: 1.7x to 2.0x
>
> This is a bit surprising, because I've always thought of random writes as one
> of our worst cases.  Apparently it's also someone else's, though we still get
> beaten.  It's also interesting that the worse numbers (for us) tend to be at
> the higher thread counts, which is kind of contrary to our being all about
> scalability rather than per-thread performance.  These results are grim.


They are kind of brutal, but the conditions under which I was testing were about the same between the two cloudlets.
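
To give a sense of the methodology, the runs were along the lines of the
following fio-style invocation - this is illustrative only and may not match
the exact tool or parameters behind the cloudbench.txt numbers above:

    # 4KB random-write test, 4 concurrent jobs (illustrative, not the
    # exact cloudbench parameters)
    fio --name=randwrite --rw=randwrite --bs=4k --size=1g \
        --numjobs=4 --direct=1 --ioengine=libaio --group_reporting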



> I'm inclined to think that the read results - as bad as they might be - aren't
> the problem here because reads can benefit from caching and locality of
> reference.  Seth, let me know if that's not true for your workload.  The real
> problem is writes.  The report for random write seems inconsistent here, with
> very low throughput but also very low latency.  I'll go with the throughput
> numbers and say that for 4KB requests we're looking at ~160 IOPS.  Blech.  If
> it were me, I'd stick with the Euca instances for this workload unless/until we
> figure out why GlusterFS is performing so horribly.
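
That ~160 IOPS figure is easy to sanity-check - at a 4KB request size it
works out to:

    160 IOPS x 4 KiB/request ~= 640 KiB/s sustained random-write throughput

which is roughly what a single spinning disk can manage on its own, and
pretty sad for a replicated cluster.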


Our goal is to use gluster to let us do live migrations. We've discussed the possibility of having separate cloudlets on purpose - so we can cover both use cases, disposable/fast instances and longer-term, reliable instances - and I think we're all on board with that, actually. There's just no harm in wanting cake and ice cream; sometimes you might not get them both. :)
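
And for the record, once the instances directory is shared, the migration
itself is a one-liner from the controller (the instance id and hostname are
placeholders):

    nova live-migration <instance-uuid> <target-compute-host>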

Thanks for the analysis, Jeff.


-sv

_______________________________________________
infrastructure mailing list
infrastructure@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/infrastructure


