On 01/22/2013 09:28 AM, F. Ozbek wrote:
>
> However, it just turns out that we have the data and the tests, so we 
> will
> post it here. I have this feeling that the moment we do, Jeff will start

Please provide more information on the "data and the tests".  What are 
they, what do they entail, what is meant by failing, passing, etc.?

This information is helpful to everyone, regardless of which systems do 
poorly/well.

OTOH, please be prepared for a fairly intensive look at your testing 
methodology.  We've found in our own experience that unless tests 
really do what they are purported to do, end users wind up generating 
data of little value, and decisions based on that data are, as often 
as not, fundamentally flawed.

I cannot tell you how many times we've dealt with flawed tests that 
didn't come close to measuring what people thought they did.  It's quite 
amusing to be attacked with the results of such tests as well.  Using poor 
tests and then bashing vendors with them reflects more on the user than 
on the vendor.

Honestly, we have some issues with Gluster that we've raised off-list 
with John Mark and others (not Jeff, but I should make the points with 
him as well).  There are reasonable and valid critiques of it, and it is 
not appropriate for all workloads.  There are good elements to it, and 
... less good ... elements to it, in implementation, design, etc.

I agree with Jeff that it's bad form to come on the list and say "Gluster 
fails, X works" in general.  It's far more constructive to come on the 
list and say "these are the tests we use, and these are the results.  
Gluster does well here and here, X does well here and here."  Freedom of 
speech isn't relevant here; the mailing list and product are privately 
owned, and there is no presumption of such freedom in this case.  I'd 
urge you to respect the other list members and participants by 
contributing positively, as noted above.  "Gluster fails, X rulez" 
doesn't quite fit that.

So ... may I request that, before you respond to further posts on this 
topic, you create a post with your tests, how you ran them, your 
hardware configs, your software stack (kernel, net/IB, ...), details of 
the tests, and details of the results?  Without this, I am hard pressed 
to take further posts seriously.

There are alternatives to Gluster.  The ones we use/deploy include Ceph, 
Fraunhofer, Lustre, and others.  We did review MooseFS, mostly for a set 
of media customers.  It had some positive elements, but we found its 
performance underwhelming in our streaming and reliability tests 
(c.f. http://download.scalableinformatics.com/disk_stress_tests/fio/ ).  
The hardware was our JackRabbit units and our siFlash units (links not 
provided so as to avoid spamming).  Native system performance was 
2.5 GB/s for JackRabbit and about 8 GB/s for siFlash.  GlusterFS got 
me to 2 GB/s on JackRabbit and 3.5 GB/s on siFlash.  MooseFS, when we 
tested it (about a year ago), was about 400-500 MB/s on JackRabbit and 
about 600 MB/s on siFlash.  We also ran networked tests to multiple 
clients (John Mark has an email from me from around that time) in which 
we sustained 2+ GB/s across 2x JackRabbit units with GlusterFS.  I've 
never been able to get above 700 MB/s with MooseFS on any of our test 
cases.  I've had tests fail on MooseFS, usually when a network port 
became overloaded; its response to this was anything but graceful.
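For readers who want to reproduce this kind of comparison themselves: a 
streaming-write job of the sort referenced above might look roughly like 
the fio job file below.  This is a minimal sketch, not our actual test 
from the disk_stress_tests directory; the mount point, block size, file 
size, and job count are placeholders you would tune to your own hardware.

    [global]
    # Asynchronous direct I/O so we measure the storage, not the page cache
    ioengine=libaio
    direct=1
    # Large sequential blocks to approximate a streaming workload
    bs=1m
    rw=write
    # Per-job file size, sized well past any cache
    size=32g
    # A handful of concurrent streams, reported as one aggregate number
    numjobs=4
    group_reporting=1

    [stream-write]
    # Placeholder path: point this at the filesystem under test
    directory=/mnt/under-test

Run something along these lines against the native filesystem first, then 
against the distributed filesystem mount, and you get the kind of GB/s 
comparison quoted above (swap rw=write for rw=read for the read pass).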

We had considered using it with some customers, but figured we should 
wait for it to mature some more.  We feel the same way about btrfs, and 
until recently, about Ceph.  The latter two have been coming along 
nicely; Ceph is now deployable.

W.r.t. Gluster, it has been getting better, with a few caveats (again, 
John Mark knows what I am talking about).  It's not perfect for 
everything, but it's quite good at what it does.

Regards,

Joe

-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com
        http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615


