Re: increasing stability

Completely agree as well. I'm very keen to see widespread adoption of Ceph, but battling against the major vendors is a massive challenge, and it isn't helped by even a small amount of instability.

Douglas Youd
Direct  +61 8 9488 9571


-----Original Message-----
From: ceph-users-bounces@xxxxxxxxxxxxxx [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Chen, Xiaoxi
Sent: Thursday, 30 May 2013 1:40 AM
To: Wolfgang Hennerbichler
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: increasing stability

Cannot agree more. When I try to promote Ceph to internal stakeholders, they always complain about Ceph's stability; especially when they evaluate it under heavy enough load, Ceph cannot stay healthy during the test.



Sent from my iPhone

On 2013-5-29, at 19:13, "Wolfgang Hennerbichler" <wolfgang.hennerbichler@xxxxxxxxxxxxxxxx> wrote:

> Hi,
>
> like most on the list here, I also see the future of storage in Ceph.
> I think it is a great system and overall design, and Sage, the rest of
> Inktank, and the community are doing their best to make Ceph great.
> Being a part-time developer myself, I know how exciting new features
> are, and how great it is to implement them.
> On the other hand, I think Cuttlefish is in a state where I don't feel
> comfortable saying: Ceph is stable, go ahead, use it. I happen to be
> doing a lot of presentations on Ceph recently, and I'm doing a lot of
> lobbying for it.
> I also realize that it's not easy to develop a distributed system like
> Ceph, and I know it needs time and a community to test it. I'm just
> wondering whether it might be better for the devs to focus right now
> on fixing nasty bugs (even more than they already do) and on making
> the mons and OSDs super-stable.
> I have no insight into the development cycles, so chances are you're
> already doing this. I'm just saying: I'd love to see Ceph take over
> the storage world, and for that we need it to be super stable.
>
> Then Ceph can succeed big time.
>
> Sorry for the noise, but I really wanted to get this off my chest :)
> Wolfgang


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




