RE: How's cephfs going?

Thanks, sir☺ 
You have really been a lot of help☺

May I ask what kind of business you are using CephFS for? What's the I/O pattern? :-)

If answering this would involve any business secrets, I completely understand if you don't answer:-)

Thanks again:-)

From: Brady Deetz [mailto:bdeetz@xxxxxxxxx]
Sent: July 18, 2017 8:01
To: 许雪寒
Cc: ceph-users
Subject: Re: How's cephfs going?

I feel that the correct answer to this question is: it depends. 

I've been running a 1.75PB Jewel-based CephFS cluster in production for about 2 years at the Laureate Institute for Brain Research. Before that we had a good 6-8 month planning and evaluation phase. I'm running with active/standby dedicated MDS servers, 3x dedicated mons, and 12 OSD nodes with 24 disks in each server. Every group of 12 disks has its journals mapped to 1x Intel P3700. Each OSD node has dual 40Gbps Ethernet bonded with LACP. In our evaluation we did find that the rumors are true: your CPU choice will influence performance.
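If you want a quick scripted health check for a setup like this, here is a minimal sketch (an illustration, not our actual tooling): it assumes only that the `ceph` CLI and a readable client keyring are on the host, and the JSON key names differ between releases, so treat the parsing as a starting point.

#!/usr/bin/env python
# Minimal health-check sketch for a CephFS cluster with active/standby MDS.
# Assumes the `ceph` CLI and a readable client keyring on this host.
import json
import subprocess

def cluster_status():
    # `ceph -s -f json` dumps the cluster status as JSON; the command is
    # stable, but the layout of the JSON differs between releases.
    raw = subprocess.check_output(["ceph", "-s", "-f", "json"])
    return json.loads(raw)

def main():
    status = cluster_status()
    health = status.get("health", {})
    # Jewel reports health as health.overall_status ("HEALTH_OK", ...);
    # later releases moved it to health.status, so check both.
    print("health: %s" % (health.get("overall_status") or health.get("status")))
    # The fsmap section summarizes MDS daemons (active, standby, failed).
    print("fsmap: %s" % json.dumps(status.get("fsmap", {}), indent=2))

if __name__ == "__main__":
    main()

On a healthy cluster that prints HEALTH_OK and the fsmap summary; anything else is a prompt to run `ceph health detail` and look closer.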

Here's why my answer is "it depends." If you expect to get the same complete feature set as you do with Isilon, ScaleIO, Gluster, or other more established scale-out systems, it is not production ready. But in terms of stability, it is. Over the course of the past 2 years I've triggered one MDS bug that put my filesystem into read-only mode. That bug was patched in 8 hours thanks to this community. Also, that bug was triggered by a stupid mistake on my part that the application did not validate before performing the action.

If you have a couple of people with a strong background in Linux, networking, and architecture, I'd say Ceph may be a good fit for you. If not, maybe not. 

On Jul 16, 2017 9:59 PM, "许雪寒" <xuxuehan@xxxxxx> wrote:
Hi, everyone.
 
We intend to use the Jewel version of CephFS; however, we don't know its status. Is it production ready in Jewel? Does it still have lots of bugs? Is it a major focus of current Ceph development? And who is using CephFS now?

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com