Re: How's cephfs going?

Hi, thanks for the advice:-)

By the way, may I ask what kind of application you are using CephFS for? What is the IO pattern of that application? And which version of Ceph are you using? If this touches on any business secrets, it's perfectly understandable if you'd rather not answer :-)

Thanks again for the help:-)

-----Original Message-----
From: Deepak Naidu [mailto:dnaidu@xxxxxxxxxx]
Sent: July 18, 2017 6:59
To: Blair Bethwaite; 许雪寒
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: RE: How's cephfs going?

Based on my experience, it's really stable and, yes, it is production ready. Most of the use case for CephFS depends on what you're trying to achieve. A few points of feedback:

1) The kernel client is nice/stable and can achieve higher bandwidth if you have a 40G or faster network.
2) ceph-fuse is very slow, as writes are cached in your client's RAM regardless of direct IO.
3) Look out for BlueStore for the long term. This holds for Ceph in general, not just CephFS.
4) If you want a per-folder namespace (for lack of a better term), you need to make sure you're running the latest kernel, or backport the fixes to your running kernel.
5) Larger IO block sizes will give you higher throughput; it won't be great with smaller IO blocks (see the rough sketch after this list).
6) Use SSD for the CephFS metadata pool (it really helps); this is based on my experience, and folks can debate it. I believe eBay has a write-up where they didn't see any advantage from using SSD.
7) Look up the experimental features listed below:
http://docs.ceph.com/docs/master/cephfs/experimental-features/?highlight=experimental
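
In case it helps to put numbers on 2) and 5): below is a rough, hypothetical sketch (the /mnt/cephfs mount point, file name, and sizes are my own assumptions, not anything from this thread) that times O_DIRECT writes at a few block sizes. Run it against a kernel-client mount and a ceph-fuse mount of the same filesystem and compare.

#!/usr/bin/env python3
# Hypothetical benchmark sketch (Linux-only, needs os.O_DIRECT): time direct
# writes at several block sizes against a CephFS mount. The paths and sizes
# below are assumptions; point them at a scratch directory on your own mount.
import mmap
import os
import time

MOUNT = "/mnt/cephfs"             # assumed mount point (kernel client or ceph-fuse)
TEST_FILE = os.path.join(MOUNT, "blocksize_test.bin")
TOTAL_BYTES = 256 * 1024 * 1024   # write 256 MiB per run

def write_run(block_size):
    """Write TOTAL_BYTES in block_size chunks with O_DIRECT; return MiB/s."""
    # O_DIRECT requires aligned buffers; an anonymous mmap is page-aligned.
    buf = mmap.mmap(-1, block_size)
    buf.write(b"x" * block_size)
    fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
    start = time.time()
    written = 0
    while written < TOTAL_BYTES:
        written += os.write(fd, buf)
    os.fsync(fd)
    os.close(fd)
    elapsed = time.time() - start
    buf.close()
    return (written / (1024.0 * 1024.0)) / elapsed

for bs in (4 * 1024, 64 * 1024, 4 * 1024 * 1024):    # 4 KiB, 64 KiB, 4 MiB
    print("block size %8d bytes: %6.1f MiB/s" % (bs, write_run(bs)))
    os.unlink(TEST_FILE)

On a kernel mount you would expect throughput to climb with the block size; if the behaviour described in 2) holds, the ceph-fuse numbers will largely reflect the client-side cache rather than the cluster.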

--
Deepak


-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Blair Bethwaite
Sent: Sunday, July 16, 2017 8:14 PM
To: 许雪寒
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  How's cephfs going?

It works and can reasonably be called "production ready". However, in Jewel there are still some features (e.g. directory sharding, multiple active MDS daemons, and some security constraints) that may limit widespread usage. Also note that userspace client support in e.g. nfs-ganesha and Samba is a mixed bag across distros, and you may find yourself having to resort to re-exporting ceph-fuse or kernel mounts in order to provide those gateway services. We haven't tried Luminous CephFS yet, as we're still waiting for the first full (non-RC) release to drop, but things seem very positive there...

On 17 July 2017 at 12:59, 许雪寒 <xuxuehan@xxxxxx> wrote:
> Hi, everyone.
>
>
>
> We intend to use the Jewel version of CephFS; however, we don't know its status.
> Is it production ready in Jewel? Does it still have lots of bugs? Is
> it a major focus of current Ceph development? And who is using CephFS now?
>
>
>



--
Cheers,
~Blairo

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



