Re: Re: How's cephfs going?

I work at Monash University. We are using active-standby MDS. We don't
yet have it in full production, as we need some of the newer Luminous
features before we can roll it out more broadly; however, we are moving
towards letting a subset of users on (just slowly ticking off related
work like putting an external backup system in place, writing some
janitor scripts to check quota enforcement, and so on). Our HPC folks
are quite keen for more, as it has proved very useful for shunting data
around between disparate systems.

We're also testing NFS and CIFS gateways. After some initial issues
with the CTDB setup that part seems to be working, but we are now
hitting some interesting xattr ACL behaviours (e.g. an ACL'd directory
writable through one gateway node but not another); other colleagues
are looking at that and poking Red Hat for assistance...
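In case it helps anyone else chasing similar symptoms, below is a
minimal Python sketch of the kind of check one could run on each
gateway node to compare the POSIX ACL xattrs that node sees for the
same CephFS directory. The mount point and directory name are
hypothetical, and it assumes CephFS is mounted at the same path on
every gateway; it is only an illustration, not what we actually ran.

    # Hypothetical sketch: dump the POSIX ACL xattrs of a directory so the
    # output can be diffed across gateway nodes (run it on each node, e.g.
    # over ssh). The path below is made up; adjust for your CephFS mount.
    import os

    ACL_XATTRS = ("system.posix_acl_access", "system.posix_acl_default")

    def dump_acl_xattrs(path):
        """Print raw ACL xattr bytes for `path`, or note why they can't be read."""
        for name in ACL_XATTRS:
            try:
                print(f"{name}: {os.getxattr(path, name).hex()}")
            except OSError as exc:
                print(f"{name}: <unavailable: {exc}>")

    if __name__ == "__main__":
        dump_acl_xattrs("/mnt/cephfs/projects/shared-dir")

Diffing that output between two gateways would at least show whether
the ACLs themselves differ on disk or whether the enforcement differs.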

On 17 July 2017 at 13:27, 许雪寒 <xuxuehan@xxxxxx> wrote:
> Hi, thanks for the quick reply:-)
>
> May I ask which company you work at? I'm asking because we are collecting information on cephfs usage as the basis for our decision about whether to adopt cephfs. Also, how are you using it? Are you running a single MDS, i.e. the so-called active-standby mode? And could you give some details of your cephfs usage pattern? For example, do your client nodes mount cephfs directly, or through NFS (or something similar) serving a directory that is mounted with cephfs? And are you using ceph-fuse?
>
> -----Original Message-----
> From: Blair Bethwaite [mailto:blair.bethwaite@xxxxxxxxx]
> Sent: 17 July 2017 11:14
> To: 许雪寒
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: How's cephfs going?
>
> It works and can reasonably be called "production ready". However, in Jewel there are still some features (e.g. directory sharding, multiple active MDS, and some security constraints) that may limit widespread usage. Also note that userspace client support in e.g. nfs-ganesha and samba is a mixed bag across distros, and you may find yourself having to resort to re-exporting ceph-fuse or kernel mounts in order to provide those gateway services. We haven't tried Luminous CephFS yet as we're still waiting for the first full (non-RC) release to drop, but things seem very positive there...
>
> On 17 July 2017 at 12:59, 许雪寒 <xuxuehan@xxxxxx> wrote:
>> Hi, everyone.
>>
>>
>>
>> We intend to use the Jewel version of cephfs; however, we don't know its status.
>> Is it production ready in Jewel? Does it still have lots of bugs? Is
>> it a major focus of current ceph development? And who is using cephfs now?
>>
>>
>
>
>
> --
> Cheers,
> ~Blairo



-- 
Cheers,
~Blairo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



