Re: Re: Re: How's cephfs going?

On 20 July 2017 at 12:23, 许雪寒 <xuxuehan@xxxxxx> wrote:
> May I ask how many users do you have on cephfs? And how much data does the cephfs store?

https://www.redhat.com/en/resources/monash-university-improves-research-ceph-storage-case-study

As I said, we don't yet have CephFS in production, just finalising our
PoC setup to let some initial users have a go at it in the coming week
or two.

> interesting xattr acl behaviours (e.g. ACL'd dir writable through one gateway node but not another), other colleagues looking at that and poking Red Hat for assistance...

This turned out to be something to do with having different versions
of the FUSE client on different Samba hosts - we hadn't finished
upgrading/restarting them since upgrading to 10.2.7 (RHCS 2.3).
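
If anyone wants to sanity-check that kind of per-gateway ACL
discrepancy, one option is to read the directory's POSIX ACL xattr
straight through libcephfs (bypassing the local FUSE mount) and
compare the raw bytes on each gateway. Below is a minimal sketch
using the python-cephfs bindings - purely illustrative; it assumes
the bindings and a usable client keyring are present on the node,
and /shared/acl-test-dir is just a placeholder path:

import cephfs

# Connect with the node's normal Ceph client config/keyring.
fs = cephfs.LibCephFS(conffile='/etc/ceph/ceph.conf')
fs.mount()
try:
    # POSIX access ACLs are stored in the system.posix_acl_access xattr.
    acl = fs.getxattr('/shared/acl-test-dir', 'system.posix_acl_access')
    print('raw ACL xattr: %r' % acl)
finally:
    fs.unmount()
    fs.shutdown()

If the xattr bytes match on every host, the difference is almost
certainly in the client stack (e.g. mismatched ceph-fuse versions)
rather than in the stored ACL.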

Cheers,


> On 17 July 2017 at 13:27, 许雪寒 <xuxuehan@xxxxxx> wrote:
>> Hi, thanks for the quick reply:-)
>>
>> May I ask which company you are with? We're collecting CephFS usage information as the basis for our decision about whether to adopt it. Also, how are you using it? Are you running a single MDS in the so-called active-standby mode? And could you describe your usage pattern - for example, do your client nodes mount CephFS directly, or do they go through NFS (or something similar) that re-exports a CephFS-mounted directory? And are you using ceph-fuse?
>>
>> -----Original Message-----
>> From: Blair Bethwaite [mailto:blair.bethwaite@xxxxxxxxx]
>> Sent: 17 July 2017 11:14
>> To: 许雪寒
>> Cc: ceph-users@xxxxxxxxxxxxxx
>> Subject: Re: How's cephfs going?
>>
>> It works and can reasonably be called "production ready". However, in Jewel there are still some features (e.g. directory sharding, multiple active MDSes, and some security constraints) that may limit widespread usage. Also note that userspace client support in e.g. nfs-ganesha and Samba is a mixed bag across distros, and you may find yourself having to resort to re-exporting ceph-fuse or kernel mounts in order to provide those gateway services. We haven't tried Luminous CephFS yet as we're still waiting for the first full (non-RC) release to drop, but things seem very positive there...
>>
>> On 17 July 2017 at 12:59, 许雪寒 <xuxuehan@xxxxxx> wrote:
>>> Hi, everyone.
>>>
>>>
>>>
>>> We intend to use the Jewel version of CephFS; however, we don't know its status.
>>> Is it production ready in Jewel? Does it still have lots of bugs? Is
>>> it a major focus of current Ceph development? And who is using CephFS now?
>>>
>>>
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@xxxxxxxxxxxxxx
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>
>>
>>
>> --
>> Cheers,
>> ~Blairo
>
>
>
> --
> Cheers,
> ~Blairo



-- 
Cheers,
~Blairo
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



