Re: cephfs survey results

On Sat, Dec 6, 2014 at 10:40 AM, Lorieri <lorieri@xxxxxxxxx> wrote:
> Hi,
>
>
> if I have a situation where each node in a cluster writes its own
> files in cephfs, is it safe to use multiple MDS daemons?
> I mean, is the problem with using multiple MDS daemons related to nodes writing the same files?
>

It's not a problem; each file has an authoritative MDS.
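
For reference, here is a rough sketch of enabling a second active MDS on
a recent release. The filesystem name "cephfs" is only a placeholder and
the exact syntax may differ on older versions:

    # allow two active MDS ranks; the extra rank is filled from a standby
    ceph fs set cephfs max_mds 2

    # confirm that two ranks are now active
    ceph mds stat

The directory tree is then partitioned across the active ranks, so each
file still has a single authoritative MDS.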

Yan, Zheng

> thanks,
>
> -lorieri
>
>
>
> On Tue, Nov 4, 2014 at 9:47 PM, Shain Miley <smiley@xxxxxxx> wrote:
>> +1 for fsck and snapshots; being able to have snapshot backups and protect
>> against accidental deletion, etc. is something we are really looking forward
>> to.
>>
>> Thanks,
>>
>> Shain
>>
>>
>>
>> On 11/04/2014 04:02 AM, Sage Weil wrote:
>>>
>>> On Tue, 4 Nov 2014, Blair Bethwaite wrote:
>>>>
>>>> On 4 November 2014 01:50, Sage Weil <sage@xxxxxxxxxxxx> wrote:
>>>>>
>>>>> In the Ceph session at the OpenStack summit someone asked what the
>>>>> CephFS
>>>>> survey results looked like.
>>>>
>>>> Thanks Sage, that was me!
>>>>
>>>>>   Here's the link:
>>>>>
>>>>>          https://www.surveymonkey.com/results/SM-L5JV7WXL/
>>>>>
>>>>> In short, people want
>>>>>
>>>>> fsck
>>>>> multimds
>>>>> snapshots
>>>>> quotas
>>>>
>>>> TBH I'm a bit surprised by a couple of these and hope maybe you guys
>>>> will apply a certain amount of filtering on this...
>>>>
>>>> fsck and quotas were there for me, but multimds and snapshots are what
>>>> I'd consider "icing" features - they're nice to have but not on the
>>>> critical path to using cephfs instead of e.g. nfs in a production
>>>> setting. I'd have thought stuff like small file performance and
>>>> gateway support was much more relevant to uptake and
>>>> positive/pain-free UX. Interested to hear others' rationale here.
>>>
>>> Yeah, I agree, and am taking the results with a grain of salt.  I
>>> think the results are heavily influenced by the order they were
>>> originally listed (I wish surveymonkey would randomize it for each
>>> person or something).
>>>
>>> fsck is a clear #1.  Everybody wants multimds, but I think very few
>>> actually need it at this point.  We'll be merging a soft quota patch
>>> shortly, and things like performance (adding the inline data support to
>>> the kernel client, for instance) will probably compete with getting
>>> snapshots working (as part of a larger subvolume infrastructure).  That's
>>> my guess at least; for now, we're really focused on fsck and hard
>>> usability edges and haven't set priorities beyond that.
>>>
>>> We're definitely interested in hearing feedback on this strategy, and on
>>> people's experiences with Giant so far...
>>>
>>> sage
>>
>>
>> --
>> Shain Miley | Manager of Systems and Infrastructure, Digital Media |
>> smiley@xxxxxxx | 202.513.3649
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



