Re: Status of CephFS

I recently implemented a 171TB CephFS using Infernalis (it is set up so I can grow it to just under 2PB).
I tried Jewel, but it gave me grief, so I will wait on that.

I am migrating data from a Lustre filesystem and so far it seems OK. I have not put it into production yet, but will be testing/playing with it once I have the 50TB of data copied over.
The migration is slower than I would prefer, but so far so good.
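
For reference, a rough sketch of one way to parallelize that kind of copy, assuming both the Lustre source and the CephFS target are mounted on the client doing the migration (the paths and worker count below are placeholders, not the actual layout):

#!/usr/bin/env python3
"""Sketch: copy a tree from a mounted Lustre filesystem to a mounted
CephFS in parallel. All paths are placeholders."""

import os
import shutil
from concurrent.futures import ThreadPoolExecutor

SRC = "/mnt/lustre/data"   # assumed Lustre mount point
DST = "/mnt/cephfs/data"   # assumed CephFS mount point
WORKERS = 8                # tune to the network and MDS capacity

def walk_files(root):
    """Yield file paths relative to root."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            yield os.path.relpath(os.path.join(dirpath, name), root)

def copy_one(rel):
    """Copy one file, creating parent directories as needed."""
    src, dst = os.path.join(SRC, rel), os.path.join(DST, rel)
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.copy2(src, dst)  # copy data and preserve mtime/permissions
    return rel

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        for rel in pool.map(copy_one, walk_files(SRC)):
            print("copied", rel)

With lots of small files the limiting factor is usually metadata (every create goes through the MDS), so extra workers only help up to a point.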

Brian Andrus

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Vincenzo Pii
Sent: Wednesday, April 13, 2016 2:06 AM
To: Christian Balzer <chibi@xxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Status of CephFS


> On 13 Apr 2016, at 10:55, Christian Balzer <chibi@xxxxxxx> wrote:
> 
> On Wed, 13 Apr 2016 11:51:08 +0300 Oleksandr Natalenko wrote:
> 
>> 13.04.2016 11:31, Vincenzo Pii wrote:
>>> The setup would include five nodes, two monitors and three OSDs, so 
>>> data would be redundant (we would add the MDS for CephFS, of course).
>> 
>> You need an odd number of mons. In your case I would set up mons on 
>> all 5 nodes, or at least on 3 of them (a minimal ceph.conf sketch 
>> follows below).
>> 
> What Oleksandr said.
> And in your case MONs can be easily co-located with MDS unless they're 
> hopelessly underpowered.
> 
> See the VERY recent thread "Thoughts on proposed hardware configuration"
> for some more thoughts.
> 
> As for CephFS, I think fsck is upcoming in Jewel, but don't quote me 
> on that; check Google and the Ceph release page.
> 
> Christian
> -- 
> Christian Balzer        Network/Systems Engineer                
> chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
> http://www.gol.com/
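
To make the point about monitor counts concrete, here is a minimal sketch of how the mon layout might look in ceph.conf when three of the five nodes run monitors (the fsid, hostnames, and addresses are placeholders, not the actual cluster):

[global]
fsid = <cluster fsid>
# Three monitors give an odd-sized quorum: losing one still leaves a
# majority (2 of 3), whereas with two mons losing either one loses quorum.
mon initial members = node1, node2, node3
mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3

The MDS daemons for CephFS can then be co-located on the same hosts, as Christian suggests, provided the nodes have the RAM and CPU to spare.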

Hi Guys,

Thanks for the tips. I checked the thread you mentioned, but at the moment I really need to understand the implications of using CephFS today (Infernalis) and what can go wrong.

Any direct experience with CephFS?

Thanks for the help!

Vincenzo.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com