Re: CephFS Advice

On Tue, Mar 22, 2016 at 2:37 PM, Ben Archuleta <barchu02@xxxxxxx> wrote:
> Hello All,
>
> I have experience using Lustre but I am new to the Ceph world, I have some questions to the Ceph users out there.
>
> I am thinking about deploying a Ceph storage cluster that lives in multiple locations, "Building A" and "Building B". This cluster will be comprised of two Dell servers with 10TB (5 * 2TB disks) of JBOD storage each and an MDS server, over a 10Gb network. We will be using CephFS to serve multiple operating systems (Windows, Linux, OS X).

A two-node Ceph cluster is rarely wise.  If one of your servers goes
down, you're down to a single copy of the data (unless you started
with a whopping 4 replicas), so you'd be ill-advised to write
anything to the cluster while it's in a degraded state.  And if
you've only got one MDS server, your system has a single point of
failure anyway.

You should probably look again at what levels of resilience and
availability you're trying to achieve here and think about whether
what you really want might be two NFS servers backing up to each
other.
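For context, the replica counts mentioned above are pool-level settings.  A minimal sketch of the relevant ceph.conf options (these are real option names; the specific values here are just illustrative defaults for a resilient setup):

```ini
[global]
# Keep 3 copies of every object; with only two hosts and the default
# CRUSH rules (which place replicas on distinct hosts), size = 3
# cannot actually be satisfied -- another reason two nodes is too few.
osd_pool_default_size = 3
# Refuse I/O once fewer than 2 copies are available, so you never
# write to a pool that is down to a single replica.
osd_pool_default_min_size = 2
```

The same values can be changed per-pool at runtime with `ceph osd pool set <pool> size 3` and `ceph osd pool set <pool> min_size 2`.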

> My main question is how well does CephFS work in a multi-operating system environment and how well does it support NFS/CIFS?

Exporting CephFS over NFS works (either kernel NFS or nfs-ganesha),
beyond that CephFS doesn't care too much.  The Samba integration is
less advanced and less tested.  Bug reports are welcome if you try it
out.
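For the nfs-ganesha route, the export is defined with the CEPH FSAL.  A minimal sketch of a ganesha.conf export block (Export_ID and the pseudo path are arbitrary choices, not required values):

```
EXPORT
{
    Export_ID = 100;           # any unique ID for this export
    Path = "/";                # path within CephFS to export
    Pseudo = "/cephfs";        # where NFSv4 clients see it mounted
    Access_Type = RW;
    Protocols = 4;
    Transports = TCP;
    FSAL {
        Name = CEPH;           # use the libcephfs-backed FSAL
    }
}
```

The kernel NFS alternative is simpler: mount CephFS on the server and re-export the mountpoint via /etc/exports as with any local filesystem.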

> What are the chances of data corruption?

There's no simple answer to a question like that.  It's highly
unlikely to eat your data on a properly configured cluster.

> Also on average how well does CephFS handle variable size files ranging from really small to really large?

Large files just get striped into smaller objects (4MB by default).
Small files carry a higher metadata overhead per byte of data, as in
any filesystem.
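The striping arithmetic is straightforward.  A sketch (`object_count` is a hypothetical helper for illustration, not a Ceph API; 4 MB is the default CephFS object size mentioned above):

```python
OBJECT_SIZE = 4 * 1024 * 1024  # CephFS default object size: 4 MB

def object_count(file_size: int, object_size: int = OBJECT_SIZE) -> int:
    """Number of RADOS objects a file of file_size bytes is striped into."""
    # Ceiling division; even an empty or tiny file occupies one object.
    return max(1, -(-file_size // object_size))

print(object_count(10 * 1024**3))  # a 10 GB file -> 2560 objects
print(object_count(1024))          # a 1 KB file -> 1 object (plus its metadata)
```

So a large file's I/O spreads across many OSDs, while a tiny file still costs one object plus its MDS metadata, which is where the per-byte overhead comes from.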

Cheers,
John
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



