Re: Ceph Deployments

> Date: Mon, 19 Aug 2013 10:50:25 +0200
> From: Wolfgang Hennerbichler <wolfgang.hennerbichler@xxxxxxxxxxxxxxxx>
> To: <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re:  Ceph Deployments
> Message-ID: <5211DC51.4070001@xxxxxxxxxxxxxxxx>
> Content-Type: text/plain; charset="ISO-8859-1"
>
> On 08/19/2013 10:36 AM, Schmitt, Christian wrote:
> > Hello, I just have some small questions about Ceph Deployment models and
> > if this would work for us.
> > Currently the first question would be, is it possible to have a ceph
> > single node setup, where everything is on one node?
>
> yes. depends on 'everything', but it's possible (though not recommended)
> to run the mon, MDS, and OSDs on the same host, and even do virtualisation.

Currently we don't want to virtualise on this machine, since the
machine is really small; as mentioned, we focus on small to midsize
businesses. Most of the time they even need a tower server, because
they lack a proper rack. ;/
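For anyone else reading along: a single-node setup needs a couple of non-default settings so that placement and replication do not expect multiple hosts. A minimal sketch (the option names are from the Ceph documentation; the values are an assumption for a one-box setup, not a recommendation):

```ini
[global]
# Keep a single copy of each object -- there is only one host.
osd pool default size = 1
osd pool default min size = 1
# Let CRUSH place replicas on OSDs instead of distinct hosts.
osd crush chooseleaf type = 0
```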

> > Our Application, Ceph's object storage and a database?
>
> what is 'a database'?

We run PostgreSQL or MariaDB (with or without Galera, depending on the cluster size).

> > We focus on this
> > deployment model for our very small customers, who only have like 20
> > members that use our application, so the load wouldn't be very high.
> > And the next question would be, is it possible to extend the Ceph single
> > node to 3 nodes later, if they need more availability?
>
> yes.

That's good!
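Growing from one node to three mostly means raising replication and telling CRUSH to separate replicas by host; a hedged sketch of the relevant ceph.conf settings (real option names, assumed values):

```ini
[global]
# Three-way replication, one copy per host.
osd pool default size = 3
osd pool default min size = 2
# Spread replicas across distinct hosts again.
osd crush chooseleaf type = 1
```

You would also add two more monitors at that point, so the mon quorum can survive the loss of one node; that is why 3 is the usual minimum for an HA setup.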

> > Also we always want to use Shared Nothing Machines, so every service
> > would be on one machine, is this Okai for Ceph, or does Ceph really need
> > a lot of CPU/Memory/Disk Speed?
>
> ceph needs CPU / disk speed when disks fail and need to be recovered. it
> also uses some CPU when you have a lot of I/O, but generally it is
> rather lightweight.
> shared nothing is possible with ceph, but in the end this really depends
> on your application.

Hm, for disk failures we already do backups to a Dell PowerVault
RD1000, so I don't think that's a problem; we would also run Ceph on
a Dell PERC RAID controller with RAID1 enabled on the data disks.

> > Currently we make an archiving software for small customers and we want
> > to move things on the file system on a object storage.
>
> you mean from the filesystem to an object storage?

Yes, currently everything is on the filesystem, and it's really
horrible: thousands of PDFs just sitting on the filesystem. We can't
scale easily with this setup.
Currently we run on Microsoft servers, but we plan to rewrite our
whole codebase with scaling in mind, from 1 to X servers. So 1, 3, 5,
7, 9, ... X²-1 should be possible.
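As a sketch of what moving "thousands of PDFs on the filesystem" into object storage could look like: the walk-and-map half of such a migration is plain Python, and each resulting (key, path) pair would then be uploaded through the RADOS Gateway's S3-compatible API. The helper names here are hypothetical, not from any Ceph tool:

```python
import os

def object_key(path, root):
    """Map a file path under `root` to a flat object key,
    e.g. /data/2013/invoice.pdf -> 2013/invoice.pdf."""
    rel = os.path.relpath(path, root)
    return rel.replace(os.sep, "/")

def iter_pdfs(root):
    """Yield (key, path) pairs for every PDF below `root`."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(".pdf"):
                path = os.path.join(dirpath, name)
                yield object_key(path, root), path
```

With boto (the usual S3 library at the time), the upload half would then be roughly one `bucket.new_key(key).set_contents_from_filename(path)` per pair, with the connection pointed at the radosgw endpoint instead of Amazon.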

> > Currently we only
> > have customers that needs 1 machine or 3 machines. But everything should
> > work as fine on more.
>
> it would with ceph. probably :)

That's nice to hear. I was really worried that we wouldn't find a
solution that can run on 1 system and scale up to more. We first
looked at HDFS, but it isn't lightweight, and the metadata overhead
just isn't that appealing.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com