Re: Ceph Deployments

2013/8/19 Wolfgang Hennerbichler <wolfgang.hennerbichler@xxxxxxxxxxxxxxxx>:
> On 08/19/2013 12:01 PM, Schmitt, Christian wrote:
>>> Yes. It depends on 'everything', but it's possible (though not
>>> recommended) to run mon, mds, and OSDs on the same host, and even do
>>> virtualisation.
>>
>> Currently we don't want to virtualise on this machine, since the
>> machine is really small; as mentioned, we focus on small to midsize
>> businesses. Most of the time they even need a tower server because
>> they don't have a proper rack. ;/
>
> whoa :)

Yep that's awful.

>>>> Our Application, Ceph's object storage and a database?
>>>
>>> what is 'a database'?
>>
>> We run PostgreSQL or MariaDB (without or with Galera, depending on the
>> cluster size).
>
> You wouldn't want to put the data of Postgres or MariaDB on CephFS. I
> would run the native databases directly on the servers and use
> MySQL multi-master circular replication. I don't know about similar
> features of Postgres.
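
(For reference, circular replication is mostly a matter of giving each
node a unique server id and auto-increment offset and then pointing each
node at its neighbour as its master. A minimal, untested my.cnf sketch
for one node of a two-node ring, with purely illustrative names and
paths:

    # node 1 of 2; node 2 would use server-id = 2, auto_increment_offset = 2
    [mysqld]
    server-id                = 1
    log_bin                  = /var/log/mysql/mysql-bin.log
    auto_increment_increment = 2    # number of nodes in the ring
    auto_increment_offset    = 1    # unique per node
    # with more than two nodes, also set log_slave_updates = 1

followed by CHANGE MASTER TO ... / START SLAVE on each node to close the
ring.)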

No, I don't want to put a MariaDB cluster on CephFS. We want to put the
PDFs in CephFS or Ceph's object storage and hold a key or path in the
database; other things like user management will also live in the
database.
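
Roughly, a minimal sketch of that pattern against the S3-compatible
RADOS Gateway could look like the following (the endpoint, credentials,
bucket and table names are illustrative, and sqlite3 just stands in for
PostgreSQL/MariaDB):

    # Sketch: put a PDF into Ceph object storage via the RADOS Gateway's
    # S3 API and keep only the object key in the relational database.
    import sqlite3
    import uuid

    import boto3  # any S3-compatible client works against the RADOS Gateway

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.local:7480",   # illustrative endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    db = sqlite3.connect("archive.db")  # stand-in for PostgreSQL/MariaDB
    db.execute("CREATE TABLE IF NOT EXISTS documents"
               " (id TEXT PRIMARY KEY, s3_key TEXT)")

    def archive_pdf(path):
        """Upload one PDF and record its object key; return the document id."""
        doc_id = str(uuid.uuid4())
        key = "pdfs/%s.pdf" % doc_id
        with open(path, "rb") as f:
            s3.put_object(Bucket="archive", Key=key, Body=f)
        db.execute("INSERT INTO documents (id, s3_key) VALUES (?, ?)",
                   (doc_id, key))
        db.commit()
        return doc_id

The application then only ever stores and looks up the key, so the PDFs
themselves can live on however many OSD nodes the cluster happens to
have.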

>>> Shared nothing is possible with Ceph, but in the end this really depends
>>> on your application.
>>
>> Hm, when a disk fails we already do backups to a Dell PowerVault
>> RD1000, so I don't think that's a problem, and we would also run Ceph
>> on a Dell PERC RAID controller with RAID1 enabled on the data disks.
>
> this is open to discussion, and really depends on your use case.

Yeah, we definitely know that it isn't ideal to use Ceph on a single
node, but I think it's easier to design the application so that it
always depends on Ceph. It wouldn't be easy to maintain a single-node
setup without Ceph alongside a multi-node setup with Ceph.
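
For the single-node case the replication defaults can be relaxed so that
the same code path works from one host upwards; a minimal ceph.conf
sketch (fsid, hostname and address are placeholders) might look like:

    [global]
    fsid = 00000000-0000-0000-0000-000000000000
    mon initial members = node1
    mon host = 192.168.0.10
    osd pool default size = 1        ; one copy while there is only one OSD host
    osd crush chooseleaf type = 0    ; allow replicas on the same host

Once more nodes are added, the pool size can go back to 2 or 3 and the
chooseleaf type back to host, without the application noticing.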

>>>> Currently we make archiving software for small customers and we want
>>>> to move things on the file system on an object storage.
>>>
>>> you mean from the filesystem to an object storage?
>>
>> Yes, currently everything is on the filesystem and this is really
>> horrible: thousands of PDFs just sitting on the filesystem. We can't
>> scale up that easily with this setup.
>
> Got it.
>
>> Currently we run on Microsoft servers, but we plan to rewrite our
>> whole codebase with scaling in mind, from 1 to X servers, so any odd
>> number of nodes (1, 3, 5, 7, 9, ...) should be possible.
>
> cool.
>
>>>> Currently we only
>>>> have customers that need 1 machine or 3 machines, but everything should
>>>> work just as well on more.
>>>
>>> it would with ceph. probably :)
>>
>> That's nice to hear. I was really scared that we wouldn't find a
>> solution that can run on 1 system and scale up to even more. We first
>> looked at HDFS, but it isn't lightweight.
>
> not only that, HDFS also has a single point of failure.
>
>> And the overhead of Metadata etc.
>> just isn't that cool.
>
> :)

Yeah, that's why I came to Ceph; I think that's probably the way we
want to go. Thank you very much for your help. It's good to know that I
have a solution for the things that are badly designed in our current
setup.





