Re: Ceph Deployments

Wolfgang is correct. You do not need VMs at all if you are setting up
Ceph Object Storage. It's just Apache, FastCGI, and the radosgw daemon
interacting with the Ceph Storage Cluster. You can do that on one box,
no problem. It's still better to have more drives for performance,
though.
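
If it helps, once the gateway is up you can exercise it end-to-end
through its S3-compatible API with boto. This is just a sketch: the
gateway host, access key, secret key, and bucket name below are
placeholders for whatever you create with radosgw-admin.

    import boto
    import boto.s3.connection

    # Placeholder credentials created with radosgw-admin, and the host
    # where Apache/radosgw is listening.
    conn = boto.connect_s3(
        aws_access_key_id='YOUR_ACCESS_KEY',
        aws_secret_access_key='YOUR_SECRET_KEY',
        host='gateway.example.com',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    # Create a bucket and round-trip one object through the gateway.
    bucket = conn.create_bucket('test-bucket')
    key = bucket.new_key('hello.txt')
    key.set_contents_from_string('hello world')
    print(key.get_contents_as_string())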

On Mon, Aug 19, 2013 at 12:08 PM, Wolfgang Hennerbichler
<wolfgang.hennerbichler@xxxxxxxxxxxxxxxx> wrote:
> What you are trying to do will work: you will not need any kernel-related code for object storage, so a one-node setup will work for you.
>
> --
> Sent from my mobile device
>
> On 19.08.2013, at 20:29, "Schmitt, Christian" <c.schmitt@xxxxxxxxxxxxxx> wrote:
>
>> That sounds bad for me.
>> As mentioned, one of the setups we are considering is a one-node setup, for production.
>> Not every customer can afford hardware worth more than ~4000 Euro.
>> Small business users don't need the biggest hardware, but I don't
>> think it's a good idea to maintain one version that uses the filesystem
>> and another version that uses Ceph.
>>
>> We would prefer an object storage for our files. It should work like
>> the object storage of App Engine, scaling from 1 to X servers.
>>
>>
>> 2013/8/19 John Wilkins <john.wilkins@xxxxxxxxxxx>:
>>> Actually, I wrote the Quick Start guides so that you could do exactly
>>> what you are trying to do, but mostly from a "kick the tires"
>>> perspective so that people can learn to use Ceph without imposing
>>> $100k worth of hardware as a requirement. See
>>> http://ceph.com/docs/master/start/quick-ceph-deploy/
>>>
>>> I even added a section so that you could do it on one disk--e.g., on
>>> your laptop.  http://ceph.com/docs/master/start/quick-ceph-deploy/#multiple-osds-on-the-os-disk-demo-only
>>>
>>> It says "demo only" because you won't get great performance out of a
>>> single node: monitors, OSDs, and journals all writing to the same
>>> disk, plus fsync issues, make performance sub-optimal.
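>>>
>>> Once it's up, an easy way to sanity-check the cluster without any
>>> kernel mounts is to read and write an object with the librados Python
>>> bindings. A rough sketch; it assumes /etc/ceph/ceph.conf is readable
>>> and that one of the default pools (e.g., 'data') exists:
>>>
>>>     import rados
>>>
>>>     # Connect to the cluster using the local ceph.conf and keyring.
>>>     cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
>>>     cluster.connect()
>>>
>>>     # 'data' is one of the default pools; any pool you created works.
>>>     ioctx = cluster.open_ioctx('data')
>>>     ioctx.write_full('test-object', b'hello ceph')
>>>     print(ioctx.read('test-object'))
>>>
>>>     ioctx.close()
>>>     cluster.shutdown()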
>>>
>>> For better performance, you should consider a separate drive for each
>>> Ceph OSD Daemon if you can, and potentially a separate SSD drive
>>> partitioned for journals. If you can separate the OS and monitor
>>> drives from the OSD drives, that's better too.
>>>
>>> I wrote it as a two-node quick start, because you cannot kernel mount
>>> the Ceph Filesystem or Ceph Block Devices on the same host as the Ceph
>>> Storage Cluster. It's a kernel issue, not a Ceph issue. However, you
>>> can get around this too. If your machine has enough RAM and CPU, you
>>> can also install virtual machines and kernel mount cephfs and block
>>> devices in the virtual machines with no kernel issues. You don't need
>>> to use VMs at all for librbd. So you can install QEMU/KVM, libvirt and
>>> OpenStack all on the same host too. It's just not an ideal situation
>>> from a performance or high-availability perspective.
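>>>
>>> To illustrate the librbd point: you can create and write an RBD image
>>> entirely in userspace with the rados/rbd Python bindings, with no
>>> kernel client involved. A minimal sketch, assuming the default 'rbd'
>>> pool, a readable /etc/ceph/ceph.conf, and a made-up image name:
>>>
>>>     import rados
>>>     import rbd
>>>
>>>     cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
>>>     cluster.connect()
>>>     ioctx = cluster.open_ioctx('rbd')
>>>
>>>     # Create a 1 GB image and write to it, all through librbd.
>>>     rbd.RBD().create(ioctx, 'test-image', 1024 ** 3)
>>>     image = rbd.Image(ioctx, 'test-image')
>>>     image.write(b'hello from userspace', 0)
>>>     image.close()
>>>
>>>     ioctx.close()
>>>     cluster.shutdown()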
>>>
>>>
>>>
>>> On Mon, Aug 19, 2013 at 3:12 AM, Schmitt, Christian
>>> <c.schmitt@xxxxxxxxxxxxxx> wrote:
>>>> 2013/8/19 Wolfgang Hennerbichler <wolfgang.hennerbichler@xxxxxxxxxxxxxxxx>:
>>>>> On 08/19/2013 12:01 PM, Schmitt, Christian wrote:
>>>>>>> Yes. It depends on 'everything', but it's possible (though not recommended)
>>>>>>> to run mon, mds, and OSDs on the same host, and even do virtualisation.
>>>>>>
>>>>>> Currently we don't want to virtualise on this machine since the
>>>>>> machine is really small; as mentioned, we focus on small to midsize
>>>>>> businesses. Most of the time they even need a tower server due to
>>>>>> the lack of a proper rack. ;/
>>>>>
>>>>> whoa :)
>>>>
>>>> Yep that's awful.
>>>>
>>>>>>>> Our Application, Ceph's object storage and a database?
>>>>>>>
>>>>>>> what is 'a database'?
>>>>>>
>>>>>> We run PostgreSQL or MariaDB (with or without Galera, depending on the cluster size).
>>>>>
>>>>> You wouldn't want to put the data of Postgres or MariaDB on CephFS.
>>>>> I would run the databases natively on the servers and use MySQL
>>>>> multi-master circular replication. I don't know whether Postgres has
>>>>> similar features.
>>>>
>>>> No, I don't want to put a MariaDB cluster on CephFS. We want to put
>>>> PDFs in CephFS or Ceph's object storage and hold a key or path in the
>>>> database; other things like user management will also live in the
>>>> database.
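>>>>
>>>> Roughly what I have in mind (just a sketch; the gateway host,
>>>> credentials, bucket, and table name are all made up): put the PDF in
>>>> the object store and keep only its key in the database.
>>>>
>>>>     import sqlite3
>>>>
>>>>     import boto
>>>>     import boto.s3.connection
>>>>
>>>>     # Placeholder radosgw endpoint and credentials.
>>>>     conn = boto.connect_s3(
>>>>         aws_access_key_id='ACCESS_KEY',
>>>>         aws_secret_access_key='SECRET_KEY',
>>>>         host='radosgw.internal',
>>>>         is_secure=False,
>>>>         calling_format=boto.s3.connection.OrdinaryCallingFormat(),
>>>>     )
>>>>     bucket = conn.create_bucket('archive')
>>>>
>>>>     db = sqlite3.connect('archive.db')
>>>>     db.execute("CREATE TABLE IF NOT EXISTS documents (object_key TEXT)")
>>>>
>>>>     def archive_pdf(object_key, pdf_bytes):
>>>>         # The document itself goes into the object store ...
>>>>         bucket.new_key(object_key).set_contents_from_string(pdf_bytes)
>>>>         # ... and only its key (plus metadata) into the database.
>>>>         db.execute("INSERT INTO documents (object_key) VALUES (?)",
>>>>                    (object_key,))
>>>>         db.commit()
>>>>
>>>>     archive_pdf('invoices/2013/0815.pdf', b'%PDF-1.4 dummy content')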
>>>>
>>>>>>> shared nothing is possible with ceph, but in the end this really depends
>>>>>>> on your application.
>>>>>>
>>>>>> Hm, if a disk fails we already do backups to a Dell PowerVault
>>>>>> RD1000, so I don't think that's a problem, and we would also run
>>>>>> Ceph on a Dell PERC RAID controller with RAID1 enabled on the data
>>>>>> disk.
>>>>>
>>>>> this is open to discussion, and really depends on your use case.
>>>>
>>>> Yeah, we definitely know that it isn't good to use Ceph on a single
>>>> node, but I think it's easier to design the application so that it
>>>> depends on Ceph. It wouldn't be easy to maintain a single-node version
>>>> without Ceph and a multi-node version with Ceph.
>>>>
>>>>>>>> Currently we make archiving software for small customers and we want
>>>>>>>> to move things on the file system to an object storage.
>>>>>>>
>>>>>>> you mean from the filesystem to an object storage?
>>>>>>
>>>>>> Yes, currently everything is on the filesystem and this is really
>>>>>> horrible: thousands of PDFs just sitting on the filesystem. We can't
>>>>>> scale up easily with this setup.
>>>>>
>>>>> Got it.
>>>>>
>>>>>> Currently we run on Microsoft Servers, but we plan to rewrite our
>>>>>> whole codebase with scaling in mind, from 1 to X Servers. So 1, 3, 5,
>>>>>> 7, 9, ... X²-1 should be possible.
>>>>>
>>>>> cool.
>>>>>
>>>>>>>> Currently we only
>>>>>>>> have customers that need 1 machine or 3 machines. But everything should
>>>>>>>> work just as well on more.
>>>>>>>
>>>>>>> it would with ceph. probably :)
>>>>>>
>>>>>> That's nice to hear. I was really worried that we wouldn't find a
>>>>>> solution that can run on 1 system and scale up to more. We first
>>>>>> looked at HDFS, but it isn't lightweight.
>>>>>
>>>>> not only that, HDFS also has a single point of failure.
>>>>>
>>>>>> And the metadata overhead etc.
>>>>>> just isn't that cool.
>>>>>
>>>>> :)
>>>>
>>>> Yeah, that's why I came to Ceph. I think that's probably the way we
>>>> want to go. Thank you very much for your help. It's good to know that
>>>> there is a solution for the things that are badly designed in our
>>>> current system.
>>>>



-- 
John Wilkins
Senior Technical Writer
Inktank
john.wilkins@xxxxxxxxxxx
(415) 425-9599
http://inktank.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




