Re: Starting a cluster with one OSD node

>>>>> On Friday, May 13, 2016, Mike Jacobacci <mikej@xxxxxxxxxx> wrote:
>>>>> Hello,
>>>>>
>>>>> I have a quick and probably dumb question… We would like to use Ceph
>>>>> for our storage; I was thinking of a cluster with 3 monitor and OSD
>>>>> nodes.  I was wondering if it would be a bad idea to start a Ceph
>>>>> cluster with just one OSD node (10 OSDs, 2 SSDs), then add more nodes
>>>>> as our budget allows?  We want to spread the purchases of the OSD
>>>>> nodes out over a month or two, but I would like to start moving data
>>>>> over ASAP.
>>>>
>>>> Hi Mike,
>>>>
>>>> Production or test?  I would strongly recommend against a single OSD
>>>> node in production.  Not only do you risk hangs and data loss from,
>>>> e.g., a filesystem or kernel issue, but the data movement as you add
>>>> nodes will also introduce a good deal of overhead.
>> On May 14, 2016, at 9:56 AM, Christian Balzer <chibi@xxxxxxx> wrote:
>>
>> On Sat, 14 May 2016 09:46:23 -0700 Mike Jacobacci wrote:
>>
>>
>> Hello,
>>
>>> Hi Alex,
>>>
>>> Thank you for your response! Yes, this is for a production
>>> environment... Do you think the risk of data loss with a single node
>>> would be any different than with an appliance or a Linux box with
>>> RAID/ZFS?
>> Depends.
>>
>> Ceph by default distributes 3 replicas amongst the storage nodes, giving
>> you fault tolerance along the lines of RAID6.
>> So (again by default) the smallest cluster you want to start with is 3
>> nodes.
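>>
>> For reference, those defaults map to these ceph.conf options (values
>> shown are the effective ones for a fresh cluster):
>>
>>     [global]
>>     osd pool default size = 3      # replicas kept per object
>>     osd pool default min size = 2  # minimum replicas needed to serve IO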
>>
>> Of course, you could modify the CRUSH rules to place the 3 replicas
>> across OSDs rather than nodes.
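>>
>> A minimal sketch of such a rule (rule name and ruleset number are just
>> examples; classic crushmap syntax as of Hammer/Jewel):
>>
>>     rule replicated_osd {
>>             ruleset 1
>>             type replicated
>>             min_size 1
>>             max_size 10
>>             step take default
>>             step chooseleaf firstn 0 type osd
>>             step emit
>>     }
>>
>> You can get the same effect at cluster-creation time by setting
>> "osd crush chooseleaf type = 0" in ceph.conf before the first monitor
>> starts, so the default rule picks OSDs instead of hosts.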
>>
>> However, that only leaves you with roughly 3 disks' worth of capacity in
>> your case, and you still face the data movement Alex mentioned when
>> adding more nodes AND modifying the CRUSH rules.
>>
>> Lastly, I personally wouldn't deploy anything that is a single point of
>> failure (SPoF) in production.
>>
>> Christian
On Sat, May 14, 2016 at 1:08 PM, Mike Jacobacci <mikej@xxxxxxxxxx> wrote:
> Hi Christian,
>
> Thank you, I know what I am asking isn't a good idea... I am just trying to avoid waiting for all three nodes before I begin virtualizing our infrastructure.
>
> Again thanks for the responses!

Hi Mike,

I generally do not build production storage environments on one node,
although my group has built really good test/training environments with
a single box.  What we do there is forgo Ceph altogether, install a
hardware RAID with your favorite RAID HBA vendor (LSI/Avago, Areca,
Adaptec, etc.), and export it as iSCSI using SCST.  This setup has
worked really well so far for its intended use.
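
For what it's worth, the SCST side of that can be a very small
/etc/scst.conf - the sketch below assumes the RAID volume shows up as
/dev/sdb, and the device name and IQN are placeholders:

    HANDLER vdisk_blockio {
            DEVICE raidvol {
                    filename /dev/sdb
            }
    }

    TARGET_DRIVER iscsi {
            enabled 1
            TARGET iqn.2016-05.local.example:raidvol {
                    enabled 1
                    LUN 0 raidvol
            }
    }

Load it with "scstadmin -config /etc/scst.conf" after the scst and
iscsi-scst modules are loaded.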

We tested a two-box setup with LSI/Avago SyncroCS, which also works
well, and there are some good howtos on the web for it - but
unfortunately it seems SyncroCS has been put on ice by Avago.

Regarding building a one-node setup in Ceph and then expanding it: I
would not do this.  It is easier to do things right up front than to
redo them later.  What you may want to do is use this one node to become
familiar with the Ceph architecture and do a dry run - however, I would
wipe it clean and recreate the environment rather than promote it to
production.  Silly operator errors have come up in the past, like
leaving OSD-level redundancy in place instead of setting node-level
redundancy.  Also, big data migrations are hard on clients (you can see
IO timeouts), as discussed often on this list.  So YMMV, but I
personally would not rush.
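
If you do end up expanding a live cluster anyway, two things help: check
the failure domain before you trust the pool, and throttle recovery so
clients survive the rebalance.  Roughly (option names as of
Hammer/Jewel; tune the values to taste):

    # verify the rule separates replicas across hosts, not just OSDs
    ceph osd crush rule dump
    ceph osd pool get rbd size

    # soften the impact of backfill/recovery on client IO
    ceph tell osd.* injectargs '--osd-max-backfills 1'
    ceph tell osd.* injectargs '--osd-recovery-max-active 1'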

Best regards,
Alex
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



