Fwd: Is it still not recommended to place rbd devices on nodes where OSD daemons are located?

Hello

Thanks for your attention, and sorry for my bad English!

In my draft architecture, I want to use the same hardware for the OSDs and
the rbd devices. In other words, I have 5 nodes, each with 5 TB of disk
space on software RAID, and I want to build a Ceph cluster on them. All 5
nodes will run an OSD, and on 3 of those same nodes I will start mons for
quorum. On the same 5 nodes I will also run a cluster stack (Pacemaker +
Corosync) with the following configuration:

node ceph-precie-64-01
node ceph-precie-64-02
node ceph-precie-64-03
node ceph-precie-64-04
node ceph-precie-64-05
primitive samba_fs ocf:heartbeat:Filesystem \
        params device="-U cb4d3dda-92e9-4bd8-9fbc-
2940c096e8ec" directory="/mnt" fstype="ext4"
primitive samba_rbd ocf:ceph:rbd \
        params name="samba"
group samba samba_rbd samba_fs
property $id="cib-bootstrap-options" \
        dc-version="1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="3" \
        stonith-enabled="false" \
        no-quorum-policy="stop" \
        last-lrm-refresh="1352806660"
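
To make clearer what this is meant to do: when the group starts on a node,
the two resources do roughly the following (a rough sketch done by hand; I
assume the image "samba" already exists in the default "rbd" pool and the
client keyring is in place, and the ocf:ceph:rbd agent may run slightly
different commands):

# map the image through the kernel rbd driver (what ocf:ceph:rbd does on start)
rbd map samba --pool rbd
# mount the ext4 filesystem by UUID on /mnt (what ocf:heartbeat:Filesystem does on start)
mount -U cb4d3dda-92e9-4bd8-9fbc-2940c096e8ec /mnt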


This way the rbd block device can be fault tolerant. In my case, using a
separate machine for rbd is not an option :-( (using several machines only
to make rbd fault tolerant costs too much).
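
For comparison, the separate-client approach Dan describes in the quoted
message below would look roughly like this (a rough sketch, assuming the
same "samba" image with an ext4 filesystem on it, mapped on a host that
runs no OSD):

# on the separate client host: map and mount the image
rbd map samba --pool rbd        # appears as /dev/rbd0 (or /dev/rbd/rbd/samba)
mount /dev/rbd0 /mnt
# later, grow the image (size in MB) and then the filesystem
rbd resize --size 10240 samba
resize2fs /dev/rbd0             # older kernels may need an unmap/remap first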

2012/11/22 Dan Mick <dan.mick@xxxxxxxxxxx>:
> Still not certain I'm understanding *just* what you mean, but I'll point out
> that you can set up a cluster with rbd images, mount them from a separate
> non-virtualized host with kernel rbd, and expand those images and take
> advantage of the newly-available space on the separate host, just as though
> you were expanding a RAID device.  Maybe that fits your use case, Ruslan?
>
>
> On 11/21/2012 12:05 PM, ruslan usifov wrote:
>>
>> Yes, I mean exactly this. It's a great pity :-( Maybe there is some Ceph
>> equivalent that would solve my problem?
>>
>> 2012/11/21 Gregory Farnum <greg@xxxxxxxxxxx>:
>>>
>>> On Wed, Nov 21, 2012 at 4:33 AM, ruslan usifov <ruslan.usifov@xxxxxxxxx>
>>> wrote:
>>>>
>>>> So, is it not possible to use Ceph as a scalable block device without
>>>> virtualization?
>>>
>>>
>>> I'm not sure I understand, but if you're trying to take a bunch of
>>> compute nodes and glue their disks together, no, that's not a
>>> supported use case at this time. There are a number of deadlock issues
>>> caused by this sort of loopback; it's the same reason you shouldn't
>>> mount NFS on the server host.
>>> We may in the future manage to release an rbd-fuse client that you can
>>> use to do this with a little less pain, but it's not ready at this
>>> point.
>>> -Greg
>>
>

