Re: Modifying Crush map

Thanks a lot, Christian!!

Regards,
Daleep Singh Bais

On 04/11/2016 11:04 AM, Christian Balzer wrote:
> Hello,
>
> If you mean 3x replication in total over 2 racks/failure domains, that has
> of course come up several times in the past, for example:
> ---
> https://www.mail-archive.com/ceph-users@xxxxxxxxxxxxxx/msg19140.html
> http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2014-September/043039.html
> ---
>
> I don't think you can do this via the CLI; the "simple" in
> "create-simple" is a clear hint.
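>
> For reference, a minimal sketch of what such a rule could look like in
> a decompiled CRUSH map (the rule name, ruleset number and the "default"
> root are assumptions, adjust them to your map):
>
> rule replicated_2racks {
>         ruleset 1
>         type replicated
>         min_size 1
>         max_size 10
>         # pick both racks, then up to 2 OSDs (via hosts) in each
>         step take default
>         step choose firstn 0 type rack
>         step chooseleaf firstn 2 type host
>         step emit
> }
>
> With 2 racks and a pool size of 3 this selects both racks and up to
> two hosts in each, so you end up with 2 copies in one rack and 1 in
> the other.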
>
> If you actually mean 3x replication PER RACK, so 6x replication in
> total, that is just a variation of the above, using a pool with 6
> replicas.
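>
> Something like this, assuming a custom rule as sketched above (pool
> name and PG count are only examples):
>
> # ceph osd pool create testpool 64 replicated replicated_2racks
> # ceph osd pool set testpool size 6
>
> For 3 copies per rack the rule would then use "step chooseleaf firstn
> 3 type host" instead of "firstn 2".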
>
> Christian
>
> On Mon, 11 Apr 2016 10:06:14 +0530 Daleep Singh Bais wrote:
>
>> Hi All,
>>
>> I am trying to modify a crush map to accommodate replication to two
>> separate racks. I am able to get that done using
>>
>> # ceph osd crush rule create-simple mytest mycr rack firstn
>>
>> To create the replicated pool using modified crush, I use
>>
>> # ceph osd pool create testpool 64 replicated mytest
>> # ceph osd pool set testpool size 2
>>
>> However, this places just a single copy of the data in each rack,
>> i.e. using a single OSD. How can I further attain 3x replication of
>> the same data in separate racks?
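>>
>> (For what it's worth, which OSDs an object actually lands on can be
>> checked with e.g.
>>
>> # ceph osd map testpool <object-name>
>>
>> which prints the up and acting OSD sets for that object;
>> <object-name> is just a placeholder.)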
>>
>> I want to do this using commands only. (I don't want to modify the
>> CRUSH map via the decompile and recompile path.)
>>
>> Any guidance in this regard will be helpful.
>>
>>
>> Thanks.
>>
>> Daleep Singh Bais
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


