Having a live disk and a DR disk in each node doesn't work. Ceph distributes its data in such a way that you cannot expect the two drives in a node to be copies of each other. I'm realizing now that you were thinking of RBD mirroring as something that would keep a second copy of an OSD disk on another disk. RBD mirroring is a very different feature from that and has nothing to do with physical hardware; it replicates RBD images from a pool in one cluster to a pool in another cluster.
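For context, RBD mirroring is configured per pool or per image between two separate clusters, each running an rbd-mirror daemon. A minimal sketch of what enabling it looks like, assuming a pool named "mypool" and a peer cluster nicknamed "site-b" (both names are just placeholders):

    # Run on each cluster; enables mirroring for every journaled image in the pool
    rbd mirror pool enable mypool pool

    # Register the other cluster as a peer (client and cluster names are illustrative)
    rbd mirror pool peer add mypool client.mirror@site-b

Note there is nothing in there about disks or OSDs; the mirroring happens at the RBD image level, on top of whatever placement each cluster is already doing.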
Ceph gets its redundancy by keeping multiple copies of your data and placing each copy as far away from the others as you tell it to. The default is 3 copies, each kept on a different server. You can raise the number of copies, or change the failure domain so that each copy lands in a different rack from the others. You can also go the other way and only require that copies live on separate disks, allowing multiple copies on the same host... just whatever you need.
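As a rough sketch of how you might express those placement choices on a Luminous-or-later cluster (pool and rule names here are just examples):

    # Default behaviour: 3 copies of each object
    ceph osd pool set mypool size 3

    # Spread copies across racks instead of hosts
    ceph osd crush rule create-replicated replicated_rack default rack
    ceph osd pool set mypool crush_rule replicated_rack

    # Or only require separate OSDs, so copies may share a host (handy for single-node labs)
    ceph osd crush rule create-replicated replicated_osd default osd
    ceph osd pool set mypool crush_rule replicated_osd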
There is no such thing as a DR copy of a disk in Ceph. Ceph is built to recover from its own disasters by keeping enough copies of your data, separated from each other across different hosts. Recovering from the loss of a data drive in Ceph is routine and handled very well. Some people use RAID 1 underneath their OSDs to avoid the need to rebalance after a failure, but I see that as a waste of space (at least for all of my use cases).
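Replacing a failed drive is usually just a matter of letting Ceph re-replicate and then swapping the OSD; something along these lines (osd.7 is a made-up ID):

    # Mark the dead OSD out so its placement groups recover onto the remaining disks
    ceph osd out osd.7

    # After recovery completes, remove it entirely (Luminous and later)
    ceph osd purge osd.7 --yes-i-really-mean-it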
On Mon, Aug 14, 2017 at 5:15 PM Oscar Segarra <oscar.segarra@xxxxxxxxx> wrote:
Hi,

In my test/lab I'm working with a single node, just for testing functionality... once it works, I will expand this configuration up to 10 nodes.

I'd like every host to have one drive for live data and another for DR and backup to tape.

Thanks a lot.

2017-08-14 22:27 GMT+02:00 Jason Dillaman <jdillama@xxxxxxxxxx>:

Personally, I didn't quite understand your use-case. You only have a
single host and two drives (one for live data and the other for DR)?
On Mon, Aug 14, 2017 at 4:09 PM, Oscar Segarra <oscar.segarra@xxxxxxxxx> wrote:
> Hi,
>
> Has anybody been able to work with mirroring?
>
> Does the scenario I'm proposing make any sense?
>
> Thanks a lot.
>
> 2017-08-08 20:05 GMT+02:00 Oscar Segarra <oscar.segarra@xxxxxxxxx>:
>>
>> Hi,
>>
>> I'd like to use the mirroring feature
>>
>> http://docs.ceph.com/docs/master/rbd/rbd-mirroring/
>>
>> In my environment I have just one host (at the moment for testing purposes
>> before production deployment).
>>
>> I want to set things up as follows:
>>
>> /dev/sdb for standard operation
>> /dev/sdc for mirror
>>
>> Of course, I'd like to create two clusters, each with a pool called
>> "mypool", and enable mirroring between them.
>>
>> The final idea is to use CephFS to export my VMs to tape in a consistent
>> state without affecting the production OSD /dev/sdb.
>>
>> http://docs.ceph.com/docs/master/cephfs/createfs/
>>
>> Has anybody tried something similar? Can anybody share their experience?
>>
>> Thanks a lot.
>>
>>
>>
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
Jason