Re: How to sync data on different server but with the same image

Eric,
If you export the rbd device directly via your iSCSI target driver, it
should work.
I verified this with the SCST target, but the LIO target should work as well.
As Wido said, you don't want to mount the same rbd device on multiple
clients without a shared filesystem (and in that case, you might as
well use cephfs), but exporting rbd over iSCSI works.
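For concreteness, with the kernel rbd client and the LIO target's targetcli shell, the export could be sketched roughly as follows. This is a minimal sketch, not a tested recipe: the image name `foo`, the IQN, and the backstore name are placeholders, and the exact targetcli paths vary by version.

```shell
# On the iSCSI gateway host: create and map the RBD image.
rbd create foo --size 4096        # size in MB, i.e. a 4 GB image
rbd map foo                       # appears as e.g. /dev/rbd0

# Export the mapped block device through the LIO target.
targetcli /backstores/block create name=rbd-foo dev=/dev/rbd0
targetcli /iscsi create iqn.2011-12.example.com:rbd-foo
targetcli /iscsi/iqn.2011-12.example.com:rbd-foo/tpg1/luns \
    create /backstores/block/rbd-foo
```

Initiators then log in to the target like any other iSCSI LUN; as noted above, only one of them should mount the filesystem at a time unless it is a cluster filesystem.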


On Tue, Dec 6, 2011 at 1:38 AM,  <Eric_YH_Chen@xxxxxxxxxxx> wrote:
> Hi, Wido:
>
>   This is a preliminary experiment before implementing iSCSI high-availability multipath.
>        http://ceph.newdream.net/wiki/ISCSI
>
>   Therefore, we use Ceph as an RBD block device rather than as a filesystem.
>
>
> -----Original Message-----
> From: Wido den Hollander [mailto:wido@xxxxxxxxx]
> Sent: Tuesday, December 06, 2011 5:33 PM
> To: Eric YH Chen/WHQ/Wistron; ceph-devel@xxxxxxxxxxxxxxx
> Cc: Chris YT Huang/WHQ/Wistron
> Subject: Re: How to sync data on different server but with the same image
>
> Hi,
>
> ----- Original message -----
>> Dear All:
>>
>>       I map the same rbd image to an rbd device on two different servers.
>>
>>     For example:
>>           1. create rbd image named foo
>>           2. map foo to /dev/rbd0 on server A,   mount /dev/rbd0 to /mnt
>>           3. map foo to /dev/rbd0 on server B,   mount /dev/rbd0 to /mnt
>>
>>       If I add a file to /mnt via server A, I expect to see the same
>> file on server B.
>>       However, I can't see it until I umount /mnt on server A and re-mount
>> /mnt on server B.
>
> You'd have to use a cluster filesystem like GFS or OCFS2 to make this work.
>
> But why not use Ceph as a filesystem instead of RBD? That seems to do what you want.
>
> Wido
>
>>
>>       Do you have any comments on this scenario? How could I force the
>> data to synchronize?
>>
>>       Actually, I want to implement the iSCSI high-availability multipath
>> described at http://ceph.newdream.net/wiki/ISCSI.
>>       Therefore, I tried this small experiment first, but it failed. Would you
>> please give me some suggestions before I start implementing it? Thanks!
>>

