Re: Ceph RBD question about possibilities

Hello,

Yes, I misunderstood a few things, I think. Thanks for your help!

So the situation is this: we have a yum repo with RPM packages, and we want to store these RPMs in Ceph. We have three main nodes that are able to install the other nodes, so we have to share these RPM packages between the three hosts. I checked CephFS and it would be the best solution for us, but it is in beta, and beta products are not allowed for us. This is why I am trying to find another, Ceph-based solution.
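For reference, since both CephFS and RBD sit on top of RADOS, one possible Ceph-based approach without CephFS would be to store the RPMs directly as RADOS objects via librados. Below is only a minimal Python sketch of that idea; the pool name 'rpms' and the file names are placeholders, and it assumes the python-rados bindings are installed and the pool already exists.

# Minimal sketch: store an RPM as a RADOS object and read it back on another host.
# Assumptions: python-rados is installed and a pool named 'rpms' already exists.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rpms')          # hypothetical pool name
    try:
        # Upload: read the package from local disk, write it as a single object.
        with open('example-1.0-1.x86_64.rpm', 'rb') as f:
            ioctx.write_full('example-1.0-1.x86_64.rpm', f.read())

        # Download (on any of the three hosts): stat for the size, then read it back.
        size, _ = ioctx.stat('example-1.0-1.x86_64.rpm')
        data = ioctx.read('example-1.0-1.x86_64.rpm', length=size)
        with open('/tmp/example-1.0-1.x86_64.rpm', 'wb') as out:
            out.write(data)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

After downloading, a local repo directory could be rebuilt with createrepo as usual; whether this fits depends on how the install process consumes the repo.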



2016-01-28 12:46 GMT+01:00 Jan Schermer <jan@xxxxxxxxxxx>:
This is somewhat confusing.

CephFS is a shared filesystem: you mount it on N hosts and they can all access the data simultaneously.
RBD is a block device; it can be accessed from more than one host, BUT then you need to use a cluster-aware filesystem (such as GFS2 or OCFS) on top of it.

Both CephFS and RBD use RADOS as a backend, which is responsible for data placement, high-availability and so on.

If you explain your scenario in more detail, we could suggest some options: do you really need the data to be accessible on multiple servers, or is a (short) outage acceptable when one server goes down? What type of data do you need to share, and how will it be accessed?

Jan

> On 28 Jan 2016, at 11:06, Sándor Szombat <szombat.sandor@xxxxxxxxx> wrote:
>
> Hello all!
>
> I checked CephFS, but unfortunately it is in beta now, so I started looking at RBD. Is it possible to create an image in a pool, map it as a block device (for example /dev/rbd0), format it like a regular disk, and mount it on two hosts? I tried this and it works, but after mounting /dev/rbd0 on the two hosts and putting files into the mounted folders, the contents are not refreshed automatically between the hosts.
> So the main question: would this be a workable solution?
> (The task: we have three main nodes that install the other nodes with Ansible, and we want to store our RPMs in Ceph if possible. This is necessary for high availability.)
>
> Thanks for your help!
>


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
