Re: Basic Ceph Questions

On Wed, Nov 5, 2014 at 11:57 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
On 11/05/2014 11:03 PM, Lindsay Mathieson wrote:

>
> - Geo Replication - that's done via federated gateways? Looks complicated :(
>   * The remote slave, would it be read-only?
>

That is only for the RADOS Gateway. Ceph itself (RADOS) does not support
Geo Replication.



The 3 services built on top of RADOS support backups, but RADOS itself does not.  For RBD, you can use snapshot diffs and ship them offsite (see various threads on the ML); a sketch of that approach is below.  For RadosGW, there is Federation.  For CephFS, you can use traditional POSIX filesystem backup tools (rsync, etc.).
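A minimal sketch of the RBD snapshot-diff approach (pool, image, and snapshot names here are made up, and it assumes the remote cluster already has a matching base image from an initial full "rbd export"/"rbd import"):

    # take a new snapshot of the image on the source cluster
    rbd snap create rbd/vm-disk@backup-20141106

    # export only the changes since the previous snapshot, and pipe the
    # diff straight into the matching image on the remote cluster
    rbd export-diff --from-snap backup-20141105 rbd/vm-disk@backup-20141106 - \
        | ssh backup-host rbd import-diff - rbd/vm-disk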

 

> - Disaster strikes; apart from DR backups, how easy is it to recover your data
> off Ceph OSDs? One of the things I liked about Gluster was that if I totally
> screwed up the Gluster masters, I could always just copy the data off the
> filesystem. Not so much with Ceph.
>

It's a bit harder with Ceph. It is doable, but it is something that
would take a lot of time.

In practice, not really.  Out of curiosity, I attempted this for some RadosGW objects.  It was easy when there was a single object smaller than 4MB.  It very quickly became complicated with a few larger objects.  You'd need a very deep understanding of the service to track all of that information down with the cluster offline (a rough idea of what's involved is sketched below).

It's definitely possible, just not practical.
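To give a rough idea of what's involved: on a FileStore OSD, each RADOS object is stored as a regular file under the OSD's data directory, so recovery starts with hunting for the pieces on every OSD.  The paths here are the defaults, the name fragment is made up, and large RadosGW objects are striped across multiple 4MB RADOS objects that all have to be found, ordered, and reassembled by hand:

    # on each stopped OSD, look for the files backing a given RadosGW object
    find /var/lib/ceph/osd/ceph-0/current -type f -name '*mybucket*'

    # each match is one stripe of the object; the head and tail pieces
    # then have to be identified, ordered, and concatenated manually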

 

>
> - Am I abusing Ceph? :) I just have a small 3-node VM server cluster with 20
> Windows VMs, some servers, some VDI. The shared store is a QNAP NAS which is
> struggling. I'm using Ceph for
> - Shared Storage
> - Replication/Redundancy
> - Improved performance
>

I think that 3 nodes is not sufficient; Ceph really starts performing
when you go >10 nodes (excluding monitors).

If it meets your needs, then it's working.  :-)

You're going to spend a lot more time managing the 3-node Ceph cluster than you spent on the QNAP.  If it doesn't make sense for you to spend a lot of time dealing with storage, then a single shared store with more IOPS would be a better fit.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
