Re: Basic Ceph Questions

On 11/05/2014 11:03 PM, Lindsay Mathieson wrote:
> Morning all ..
> 
> I have a simple 3-node, 2-OSD cluster setup serving VM images (Proxmox). The 
> two OSDs are on the two VM hosts. Size is set to 2 for replication on both 
> OSDs. SSD journals. 
> 
> 
> - If the Ceph client (VM guest over RBD) is accessing data that is stored on 
> the local OSD, will it avoid hitting the network and just access the local 
> drive? From monitoring the network bond, that seems to be the case.
> 

No, it will not. The CRUSH map tells a client to read data from a
specific location; clients read from the primary OSD that CRUSH
selects for each placement group.

You can tune this with localization in the CRUSH map, but by default
it will read from the node that CRUSH tells it to read from.
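
If you want to see where CRUSH actually places things, or bias which
OSD acts as the primary, a rough sketch with placeholder pool, object
and OSD names:

    # Show which PG and which OSDs (up/acting set) an object maps to
    ceph osd map rbd <object-name>

    # Dump the CRUSH map in readable form to inspect the rules
    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt

    # Make osd.1 less likely to be chosen as primary (may need
    # "mon osd allow primary affinity = true" on the monitors)
    ceph osd primary-affinity osd.1 0.5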

> 
> - If I added an extra OSD to the local node, would that same client then use it 
> to stripe reads, improving the read transfer rate?
> 

Not per se that local OSD for striping, but data will be rebalanced and
the new disk will be used to gain more performance.
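
For reference, adding a disk and letting Ceph rebalance is roughly this
(OSD id, weight and hostname are placeholders, and the OSD daemon itself
still needs to be prepared and started first):

    # Add the new OSD to the CRUSH map under its host, with a weight
    ceph osd crush add osd.2 1.0 host=node1

    # Watch the PGs remap/backfill onto the new disk
    ceph osd tree
    ceph -w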

> 
> - Geo-replication - that's done via federated gateways? Looks complicated :(
>   * The remote slave, it would be read-only?
> 

That is only for the RADOS Gateway. Ceph itself (RADOS) does not support
geo-replication.

> - Disaster strikes: apart from DR backups, how easy is it to recover your data 
> off Ceph OSDs? One of the things I liked about Gluster was that if I totally 
> screwed up the Gluster masters, I could always just copy the data off the 
> filesystem. Not so much with Ceph.
> 

It's a bit harder with Ceph. It can be done, but it is something that
would take a lot of time.
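
For RBD images the simplest escape hatch is to export them to a flat
file while the cluster is still healthy, e.g. as a DR backup; a minimal
sketch with placeholder pool, image and snapshot names:

    # Snapshot the image so the export is consistent
    rbd snap create rbd/vm-100-disk-1@backup

    # Export the snapshot to a file outside the cluster
    rbd export rbd/vm-100-disk-1@backup /mnt/backup/vm-100-disk-1.img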

> 
> - Am I abusing Ceph? :) I just have a small 3-node VM server cluster with 20 
> Windows VMs, some servers, some VDI. The shared store is a QNAP NAS which is 
> struggling. I'm using Ceph for:
> - Shared storage
> - Replication/redundancy
> - Improved performance
> 

I think that 3 nodes is not sufficient; Ceph really starts performing
when you go beyond 10 nodes (excluding monitors).

> It's serving all of this, but the complexity concerns me sometimes.
> 

Storage is always a complex thing, and Ceph is no exception.

Wido

> Thanks,
> 
> 
> 


-- 
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




