Re: access ceph filesystem at storage level and not via ethernet

On 13.09.2017 19:03, James Okken wrote:

Hi,

 

Novice question here:

 

The way I understand Ceph is that it distributes data across OSDs in a cluster. Reads and writes come across the Ethernet as RBD requests, and the actual data I/O then also goes across the Ethernet.

 

I have a Ceph environment being set up on a Fibre Channel disk array (via an OpenStack Fuel deploy). The servers using the Ceph storage also have access to the same Fibre Channel disk array.

 

From what I understand, those servers would still need to make the RBD requests and do the I/O across the Ethernet, is that correct? Even though, with this infrastructure setup, there is a "shorter" and faster path to those disks via Fibre Channel.

 

Is there a way to access storage on a Ceph cluster when one has this "better" access to the disks in the cluster? (What about the case where it is only a single OSD with replication set to 1?)

 

Sorry if this question is crazy…

 

thanks


a bit crazy :)

Whether the disks are directly attached to an OSD node or attached over Fibre Channel makes no difference: you cannot shortcut the Ceph cluster and talk to the OSD disks directly without eventually destroying the cluster. Every client read and write has to go through the OSD daemons, which coordinate replication and consistency over the network.

Even if you did, Ceph is an object store on disk, so you would not find a filesystem or RBD disk images there. With FileStore you would only see objects on your FC-attached OSD node disks, and with BlueStore, which writes to the raw device, not even readable objects.

That being said, I think an FC SAN-attached Ceph OSD node sounds a bit strange. Ceph's strength is being a distributed, scalable solution, and collecting the OSD nodes on a single SAN array would neuter those strengths while amplifying Ceph's weakness of high latency. I would only consider such a setup for testing, learning, or playing around without actual hardware for a distributed system. In that case, use one LUN for each OSD disk and give 8-10 VMs a few LUNs/OSDs each, just to learn how to work with Ceph.
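As a rough sketch of that test-only layout (the multipath device names below are hypothetical, substitute your own LUNs), each FC LUN would be handed to ceph-volume as its own OSD:

```shell
# On each VM acting as an OSD node: one OSD per FC-attached LUN.
# /dev/mapper/mpatha and /dev/mapper/mpathb are hypothetical
# multipath devices backed by the SAN LUNs.
ceph-volume lvm create --data /dev/mapper/mpatha
ceph-volume lvm create --data /dev/mapper/mpathb

# Check that the new OSDs came up and joined the cluster.
ceph osd tree
```

Remember this is only useful for learning the tooling; performance and failure behaviour will not resemble a real distributed cluster, since everything ultimately sits on one array.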

If you want FC SAN-attached storage shareable between servers in a usable fashion, I would rather mount the same SAN LUN on multiple servers and use a cluster filesystem like OCFS2 or GFS2 that is made for exactly this kind of setup.
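A minimal sketch of that shared-LUN approach with OCFS2 (device name, mount point, and node count are hypothetical, and /etc/ocfs2/cluster.conf must already describe the participating nodes):

```shell
# Format the shared LUN once, from any single node.
# -N 4 reserves slots for up to 4 nodes mounting concurrently.
mkfs.ocfs2 -N 4 -L shared0 /dev/mapper/mpatha

# Then, on every node that should see the filesystem:
systemctl start o2cb                          # bring up the OCFS2 cluster stack
mount -t ocfs2 /dev/mapper/mpatha /mnt/shared # all nodes mount the same LUN
```

The cluster filesystem handles the locking between nodes, which is exactly what raw Ceph OSD disks cannot give you over FC.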


kind regards
Ronny Aasen
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
