access ceph filesystem at storage level and not via ethernet

Thanks Ronny! That is exactly the info I need, and kind of what I thought the answer would be as I was typing and thinking more clearly about what I was asking. I was just hoping Ceph would work like this, since the OpenStack Fuel tools deploy Ceph storage nodes easily.
I agree I would not be using Ceph for its strengths.

I am interested in following up on what you said in this paragraph, though:

"if you want to have FC SAN attached storage on servers, shareable 
between servers in a usable fashion I would rather mount the same SAN 
lun on multiple servers and use a cluster filesystem like ocfs or gfs 
that is made for this kind of solution."

Please allow me to ask you a few questions about that, even though it isn't Ceph specific.

Do you mean GFS/GFS2, the Global File System?

Do OCFS and/or GFS require some sort of management/clustering server to maintain and manage (akin to a Ceph OSD)?
I'd love to find a distributed/cluster filesystem where I can just partition and format, and then mount and use that same SAN datastore from multiple servers without a management server.
If OCFS or GFS does need a server of this sort, does it need to be involved in the I/O, or will I be able to mount the datastore like any other disk, with the I/O going across the Fibre Channel?

One final question, if you don't mind: do you think I could use ext4 or XFS and "mount the same SAN LUN on multiple servers" if I can guarantee each server will only write to its own specific directory and never anywhere the other servers will be writing? (I even have the SAN mapped to each server using different LUNs.)

Thanks for your expertise!

-- Jim

Date: Wed, 13 Sep 2017 19:56:07 +0200
From: Ronny Aasen <ronny+ceph-users@xxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re: access ceph filesystem at storage level and not via ethernet


A bit crazy :)

Whether the disks are directly attached to an OSD node or attached over
Fibre Channel does not make a difference: you cannot shortcut the Ceph
cluster and talk to the OSD disks directly without eventually destroying
the Ceph cluster.

Even if you did, Ceph stores objects on disk, so you would not find a
filesystem or RBD disk images there. With filestore you would only find
objects on your FC-attached OSD node disks, and with bluestore not even
directly readable objects.
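
For illustration, here is a rough sketch using the python-rados / python-rbd
bindings (it assumes a pool named 'rbd' and a working /etc/ceph/ceph.conf plus
keyring on the client; the names are only examples). It shows that even a small
RBD image only ever exists as RADOS objects reachable through the cluster, not
as anything you could mount straight off the OSD disks:

import rados
import rbd

# connect to the cluster over the network; the mon addresses come from
# ceph.conf, and there is no equivalent step that talks to raw OSD disks
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')                    # assumed pool name

rbd_inst = rbd.RBD()
rbd_inst.create(ioctx, 'demo', 1024 * 1024 * 1024)   # 1 GiB test image

with rbd.Image(ioctx, 'demo') as img:
    img.write(b'hello', 0)
    prefix = img.stat()['block_name_prefix']         # e.g. 'rbd_data.1234abcd'

# the image data lives only as striped objects named <prefix>.<n>; on the OSD
# disks those are filestore files or bluestore blobs, never a filesystem or a
# raw disk image you could reach over FC
for obj in ioctx.list_objects():
    if obj.key.startswith(prefix):
        print(obj.key)

ioctx.close()
cluster.shutdown()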

That being said, I think an FC SAN attached Ceph OSD node sounds a bit
strange. Ceph's strength is being a distributed, scalable solution, and
having the OSD nodes concentrated on a SAN array would neuter Ceph's
strengths and amplify Ceph's weakness of high latency. I would only
consider such a solution for testing, learning or playing around without
having actual hardware for a distributed system, and in that case use one
LUN for each OSD disk and give 8-10 VMs some LUNs/OSDs each, just to
learn how to work with Ceph.
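
If you do build such a learning cluster, a quick sanity check that every VM's
LUN-backed OSDs actually registered can be done with the python rados bindings
(a sketch; it assumes ceph.conf and an admin keyring on the machine you run it
from):

import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# ask the monitors for the OSD tree, the same data 'ceph osd tree' prints
cmd = json.dumps({'prefix': 'osd tree', 'format': 'json'})
ret, out, errs = cluster.mon_command(cmd, b'')
tree = json.loads(out)

# print each OSD (one per LUN in this setup) and whether it is up or down
for node in tree['nodes']:
    if node['type'] == 'osd':
        print(node['name'], node.get('status', 'unknown'))

cluster.shutdown()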

If you want to have FC SAN attached storage on servers, shareable
between servers in a usable fashion, I would rather mount the same SAN
LUN on multiple servers and use a cluster filesystem like OCFS or GFS
that is made for this kind of solution.
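
The point of a cluster filesystem is that locking and metadata are coordinated
between the nodes, which plain ext4/xfs mounted twice cannot do. Here is a
small test sketch you could run on two servers sharing such a mount (the
/mnt/shared path is only an example, and whether locks really are cluster-wide
depends on the filesystem and mount options, e.g. gfs2's localflocks):

import fcntl
import os
import socket
import time

# run this on two nodes that mount the same shared filesystem; on a cluster
# filesystem only one node at a time should get past the flock() call
LOCKFILE = '/mnt/shared/demo.lock'          # example path on the shared mount

fd = os.open(LOCKFILE, os.O_CREAT | os.O_RDWR, 0o644)
print(socket.gethostname(), 'waiting for the lock...')
fcntl.flock(fd, fcntl.LOCK_EX)              # blocks until the other node releases
print(socket.gethostname(), 'holds the lock')
time.sleep(10)                              # stand-in for exclusive work
fcntl.flock(fd, fcntl.LOCK_UN)
os.close(fd)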


kind regards
Ronny Aasen

On 13.09.2017 19:03, James Okken wrote:
>
> Hi,
>
> Novice question here:
>
> The way I understand Ceph is that it distributes data across OSDs in a
> cluster. The reads and writes come across the ethernet as RBD requests,
> and the actual data IO then also goes across the ethernet.
>
> I have a Ceph environment being set up on a Fibre Channel disk array
> (via an OpenStack Fuel deploy). The servers using the Ceph storage
> also have access to the same Fibre Channel disk array.
>
> From what I understand, those servers would need to make the RBD
> requests and do the IO across ethernet, is that correct? Even though,
> with this infrastructure setup, there is a "shorter" and faster path to
> those disks via the Fibre Channel.
>
> Is there a way to access storage on a Ceph cluster when one has this
> "better" access to the disks in the cluster? (How about if it were to
> be only a single OSD with replication set to 1?)
>
> Sorry if this question is crazy.
>
> thanks
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


