Re: access ceph filesystem at storage level and not via ethernet

Thanks again Ronny,
OCFS2 is working well so far.
I have 3 nodes sharing the same 7TB MSA FC LUN. Hoping to add 3 more...

James Okken
Lab Manager
Dialogic Research Inc.
4 Gatehall Drive
Parsippany
NJ 07054
USA

Tel:       973 967 5179
Email:   james.okken@xxxxxxxxxxxx
Web:    www.dialogic.com - The Network Fuel Company

This e-mail is intended only for the named recipient(s) and may contain information that is privileged, confidential and/or exempt from disclosure under applicable law. No waiver of privilege, confidence or otherwise is intended by virtue of communication via the internet. Any unauthorized use, dissemination or copying is strictly prohibited. If you have received this e-mail in error, or are not named as a recipient, please immediately notify the sender and destroy all copies of this e-mail.

-----Original Message-----
From: Ronny Aasen [mailto:ronny+ceph-users@xxxxxxxx] 
Sent: Thursday, September 14, 2017 4:18 AM
To: James Okken; ceph-users@xxxxxxxxxxxxxx
Subject: Re:  access ceph filesystem at storage level and not via ethernet

On 14. sep. 2017 00:34, James Okken wrote:
> Thanks Ronny! Exactly the info I need, and kind of what I thought the answer would be as I was typing and thinking more clearly about what I was asking. I was just hoping Ceph would work like this, since the OpenStack Fuel tools deploy Ceph storage nodes easily.
> I agree I would not be using Ceph for its strengths.
> 
> I am interested further in what you've said in this paragraph though:
> 
> "if you want to have FC SAN attached storage on servers, shareable 
> between servers in a usable fashion I would rather mount the same SAN 
> lun on multiple servers and use a cluster filesystem like ocfs or gfs 
> that is made for this kind of solution."
> 
> Please allow me to ask you a few questions regarding that, even though it isn't Ceph-specific.
> 
> Do you mean GFS/GFS2, the global file system?
> 
> Do ocfs and/or gfs require some sort of management/clustering server
> to maintain and manage (akin to a Ceph OSD)? I'd love to find a distributed/cluster filesystem where I can just partition and format, and then be able to mount and use that same SAN datastore from multiple servers without a management server.
> If ocfs or gfs do need a server of this sort, does it need to be involved in the I/O? Or will I be able to mount the datastore, similar to any other disk, with the I/O going across the Fibre Channel?

I only have experience with OCFS2, but I think GFS works similarly.
There are quite a few cluster filesystems to choose from:
https://en.wikipedia.org/wiki/Clustered_file_system

Servers that mount an OCFS2 shared filesystem must have ocfs2-tools installed and have access to the common shared FC LUN. They also need to be aware of the other OCFS2 nodes sharing the same LUN, which you define in the /etc/ocfs2/cluster.conf config file, and the OCFS2/o2cb cluster daemon must be running.
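As a rough sketch (the node names and IP addresses below are just placeholders, so adjust them for your own hosts), a three-node /etc/ocfs2/cluster.conf looks something like the following. The file must be identical on all nodes, and each node name must match that host's hostname:

    cluster:
            node_count = 3
            name = ocfs2

    node:
            ip_port = 7777
            ip_address = 192.168.1.101
            number = 0
            name = node1
            cluster = ocfs2

    node:
            ip_port = 7777
            ip_address = 192.168.1.102
            number = 1
            name = node2
            cluster = ocfs2

    node:
            ip_port = 7777
            ip_address = 192.168.1.103
            number = 2
            name = node3
            cluster = ocfs2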

Then it is just a matter of creating the OCFS2 filesystem (on one server), adding it to fstab (on all servers), and mounting it.
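For example, assuming the shared LUN shows up as /dev/mapper/msa_lun and you want it mounted at /srv/shared (both names are just placeholders for your environment), the rough sequence would be something like:

    # on one node only: create the filesystem with enough node slots
    mkfs.ocfs2 -L msa_shared -N 6 /dev/mapper/msa_lun

    # on every node: bring the cluster stack online (sysvinit style;
    # on systemd hosts it is typically: systemctl start o2cb ocfs2)
    service o2cb online

    # on every node: add a line like this to /etc/fstab ...
    /dev/mapper/msa_lun  /srv/shared  ocfs2  _netdev,defaults  0 0

    # ... and mount it
    mkdir -p /srv/shared
    mount /srv/shared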


> One final question, if you don't mind: do you think I could use ext4 or xfs and "mount the same SAN lun on multiple servers" if I can guarantee each server will only write to its own specific directory and never anywhere the other servers will be writing? (I even have the SAN mapped to each server using different LUNs.)

Mounting the same (non-cluster) filesystem on multiple servers is guaranteed to destroy the filesystem: you will have multiple servers writing in the same metadata area and the same journal area, and generally stomping all over each other. Luckily, I think most modern filesystems would detect that the FS is mounted somewhere else and prevent you from mounting it again without big fat warnings.

kind regards
Ronny Aasen

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



