Re: Can we mount cephfs on CentOS 5.7?

On Wed, 6 Jun 2012, Travis Rhoden wrote:
> Thanks, folks.
> 
> > * 2.6.18 is crazy-old.
> Certainly.  I'm hoping I only need to work with CentOS 6, where I at
> least get 2.6.32.  I wouldn't try to do RBD with these kernels,
> though.  I would be more likely to have an Ubuntu 12.04 node mount an
> RBD and export it with NFS.  But as you mention, that has its own
> issues.
> 
> > Your other options are re-exporting NFS or CIFS, or trying to use ceph-fuse.
> ceph-fuse would only be for use with cephfs, no?  Or can ceph-fuse
> interface with RBD or radosgw?

Just the fs.
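
For reference, mounting the fs with ceph-fuse looks something like this
(monitor address and mountpoint are placeholders):

    mkdir -p /mnt/ceph
    ceph-fuse -m 192.168.0.1:6789 /mnt/ceph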
 
> I'm probably just slightly ahead of the curve here.  I am trying to
> shoehorn ceph into my scenario (exporting a single filesystem to lots
> of cluster nodes) without using cephfs since it's not quite ready.
> Even though I lose parallel file access when using NFS (all traffic
> would go through that one NFS node), I was hoping that mounting an
> RBD, formatting it, and exporting it with NFS would avoid the ESTALE
> issues (as opposed to using cephfs, mounting via ceph-fuse, and then
> exporting _that_ with NFS).

Oh, yeah, exporting an xfs/ext4/whatever on top of rbd via NFS will 
certainly work!
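
Roughly, as an untested sketch (pool/image names, sizes, mountpoints,
and the export network are all placeholders):

    # on the NFS head node, e.g. your Ubuntu 12.04 box
    rbd create mypool/myimage --size 102400
    rbd map mypool/myimage          # shows up as e.g. /dev/rbd0
    mkfs.xfs /dev/rbd0
    mkdir -p /export
    mount /dev/rbd0 /export

    # /etc/exports
    /export  192.168.0.0/24(rw,no_subtree_check,no_root_squash)

    exportfs -ra

and the CentOS clients just do a plain NFS mount of head-node:/export.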

sage

> 
> I actually have a phone call with Inktank tomorrow.  I'll ask some
> more questions then.
> 
>  - Travis
> 
> On Wed, Jun 6, 2012 at 12:27 PM, Sage Weil <sage@xxxxxxxxxxx> wrote:
> > On Wed, 6 Jun 2012, Travis Rhoden wrote:
> >> Sorry for the re-post.  My first attempt got eaten by the mailing
> >> list because it wasn't plain text.
> >>
> >> > With CephFS not quite ready for production, what would be the
> >> > recommended way for sharing Ceph to older operating systems?
> >> > Specifically, I want to share a file system (i.e. mount a common
> >> > directory) on several CentOS5 and CentOS6 nodes.  I don't think I can
> >> > mount an RBD from more than one node, so what is my best bet?
> >> >
> >> > Mount an RBD on a file-server "head" node and export it via NFS?
> >> >
> >> > I feel like that is the only sane choice, but wanted to ask in case I
> >> > am missing something better.
> >
> > Two things:
> >
> > * 2.6.18 is crazy-old.  It's possible the libceph and rbd bits could be
> > merged in, but getting the fs to work with ancient VFS will be a world of
> > pain.
> >
> > * NFS reexport works in basic scenarios, but should not be used in
> > production as there are various conditions that can lead to ESTALE.
> > Making it work is somewhere on the mid-term roadmap, but it'll be a while.
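> >
> > (For what it's worth, the basic case is just a cephfs or ceph-fuse
> > mount plus an /etc/exports line with an explicit fsid, since there is
> > no block device to derive one from; values here are illustrative:
> >
> >     /mnt/ceph  192.168.0.0/24(rw,no_subtree_check,fsid=100)
> >
> > followed by exportfs -ra.  That is the path that can hit ESTALE.)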
> >
> > Your best bet is to run cephfs on a modern kernel.  But be warned, we
> > aren't ready to support the fs just yet.  Soon!
> >
> > sage
> >
> >
> >>
> >>   - Travis
> >> >
> >> >
> >> > On Wed, Jun 6, 2012 at 9:58 AM, Sam Zaydel <sam.zaydel@xxxxxxxxxxx> wrote:
> >> >>
> >> >> I think you will be able to build the client bits you need from
> >> >> source, but I saw an earlier, similar discussion mentioning the
> >> >> Python version, so I would make sure that Python > 2.5 is used;
> >> >> realistically, 2.6 should be the norm today.
> >> >>
> >> >> http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/5717
> >> >>
> >> >> On Tue, Jun 5, 2012 at 6:16 PM,  <Eric_YH_Chen@xxxxxxxxxx> wrote:
> >> >> > Dear All:
> >> >> >
> >> >> >    My ceph cluster is installed on Ubuntu 12.04 with kernel 3.2.0.
> >> >> >
> >> >> >    But I have a situation where I may need to mount cephfs on CentOS 5.7, which has kernel 2.6.18.
> >> >> >
> >> >> >    (That is to say, I only need to execute mount.ceph on the CentOS 5.7 node, not run the whole ceph system.)
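> >> >> >
> >> >> >    For instance, something like this (monitor address and secret are placeholders):
> >> >> >
> >> >> >        mount -t ceph 192.168.0.1:6789:/ /mnt/ceph -o name=admin,secret=XXXXX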
> >> >> >
> >> >> >    I want to find a solution that provides a reliable, highly available shared file system.
> >> >> >
> >> >> >    Is it possible to do this?  Or is there another recommended way, e.g. exporting via NFS?
> >> >> >
> >> >> >    Thanks!
> >> >>
> >> >> --
> >> >> Inked by Sam Zaydel
