Re: cephfs and manila

(Copying ceph-users to share the info more broadly)

On Thu, Nov 24, 2016 at 10:50 AM,  <ren.huanwen@xxxxxxxxxx> wrote:
> Hi John
>
> I have some questions about the use of CephFS;
> can you help me answer them? Thank you!
>
> We built an OpenStack (Mitaka) file share and use the Manila component
> based on CephFS.
> I can export CephFS's POSIX filesystem by following
> http://docs.openstack.org/developer/manila/devref/cephfs_native_driver.html
> (the CephFS native driver from you, :+1:)
> As we all know:
> using "nfs-ganesha", I can manually export NFS on top of CephFS, but not
> through Manila;
> using "samba 4.x", I can manually export CIFS on top of CephFS, but not
> through Manila.
> But if I want to export CIFS and NFS on top of CephFS directly from
> Manila, that does not seem to be supported.
> Although Manila supports a Ganesha library (but not a Samba library),
> are there plans to support these functions?

An NFS+CephFS driver for Manila is a work in progress.  The rough plan
is to have some initial functionality for auto-configuring Ganesha
exports using the existing Ganesha modules in OpenStack Ocata, and
then to have it automatically create gateway VMs in the subsequent
Pike release.
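
For reference, a hand-rolled nfs-ganesha export on top of CephFS
(roughly what the driver aims to automate) looks something like the
sketch below.  This assumes a ganesha build with the Ceph FSAL; the
export ID, CephFS path, pseudo path and hostnames are illustrative.

  # append a minimal CephFS export to ganesha's config (sketch only)
  cat >> /etc/ganesha/ganesha.conf <<'EOF'
  EXPORT {
      Export_ID = 100;
      Path = "/volumes/_nogroup/share1";  # CephFS path to export (hypothetical)
      Pseudo = "/share1";
      Access_Type = RW;
      FSAL { Name = CEPH; }
  }
  EOF
  systemctl restart nfs-ganesha

  # then, on an NFS client:
  mount -t nfs ganesha-host:/share1 /mnt/share1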

> Two other issues, outside of Manila:
> 1. ceph-fuse cannot mount a CephFS snapshot directory?
>    Example: ceph-fuse /root/mycephfs5 --id=admin --conf=./client.conf
> --keyring=/etc/ceph/ceph.client.admin.keyring
> --client-mountpoint=/volumes/_nogroup/b53cbff4-a3f2-402b-91c2-aaf967f32d40/.snap/

Hmm, we've never tested this, and it probably needs some work because
snapshot dirs are special.  This will be necessary to enable Manila's
mountable snapshots feature:
https://github.com/openstack/manila-specs/blob/master/specs/ocata/mountable-snapshots.rst
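
In the meantime, a rough workaround sketch (untested, reusing the paths
from your example): mount the volume directory itself rather than its
.snap directory, then browse the snapshot from inside the mount:

  ceph-fuse /root/mycephfs5 --id=admin --conf=./client.conf \
      --keyring=/etc/ceph/ceph.client.admin.keyring \
      --client-mountpoint=/volumes/_nogroup/b53cbff4-a3f2-402b-91c2-aaf967f32d40
  # snapshots appear as subdirectories of the hidden .snap directory
  ls /root/mycephfs5/.snap/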

I've created a ticket here: http://tracker.ceph.com/issues/18050,
although I'm not sure how soon it would reach the top of anyone's
priority list given the current focus on robustness and multi-mds.

> 2. If I delete a CephFS filesystem using "ceph fs rm {cephfs_name}", the
> cephfs_data_pool and cephfs_meta_pool still exist.
>    Can the filesystem be restored from the old cephfs_data_pool and
> cephfs_meta_pool?

This would be considered a disaster recovery situation.  Removing the
filesystem doesn't touch anything in the pools, so you'd do something
like the following (a rough command sketch follows the list):
 * Stop all MDS daemons
 * Do a "ceph fs new" with the same pools
 * Do a "ceph fs reset" on the new filesystem to make it skip the
creating stage.
 * If you had multiple active MDSs, you would probably also need to do
some extra work to truncate journals, etc.
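
A rough command sketch of the steps above, assuming a single active
MDS, a filesystem called "cephfs", and pools named "cephfs_metadata"
and "cephfs_data" (--force is needed to reuse non-empty pools):

  # on every MDS host
  systemctl stop ceph-mds.target

  # recreate the filesystem on top of the existing pools
  ceph fs new cephfs cephfs_metadata cephfs_data --force

  # skip the "creating" state, since the pools already hold a filesystem
  ceph fs reset cephfs --yes-i-really-mean-it

  # bring the MDS daemons back
  systemctl start ceph-mds.target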

You can also use cephfs-data-scan to (best effort) scrape files
directly out of a cephfs data pool to some other filesystem.
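
I don't have the exact "scrape to a local filesystem" invocation to
hand, but for reference the tool's usual metadata-rebuild passes over
a data pool look roughly like this (pool name illustrative; both
passes can be parallelised across workers):

  # pass 1: recover file sizes/mtimes from the objects' extents
  cephfs-data-scan scan_extents cephfs_data
  # pass 2: inject the recovered inodes back into the metadata pool
  cephfs-data-scan scan_inodes cephfs_data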

The usual warnings for disaster recovery tools all apply to the
situation of trying to recover a cephfs filesystem from pools: this is
a last resort, they can do harm as well as good, and anyone unsure
should seek expert advice before using them.

John