Re: [ceph-users] Status of snapshots in CephFS

On Fri, Sep 19, 2014 at 5:25 PM, Sage Weil <sweil@xxxxxxxxxx> wrote:
> On Fri, 19 Sep 2014, Florian Haas wrote:
>> Hello everyone,
>>
>> Just thought I'd circle back on some discussions I've had with people
>> earlier in the year:
>>
>> Shortly before firefly, snapshot support for CephFS clients was
>> effectively disabled by default at the MDS level, and can only be
>> enabled after accepting a scary warning that your filesystem is highly
>> likely to break if snapshot support is enabled. Has any progress been
>> made on this in the interim?
>>
>> With libcephfs support slowly maturing in Ganesha, the option of
>> deploying a Ceph-backed userspace NFS server is becoming more
>> attractive -- and it's probably a better use of resources than mapping
>> a boatload of RBDs on an NFS head node and then exporting all the data
>> from there. Recent snapshot trimming issues notwithstanding, RBD
>> snapshot support is reasonably stable, but even so, making snapshot
>> data available via NFS that way is rather ugly. In addition, the
>> libcephfs/Ganesha approach would obviously include much better
>> horizontal scalability.
>
> We haven't done any work on snapshot stability.  It is probably moderately
> stable if snapshots are only done at the root or at a consistent point in
> the hierarchy (as opposed to random directories), but there are still some
> basic problems that need to be resolved.  I would not suggest deploying
> this in production!  But some stress testing would, as always, be very
> welcome.  :)
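
(For the record, in case anyone does want to stress test this: if I
remember the firefly-era CLI correctly, the MDS-level toggle and its
"scary warning" acknowledgement look like the line below. Please verify
the exact syntax against your release.)

    # Opts in to the experimental snapshot support at the MDS level
    # (firefly-era syntax, from memory; check your release).
    ceph mds set allow_new_snaps true --yes-i-really-mean-it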

OK, on a semi-related note: is there a reasonably current, authoritative
list of features that are supported and unsupported in either ceph-fuse
or kernel cephfs, and if so, as of which minimum version?

The most comprehensive overview that seems to be available is one from
Greg, though it is now a year and a half old:

http://ceph.com/dev-notes/cephfs-mds-status-discussion/

>> In addition, https://github.com/nfs-ganesha/nfs-ganesha/wiki/ReleaseNotes_2.0#CEPH
>> states:
>>
>> "The current requirement to build and use the Ceph FSAL is a Ceph
>> build environment which includes Ceph client enhancements staged on
>> the libwipcephfs development branch. These changes are expected to be
>> part of the Ceph Firefly release."
>>
>> ... though it's not clear whether they ever did make it into firefly.
>> Could someone in the know comment on that?
>
> I think this is referring to the libcephfs API changes that the cohortfs
> folks did.  That all merged shortly before firefly.

Great, thanks for the clarification.
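
For the archives, in case anyone wants to try the libcephfs/Ganesha
route discussed above: a minimal Ceph FSAL export block would look
roughly like the sketch below. Export_ID, Path and Pseudo are
placeholder values, and the exact option set should be checked against
the Ganesha 2.x documentation.

    EXPORT {
        Export_ID = 1;            # arbitrary unique ID for this export
        Path = "/";               # CephFS path to export
        Pseudo = "/cephfs";       # position in the NFSv4 pseudo filesystem
        Access_Type = RW;
        FSAL {
            Name = CEPH;          # libcephfs-backed FSAL
        }
    }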

> By the way, we have some basic samba integration tests in our regular
> regression tests, but nothing based on ganesha.  If you really want this
> to work, the most valuable thing you could do would be to help
> get the tests written and integrated into ceph-qa-suite.git.  Probably the
> biggest piece of work there is creating a task/ganesha.py that installs
> and configures ganesha with the ceph FSAL.

Hmmm, given the excellent writeup that Niels de Vos of Gluster fame
wrote about this topic, I might actually be able to cargo-cult some of
what's in the Samba task and adapt it for ganesha.
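
To make that a bit more concrete, here's the kind of skeleton I have in
mind for a tasks/ganesha.py, modeled loosely on the existing samba
task. This is only a sketch, not a working teuthology task: the package
name, service name, config path and the helper calls are assumptions on
my part, and Ubuntu's lack of a Ganesha package (see below) would need
to be dealt with separately.

    # tasks/ganesha.py -- rough sketch only, not a finished task.
    import contextlib
    import logging

    from teuthology import misc

    log = logging.getLogger(__name__)

    # Minimal export using the Ceph FSAL; values are placeholders.
    GANESHA_CONF = """
    EXPORT {
        Export_ID = 1;
        Path = "/";
        Pseudo = "/cephfs";
        Access_Type = RW;
        FSAL { Name = CEPH; }
    }
    """

    @contextlib.contextmanager
    def task(ctx, config):
        """
        Install nfs-ganesha with the Ceph FSAL on the listed clients,
        drop in a minimal export, start the daemon, and stop it again
        on teardown.  Hypothetical job yaml usage:

            tasks:
            - ceph:
            - ganesha: [client.0]
        """
        clients = config or ['client.0']
        remotes = []
        for role in clients:
            (remote,) = ctx.cluster.only(role).remotes.keys()
            remotes.append(remote)
            # Package name assumed; on Ubuntu this would have to come
            # from a third-party repo or a source build instead.
            remote.run(args=['sudo', 'apt-get', 'install', '-y',
                             'nfs-ganesha'])
            # Write the minimal export and start the daemon.
            misc.sudo_write_file(remote, '/etc/ganesha/ganesha.conf',
                                 GANESHA_CONF)
            remote.run(args=['sudo', 'service', 'nfs-ganesha', 'start'])
        try:
            yield
        finally:
            # Best-effort teardown at the end of the run.
            for remote in remotes:
                remote.run(args=['sudo', 'service', 'nfs-ganesha', 'stop'],
                           check_status=False)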

Forgive my ignorance about Teuthology: what platform does it normally
run on? I ask because I understand most of your testing is done on
Ubuntu, and Ubuntu currently doesn't ship a Ganesha package, which would
make the install task a bit more complex.

Cheers,
Florian



