Re: CephFS First product release discussion

I think a stable MDS daemon and an fsck, or some way to recover part of the data if the MDS crashes, are the only things we need.

We are using Ceph as one very large filesystem for nightly backups of our 3000+ servers. A set of front-end servers runs rsync over slow ADSL lines and writes everything to a single large CephFS mount. We keep versioned copies (with rsync --link-dest) and run a custom application on top of all this that lets users retrieve their files or schedule uploads of their data.
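(For reference, the versioning is just rsync's hard-link trick; roughly something like the following, with made-up paths and host names rather than our real layout:)

    # Hypothetical sketch of one server's nightly run: each night gets its own
    # directory on the CephFS mount, and files that have not changed are
    # hard-linked against the previous night's copy via --link-dest.
    TODAY=$(date +%Y-%m-%d)
    YESTERDAY=$(date -d yesterday +%Y-%m-%d)

    rsync -a --delete \
          --link-dest=/mnt/cephfs/backups/server01/$YESTERDAY \
          backupuser@server01:/data/ \
          /mnt/cephfs/backups/server01/$TODAY/

Unchanged files end up as hard links against the previous night, so each nightly version costs very little extra space on the CephFS mount.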

We need to scale the storage quickly and to be able to recover from a single server or disk failure with minimal downtime. We don't need a lot of throughput, since the data lines we use are slow.

Ceph seems like the perfect choice, but a Plan B for recovering part of our data if a catastrophic failure arises is perhaps the feature we need most.

Regards.

--
Félix Ortega Hortigüela


On Wed, Mar 6, 2013 at 6:01 AM, Neil Levine <neil.levine@xxxxxxxxxxx> wrote:
As an extra request, it would be great if people explained a little
about their use-case for the filesystem so we can better understand
how the requested features map to the types of workloads people are
trying to run.

Thanks

Neil

On Tue, Mar 5, 2013 at 9:03 AM, Greg Farnum <greg@xxxxxxxxxxx> wrote:
> This is a companion discussion to the blog post at http://ceph.com/dev-notes/cephfs-mds-status-discussion/ — go read that!
>
> The short and slightly alternate version: I spent most of about two weeks working on bugs related to snapshots in the MDS, and we started realizing that we could probably do our first supported release of CephFS and the related infrastructure much sooner if we didn't need to support all of the whizbang features. (This isn't to say that the base feature set is stable now, but it's much closer than when you turn on some of the other things.) I'd like to get feedback from you in the community on what minimum supported feature set would prompt or allow you to start using CephFS in real environments — not what you'd *like* to see, but what you *need* to see. This will allow us at Inktank to prioritize more effectively and hopefully get out a supported release much more quickly! :)
>
> The current proposed feature set is basically what's left over after we've trimmed off everything we can think to split off, but if any of the proposed included features are also particularly important or don't matter, be sure to mention them (NFS export in particular — it works right now but isn't in great shape due to NFS filehandle caching).
>
> Thanks,
> -Greg
>
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
