Re: alternative approaches to CEPH-FS

Hi all,

I would also like to see cephfs become stable, especially the snapshot functionality.
I tried to figure out the roadmap but couldn't get a clear picture.
Is there a target date for production-ready snapshot functionality?

Until then, a possible alternative (sorry, without ceph :-/)
is glusterfs, which can be really fast.
Two years ago I had a setup using raid6 bricks, each consisting of 7 disks + 1 hot spare (1TB SATA),
several of them combined in a gluster stripe and connected via 4Gb FC (it needed to be cheap ;-) to a debian server
that exported the space via samba.
I liked it because it was:
- really fast
- really robust
- as cheap as possible (for that much production data, IMHO)
- easy to set up and maintain
- smoothly scalable ad infinitum
(one can start with one server and one raid array, then grow for volume and redundancy/off-site replication; a rough setup sketch follows below)
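
For anyone who wants to try something similar, a minimal sketch of such a striped
volume (hostnames, brick paths, and mount points are made up; each raid6 array
serves as one brick):

    # on one node, after peering the others
    gluster peer probe gluster2
    gluster volume create stripevol stripe 2 transport tcp \
        gluster1:/export/brick1 gluster2:/export/brick1
    gluster volume start stripevol

    # on the samba head node
    mount -t glusterfs gluster1:/stripevol /srv/share
    # then export /srv/share via a normal [share] section in smb.conf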

The big drawback: no snapshots and no easy read-only/COW functionality;
that's what I hope cephfs will bring us!
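
(For what it's worth, cephfs already exposes snapshots through hidden .snap
directories, though they are still marked experimental; a quick illustration,
assuming a cephfs mount at /mnt/cephfs:

    mkdir /mnt/cephfs/mydata/.snap/before-cleanup   # take a snapshot
    ls /mnt/cephfs/mydata/.snap/                    # list snapshots
    rmdir /mnt/cephfs/mydata/.snap/before-cleanup   # remove it again

What's missing is exactly the production-ready label.)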
I have been trying it for a few days now, and it works; the mds hasn't crashed (yet ;-)
It took 2TB of data with acceptable performance - BUT
erasing that data is a no-go :-(  13MB/s??

Again, is there any roadmap for cephfs (incl. snapshots)?

best regards
Bernhard





Actually, #3 is a novel idea; I had not thought of it. Comparing the two off the top of my head, though, #3 will have:

1) more overhead (because of the additional VM)

2) no way to grow once you reach the hard limit of 14TB; and if you have multiple such machines, fragmentation becomes a problem

3) the risk that corruption of the 14TB partition wipes out all your shares

4) harder HA. Although I have not worked HA into NFSCEPH yet, it should be doable by drbd-ing the NFS data directory, or any other technique people use for redundant NFS servers (a rough sketch follows below).
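
For illustration, replicating the NFS data directory with drbd might look roughly
like this (resource name, devices, disks, and addresses are all made up; a sketch,
not a tested config):

    # /etc/drbd.d/nfsdata.res (identical on both nodes)
    resource nfsdata {
        protocol C;
        device    /dev/drbd0;
        disk      /dev/sdb1;
        meta-disk internal;
        on nfs1 { address 10.0.0.1:7788; }
        on nfs2 { address 10.0.0.2:7788; }
    }

    # on both nodes
    drbdadm create-md nfsdata
    drbdadm up nfsdata
    # on the primary only (flag name varies across drbd versions)
    drbdadm primary --force nfsdata
    mkfs.ext4 /dev/drbd0
    mount /dev/drbd0 /export/nfsdata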

- WP


On Fri, Nov 15, 2013 at 10:26 PM, Gautam Saxena <gsaxena@xxxxxxxxxxx> wrote:
Yip,

I went to the link. Where can the script (nfsceph) be downloaded? How are the robustness and performance of this technique? (That is, is there any reason to believe it would be more or less robust and/or performant than option #3 mentioned in the original thread?)


On Fri, Nov 15, 2013 at 1:57 AM, YIP Wai Peng <yipwp@xxxxxxxxxxxxxxx> wrote:
On Fri, Nov 15, 2013 at 12:08 AM, Gautam Saxena <gsaxena@xxxxxxxxxxx> wrote:

We are now running this - basically an intermediate/gateway node that mounts ceph rbd objects and exports them as NFS. http://waipeng.wordpress.com/2013/11/12/nfsceph/
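
In outline, the gateway does something like this (pool, image name, size, and
paths are made up for illustration):

    rbd create nfspool/share1 --size 102400   # create a 100GB rbd image
    rbd map nfspool/share1                    # appears as e.g. /dev/rbd0
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /export/share1
    # /etc/exports on the gateway
    /export/share1  192.168.0.0/24(rw,sync,no_subtree_check)
    exportfs -ra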

- WP


--


Bernhard Glomm

Network & System Administrator

Ecologic Institute

bernhard.glomm@xxxxxxxxxxx

www.ecologic.eu


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
