Re: Spark/Mesos on top of Ceph/Btrfs

Sebastien Han <sebastien.han@...> writes:


> What do you want to use from Ceph? RBD? CephFS?

(I hope this post is not redundant; I still seem to be having some
trouble posting to this group, and I'm reading it via gmane.)

Hello one and all,

I am supposed to be able to post to this group via gmane, but my
postings are not showing up there:

http://news.gmane.org/gmane.comp.file-systems.ceph.user

Perhaps my subscription request did not get processed?


Long version (hopefully clearer):

I want a distributed, heterogeneous cluster, without Hadoop. Spark
(in-memory) processing [1] of large FEM [2] (Finite Element Method)
datasets is the daunting application; it will be used in all sorts of
scientific simulations with very large datasets. This will also include
rendering some very complex 3D video/simulations of fluid-type flows [3].
Hopefully the simulations will be computed and rendered in real time (less
than 200 ms of latency). Surely other massive types of scientific
simulations can benefit from Spark/Mesos/CephFS/Btrfs, imho.
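
To make that concrete, here is a minimal sketch of the kind of job I have
in mind. It assumes CephFS is kernel-mounted at /mnt/cephfs on every Mesos
slave, so Spark can read it as an ordinary POSIX path with no Hadoop/HDFS
layer at all; the Mesos master URL and the dataset path below are made up:

    # Minimal PySpark sketch: read FEM mesh data straight off a CephFS
    # mount. Assumes /mnt/cephfs is mounted identically on every Mesos
    # slave; the master URL and dataset path are hypothetical.
    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setMaster("mesos://mesos-master:5050")  # hypothetical master
            .setAppName("fem-postprocess"))
    sc = SparkContext(conf=conf)

    # file:// works because CephFS gives every node the same POSIX namespace.
    nodes = sc.textFile("file:///mnt/cephfs/fem/mesh/nodes.csv")
    print(nodes.count())  # simple smoke test: count mesh-node records
    sc.stop()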

Also, being able to use the cluster for routine distcc compilations,
continuous integration [4], log-file processing, security scans, and most
other forms of routine server usage is of keen interest too.


Eventually, the cluster(s) will use GPUs, x86-64, and the new 64-bit ARM
processors, all with as much RAM as possible. This is a long journey, but I
believe that CephFS on top of Btrfs will eventually mature into the robust
solution that is necessary.

The other portions of the solution, such as the distributed services
(Chronos, Ansible/Puppet/Chef, databases, etc.), will also be needed, but
there does seem to be an abundance of choices for those needs; so discussion
is warmly received in these areas too, as they relate to CephFS/Btrfs.


CephFS on top of Btrfs is the most challenging part of this journey so
far. I use OpenRC on Gentoo, and have no interest in systemd, just so
you know.
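
For reference, the Btrfs-specific pieces I am wrestling with are the
filestore OSD options in ceph.conf. Something like the following is the
direction I am experimenting in (an untested sketch; the values are
illustrative only, not a recommendation):

    [osd]
        ; build new OSD filesystems as btrfs (xfs is the usual default)
        osd mkfs type = btrfs
        ; mount options handed to the btrfs-backed OSD data volumes
        osd mount options btrfs = rw,noatime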


James

[1] https://amplab.cs.berkeley.edu/

[2] http://dune.mathematik.uni-freiburg.de/

[3] http://www.opengeosys.org/

[4] http://www.zentoo.org/ 



