Thanks Greg. No, I am more interested in a large-scale RADOS system, not the filesystem.
However, for geographically distributed datacentres, especially when the network fluctuates, how is that handled? From what I have read, it seems Ceph needs a big network pipe.
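On the "big pipe" point: by default a librados client simply blocks while a monitor or OSD is slow or unreachable, which is what makes flaky WAN links painful. Below is a minimal sketch with the Python rados bindings, not a definitive recipe; the pool name 'rbd', the 30-second values, and the availability of the rados_osd_op_timeout / rados_mon_op_timeout / client_mount_timeout options in your release are assumptions on my part. It only shows how a network hiccup can surface as a catchable error instead of an indefinite hang.

import rados

# Assumed client-side timeouts so slow or unreachable daemons raise errors
# instead of blocking forever; option names/availability may vary by release.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                      conf={'rados_osd_op_timeout': '30',
                            'rados_mon_op_timeout': '30',
                            'client_mount_timeout': '30'})
cluster.connect()

ioctx = cluster.open_ioctx('rbd')   # hypothetical pool name
try:
    # write_full blocks until the write is acknowledged by the acting OSDs,
    # so over a fluctuating WAN link this is where stalls would show up.
    ioctx.write_full('wan-probe', b'ping')
    print('write acknowledged')
except rados.TimedOut:
    print('op timed out -- retry or fail over at the application layer')
except rados.Error as e:
    print('rados error: %s' % e)
finally:
    ioctx.close()
    cluster.shutdown()

That only covers client behaviour; the replication traffic between OSDs still needs the bandwidth, which is part of why stretching a single cluster over a thin inter-site link is generally discouraged in favour of per-site clusters with asynchronous replication.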
/Zee
On Fri, Jan 9, 2015 at 7:15 PM, Gregory Farnum <greg@xxxxxxxxxxx> wrote:
On Thu, Jan 8, 2015 at 5:46 AM, Zeeshan Ali Shah <zashah@xxxxxxxxxx> wrote:
> I just finished configuring Ceph up to 100 TB with OpenStack ... Since we
> are also using Lustre on our HPC machines, I am just wondering what the
> bottleneck is for Ceph going to petabyte scale like Lustre.
>
> Any ideas? Or has someone tried it?
If you're talking about people building a petabyte Ceph system, there
are *many* who run clusters of that size. If you're talking about the
Ceph filesystem as a replacement for Lustre at that scale, the concern
is less about the raw amount of data and more about the resiliency of
the current code base at that size... but if you want to try it out and
tell us what problems you run into, we will love you forever. ;)
(The scalable file system use case is what actually spawned the Ceph
project, so in theory there shouldn't be any serious scaling
bottlenecks. In practice it will depend on what kind of metadata
throughput you need, because the multi-MDS stuff is improving but is
still less stable.)
-Greg
Zeeshan Ali Shah
System Administrator - PDC HPC
PhD researcher (IT security)
Kungliga Tekniska Hogskolan
+46 8 790 9115