Re: Reply: How's cephfs going?


On Tue, Jul 18, 2017 at 6:54 AM, Blair Bethwaite <blair.bethwaite@xxxxxxxxx> wrote:
We are a data-intensive university, with an increasingly large fleet
of scientific instruments capturing various types of data (mostly
imaging of one kind or another). That data typically needs to be
stored, protected, managed, shared, and connected/moved to specialised
compute for analysis. Given the large variety of use-cases we are
being somewhat more circumspect in our CephFS adoption and really only
dipping our toes in the water, ultimately hoping it will become a
long-term default NAS choice from Luminous onwards.

On 18 July 2017 at 15:21, Brady Deetz <bdeetz@xxxxxxxxx> wrote:
> All of that said, you could also consider using RBD and ZFS or whatever filesystem you like. That would allow you to gain the benefits of scale-out while still getting a feature-rich fs. But there are some downsides to that architecture too.

We do this today (KVMs with a couple of large RBDs attached via
librbd+QEMU/KVM), but the throughput achievable this way is nothing
like native CephFS - adding more RBDs doesn't seem to increase
overall throughput. Also, if you have NFS clients you will
absolutely need an SSD ZIL. And of course you then have a single point
of failure and downtime for regular updates etc.
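
For anyone who wants to poke at the RBD side of that setup
programmatically, here's a minimal sketch using the librados/librbd
Python bindings. The pool name 'rbd', the image name and the size are
just placeholders for illustration, not what we actually run:

    import rados
    import rbd

    # Connect using the local ceph.conf and default keyring
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Open an I/O context on the pool holding the images ('rbd' assumed here)
        ioctx = cluster.open_ioctx('rbd')
        try:
            # Create a 100 GiB image to back a guest filesystem (ZFS, XFS, ...)
            rbd.RBD().create(ioctx, 'zfs-backing-01', 100 * 1024 ** 3)

            # Quick sanity write through librbd - the same data path QEMU uses
            image = rbd.Image(ioctx, 'zfs-backing-01')
            try:
                image.write(b'\0' * 4096, 0)
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

In production the image would of course be attached to the guest via
QEMU/KVM rather than written to directly like this.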

In terms of small-file performance, I'm interested to hear about
experiences with in-line file storage on the MDS.

Also, while we're talking about CephFS - what size metadata pools are
people seeing on their production systems with 10s-100s of millions of
files?

On a system with 10.1 million files, the metadata pool is 60MB.
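
For anyone wanting to compare numbers, per-pool usage is visible in
ceph df detail, or programmatically via the rados Python bindings. A
rough sketch - 'cephfs_metadata' is just the common default pool name,
adjust for your cluster:

    import rados

    # Print object count and reported usage for the CephFS metadata pool.
    # 'cephfs_metadata' is only the usual default name - adjust as needed.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('cephfs_metadata')
        try:
            stats = ioctx.get_stats()
            print("objects: %d  bytes: %d (%.1f MB)" % (
                stats['num_objects'],
                stats['num_bytes'],
                stats['num_bytes'] / 1e6))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

One caveat: IIRC omap data (which is where much of the CephFS metadata
actually lives) isn't fully reflected in per-pool stats on current
releases, so numbers like the above can understate the real footprint.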

 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

