> The data distribution issue has turned out to be a practical non-issue
> for GlusterFS users. Sure, if you have very few "elephant objects" on
> very few small-ish bricks (our equivalent of OSDs) then you can get
> skewed distribution. On the other hand, that problem *very* quickly
> solves itself for even moderate object and brick counts, to the point
> that almost no users have found it useful to enable striping. Has your
> experience been different, or do you not know because striping is
> mandatory instead of optional?

We do have an example of a case where striping is needed here at CERN:
we are starting to test RADOS as a backend for the disk cache of our
mass storage system (i.e., a tape backend). There, files can indeed be
really big (up to the TB level), and we need parallel access to be able
to feed our tape drives at their limit of >250 MB/s.

The solution has been to implement a layer of striping on top of RADOS
that "hides" the striping while basically keeping the RADOS interface
and (most of) its consistency and locking. This is not yet integrated
in the Ceph mainline, but it is available and ready for merge. See the
blueprint at
https://wiki.ceph.com/Planning/Blueprints/Firefly/Object_striping_in_librados#section_4
and the implementation/pull request at
https://github.com/ceph/ceph/pull/1186. By the way, it needs some
review :-)

Sebastien

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel
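
P.S. For readers unfamiliar with what such a striping layer does under
the hood: it splits a big logical object into fixed-size stripe units
spread round-robin across a set of backing RADOS objects, so that reads
and writes can hit several OSDs in parallel. Below is a minimal sketch
of that offset-to-object mapping in the style of Ceph's file striping
layout (stripe_unit / stripe_count / object_size). The function name and
shape are illustrative only, not the actual libradosstriper API.

```python
def stripe_map(offset, stripe_unit, stripe_count, object_size):
    """Map a logical byte offset to (backing object number, offset
    within that object), assuming a round-robin stripe layout.
    Illustrative sketch, not the libradosstriper implementation."""
    # Which fixed-size stripe unit the offset falls in.
    su = offset // stripe_unit
    # Stripe units are laid out round-robin over stripe_count objects;
    # each object holds object_size // stripe_unit units before the
    # layout moves on to the next object set.
    units_per_object = object_size // stripe_unit
    stripeno = su // stripe_count        # "row" across the object set
    object_in_set = su % stripe_count    # "column": object within set
    object_set = stripeno // units_per_object
    object_no = object_set * stripe_count + object_in_set
    # Byte offset inside the chosen backing object.
    block_in_object = stripeno % units_per_object
    off_in_object = block_in_object * stripe_unit + offset % stripe_unit
    return object_no, off_in_object
```

With, say, a 4-byte stripe unit, 2 objects per set, and 8-byte objects,
consecutive stripe units alternate between objects 0 and 1 before
wrapping to objects 2 and 3, which is what lets sequential I/O on one
big logical object fan out across OSDs.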