Hello everyone,

Ceph is a distributed file system designed for performance, reliability, and scalability.  Basic features include:

 * POSIX semantics
 * Seamless scaling from 1 to many thousands of nodes
 * No single point of failure
 * N-way replication of data across storage nodes
 * Fast recovery from node failures
 * Automatic rebalancing of data on node addition/removal
 * Easy deployment: most FS components are userspace daemons
 * Linux kernel client, FUSE-based client, and user library

Notable in this release is a reasonably stable Linux kernel client.  It can extract and compile a kernel, and passes all tests in the fstest POSIX regression test suite posted a few weeks back.  It has stabilized to the point where it could use some broader testing and code review.

More info at
	http://ceph.newdream.net

Source code at
	git://ceph.newdream.net/ceph.git
	http://ceph.newdream.net/git

Since v0.1:
 * fully functional (and reasonably stable) kernel client
 * NFS re-export of a ceph client mount
 * client metadata leases to keep client cache coherent
 * crushtool for managing storage cluster topology
 * improved support for storage cluster expansion
 * some new tools for mkfs and management
 * lots and lots of bug fixes...

Planned for v0.3:
 * xattrs
 * hardening distributed failure recovery
 * large directory support (in client)
 * recursive mtime and file size accounting

Some key things on the (medium- to long-term) roadmap:
 * locks
 * quotas
 * btrfs for local object storage on storage nodes
 * directory-granularity snapshots
 * content-addressable storage
 * strong security

sage
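
P.S.  For anyone who wants to help with the broader testing mentioned above, here is a minimal sketch of mounting the kernel client via mount(2).  The monitor address, mount point, and device-string format used here are illustrative assumptions, not the confirmed syntax; check the docs at http://ceph.newdream.net for the actual mount invocation.

  #include <stdio.h>
  #include <sys/mount.h>

  int main(void)
  {
          /* Assumed device string: a monitor address plus the path to
           * mount; the exact format is a guess for illustration.
           * Roughly equivalent to: mount -t ceph 1.2.3.4:/ /mnt/ceph
           * (requires root and the ceph kernel module loaded). */
          if (mount("1.2.3.4:/", "/mnt/ceph", "ceph", 0, NULL) != 0) {
                  perror("mount");
                  return 1;
          }
          printf("ceph mounted on /mnt/ceph\n");
          return 0;
  }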