Hi,

Ceph is a distributed file system designed for performance, reliability,
and scalability.  Basic features include:

 * POSIX semantics
 * Seamless scaling from 1 to many thousands of nodes, petabytes of storage
 * No single point of failure
 * N-way replication of data across storage nodes
 * Fast recovery from node failures
 * Automatic rebalancing of data on node addition/removal
 * Easy deployment: most FS components are userspace daemons

New in this release:

 * Flexible snapshots (create snapshots of _any_ subdirectory)
 * Recursive accounting for size, ctime, file counts
 * Lots of client bug fixes and improvements, including asynchronous
   writepages, additional crc protection of network messages, and
   sendpage (zero-copy writes where supported)

The main new item in this release is the snapshot support.  Unlike
snapshots in most other file systems, Ceph snapshots are not volume-wide;
they can be created on a per-subdirectory (tree) basis.  That is, you can
do something like

 $ cd /ceph
 $ mkdir foo/.snap/foo_snap
 $ ls foo/.snap
 foo_snap
 $ mkdir foo/bar/.snap/bar_snap
 $ ls foo/bar/.snap
 _1223284321_foo_snap    # parent's snaps are preceded by the parent's ino #
 bar_snap

A read-only view of the subdirectory's content at the time of snapshot
creation is available from the virtual .snap/$snapname directory.
Snapshots include accurate recursive accounting statistics (like rsize,
which reflects the total size of all files nested beneath a directory and
is reported by default as a directory's st_size).  For example,

 $ cd test
 $ tar jxf something.tar.bz2 &
 $ mkdir .snap/1
 $ mkdir .snap/2
 $ kill %1
 $ ls -al .snap
 total 0
 drwxr-xr-x 1 root root       0 Jan  1  1970 .    # virtual ".snap" dir
 drwxr-xr-x 1 root root 3590037 Oct  7 20:36 ..   # the "live" dir is biggest
 drwxr-xr-x 1 root root 1220238 Oct  7 20:36 1
 drwxr-xr-x 1 root root 2366114 Oct  7 20:36 2

Snapshot removal is as simple as

 $ rmdir foo/.snap/foo_snap

The kernel client has stabilized significantly in the last few months.
The next release will focus on improving the failure recovery behavior of
the storage cloud (mainly, throttling recovery and snap removal versus
client workloads), responding intelligently to partial failures (EIO on
individual file objects), coping with ENOSPC conditions, and general
stability improvements.

More information at

 http://ceph.newdream.net/

Source at

 git clone git://ceph.newdream.net/ceph.git

Thanks-
sage
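
P.S.  To make the read-only .snap view a bit more concrete, here is a
made-up session (the path /ceph/foo, the file name, and its contents are
purely illustrative) showing that a snapshot preserves a subdirectory's
content as of the moment it was created:

 $ cd /ceph
 $ echo hello > foo/greeting
 $ mkdir foo/.snap/foo_snap
 $ echo goodbye > foo/greeting
 $ cat foo/greeting
 goodbye
 $ cat foo/.snap/foo_snap/greeting    # content at snapshot creation time
 hello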