On 2011-02-03, Sage Weil <sage@xxxxxxxxxxxx> wrote:

> There are a couple of levels of difficulty. The main problem is that the
> only truly stable information in the NFS fh is the inode number, and
> Ceph's architecture simply doesn't support lookup-by-ino. (It uses an
> extra table to support it for hard-linked files, under the assumption that
> these are relatively rare in the real world.)

Sorry for the thread hijack, but just so this issue doesn't completely fall through the cracks...

There are different "real worlds" where hard links are very, very common. Although, admittedly, ceph may well not be targeted at those parallel universes.

Backup servers are a classic example. It's very common to have hard links between the files of successive snapshots. In this situation *most* files have multiple hard links, and you can easily end up with almost all files having 60 or more hard links (one per snapshot, for 60 or more snapshots). Rsnapshot, BackupPC and, apparently, OS X's Time Machine all work this way.

Of course, apps like this would probably be far better off if they started using proper snapshots (and dedup, if/when that becomes available, praise the day) provided by file systems such as ceph and btrfs. But there are other apps which use significant numbers of hard links; some examples are buried in this thread:

http://thread.gmane.org/gmane.comp.file-systems.btrfs/3427

Cheers,
Chris
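
To make that workload concrete, here is a minimal Python sketch of the hard-link rotation scheme that tools like rsnapshot and BackupPC perform (an illustration only, not their actual code; the /backups paths and file names are made up). Each rotation re-creates the previous snapshot's tree with hard links, so a file that never changes ends up with one link per retained snapshot -- which is where the 60+ link counts mentioned above come from.

    import os

    def make_snapshot(prev_snap: str, new_snap: str) -> None:
        """Recreate prev_snap's directory tree under new_snap, hard-linking
        every file -- roughly what `cp -al prev_snap new_snap` does."""
        for dirpath, _dirnames, filenames in os.walk(prev_snap):
            rel = os.path.relpath(dirpath, prev_snap)
            target_dir = os.path.normpath(os.path.join(new_snap, rel))
            os.makedirs(target_dir, exist_ok=True)
            for name in filenames:
                # Same inode as the file in the previous snapshot; its link
                # count (st_nlink) goes up by one with every rotation.
                os.link(os.path.join(dirpath, name),
                        os.path.join(target_dir, name))
        # A real tool would then rsync the live data over new_snap, replacing
        # (unlink + recreate) only the files that actually changed, so
        # unchanged files keep accumulating links across snapshots.

    if __name__ == "__main__":
        # Hypothetical layout: daily.1 is yesterday's snapshot, daily.0 is new.
        make_snapshot("/backups/daily.1", "/backups/daily.0")
        # For an unchanged file, st_nlink roughly equals the number of
        # snapshots still referencing it.
        print(os.stat("/backups/daily.0/etc/fstab").st_nlink)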