Hi,

I have a Ceph cluster with 5 nodes, in which 2 are MDS, 3 are MON, and all 5 act as OSDs. I have mounted the Ceph filesystem on one node in the cluster and exported the mounted directory via NFS. Here is what my mount and exports file look like:

ceph-fuse on /ceph_cluster type fuse.ceph-fuse (rw,nosuid,nodev,allow_other,default_permissions)

[root@ceph-node-15 ~]# cat /etc/exports
/ceph_cluster *(rw,no_root_squash,fsid=10001)

And here is the automount entry:

madhusudhan_ceph - rw,intr,retrans=10,timeo=600,hard,rsize=32768,wsize=32768,tcp,noacl ceph-node-15:/ceph_cluster/madhusudhana_ceph

I am facing a strange issue with one of my t_make builds, where it fails for some unknown reason, but the same build works fine on a local machine and completes. There is no difference in the data, as it is synced from Perforce to both directories.

Can someone shed some light on the best way to mount the Ceph cluster via NFS (using autofs to mount the directory)? And is there anything I need to make sure of when mounting a Ceph cluster via NFS?

I have heard that t_make will fail if the underlying file system can't handle 64-bit file handles [inode number/fileid] (I faced the same issue with Isilon storage). Can Ceph handle the above condition?

Any help/input is greatly appreciated.

Thanks
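
P.S. For anyone who wants to check the fileid question directly, below is a rough sketch in plain Python (nothing t_make-specific; the 2^32 threshold is just my assumption about where 32-bit fileid handling would break) that walks a mount point and reports inode numbers that do not fit in 32 bits:

import os
import sys

THRESHOLD = 2**32  # inode numbers at or above this need 64-bit fileid support

def check_inodes(root):
    """Walk 'root' and report entries whose st_ino does not fit in 32 bits."""
    oversized = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                ino = os.lstat(path).st_ino
            except OSError:
                continue  # skip entries that vanish mid-walk
            if ino >= THRESHOLD:
                oversized += 1
                print("64-bit inode %d: %s" % (ino, path))
    print("entries with inode numbers >= 2^32: %d" % oversized)

if __name__ == "__main__":
    check_inodes(sys.argv[1] if len(sys.argv) > 1 else ".")

Running it against both the ceph-fuse mount and the NFS/autofs view of the same directory should show whether 64-bit inode numbers actually appear on either side.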