I too find Ceph fuse more stable. However, you really should run your tests with a much more recent kernel! 3.10 is old, and there have been CephFS client improvements in just about every kernel release for a long time now.

--
Thomas Lemarchand
Cloud Solutions SAS - Head of Information Systems

On Thu, 2014-12-18 at 14:52 +1000, Lindsay Mathieson wrote:
> I've been experimenting with CephFS for running KVM images (Proxmox).
>
> cephfs fuse version - 0.87
> cephfs kernel module - kernel version 3.10
>
> Part of my testing involves bringing up a Windows 7 VM and running
> CrystalDiskMark to check the I/O in the VM. It's surprisingly good with
> both the fuse and the kernel driver; sequential reads & writes are
> actually faster than the underlying disk, so I presume the FS is
> aggressively caching.
>
> With the fuse driver I have no problems.
>
> With the kernel driver, the benchmark runs fine, but when I reboot the
> VM the drive is corrupted and unreadable, every time. Rolling back to
> a snapshot fixes the disk. This does not happen unless I run the
> benchmark, which I presume is writing a lot of data.
>
> No problems with the same test on Ceph RBD or NFS.
>
> --
> Lindsay

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
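
For anyone who wants to reproduce the comparison without a Windows guest and CrystalDiskMark, a minimal sketch of an equivalent write-then-verify test is below. It is only an illustration under assumptions: the mount point (/mnt/cephfs) and file name are hypothetical, and the unmount/remount (or client reboot) between the write and verify steps has to be done by hand against your own cluster.

    #!/usr/bin/env python3
    """Minimal integrity check for a CephFS mount: write a large file, record
    its SHA-256, then re-verify after an unmount/remount (or client reboot).

    Usage (paths and sizes are examples only):
        python3 cephfs_check.py write  /mnt/cephfs/testfile 1024   # write 1024 MiB
        python3 cephfs_check.py verify /mnt/cephfs/testfile
    """
    import hashlib
    import os
    import sys

    CHUNK = 4 * 1024 * 1024  # write/read in 4 MiB chunks


    def write_file(path, size_mib):
        """Write size_mib MiB of pseudorandom data, fsync it, and store its hash."""
        digest = hashlib.sha256()
        with open(path, "wb") as f:
            remaining = size_mib * 1024 * 1024
            while remaining > 0:
                chunk = os.urandom(min(CHUNK, remaining))
                f.write(chunk)
                digest.update(chunk)
                remaining -= len(chunk)
            f.flush()
            os.fsync(f.fileno())
        checksum = digest.hexdigest()
        # Keep the expected checksum next to the data file for the verify step.
        with open(path + ".sha256", "w") as f:
            f.write(checksum + "\n")
        print("wrote", path, "sha256", checksum)


    def verify_file(path):
        """Re-hash the file (ideally after a remount) and compare to the stored hash."""
        with open(path + ".sha256") as f:
            expected = f.read().strip()
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(CHUNK), b""):
                digest.update(chunk)
        actual = digest.hexdigest()
        print("expected", expected)
        print("actual  ", actual)
        sys.exit(0 if actual == expected else 1)


    if __name__ == "__main__":
        if len(sys.argv) >= 3 and sys.argv[1] == "write":
            write_file(sys.argv[2], int(sys.argv[3]) if len(sys.argv) > 3 else 1024)
        elif len(sys.argv) >= 3 and sys.argv[1] == "verify":
            verify_file(sys.argv[2])
        else:
            print(__doc__)

Running the write step once against a fuse mount and once against a kernel mount, then remounting and verifying, gives a rough stand-in for the heavy-write-then-reboot pattern described above.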