Reproducible Data Corruption with cephfs kernel driver

I've been experimenting with CephFS for running KVM images (Proxmox).

cephfs fuse version - 0.87

cephfs kernel module - kernel version 3.10
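
For context, the two clients are mounted differently; a rough sketch of both mount commands follows (the monitor address, secret file path, and mount point are placeholders, not taken from the original post):

```shell
# Mounting CephFS two ways for comparison.
# mon1:6789, /etc/ceph/admin.secret, and /mnt/cephfs are placeholders.

# 1) FUSE client (ceph-fuse, userspace) -- no corruption observed:
ceph-fuse -m mon1:6789 /mnt/cephfs

# 2) Kernel client (in-kernel cephfs module) -- corruption after the benchmark:
mount -t ceph mon1:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
```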


Part of my testing involves bringing up a Windows 7 VM and running
CrystalDiskMark to check the I/O in the VM. It's surprisingly good with
both the fuse and the kernel driver: seq reads & writes are actually
faster than the underlying disk, so I presume the FS is aggressively
caching.

With the fuse driver I have no problems.

With the kernel driver, the benchmark runs fine, but when I reboot the
VM the drive is corrupted and unreadable, every time. Rolling back to
a snapshot fixes the disk. This does not happen unless I run the
benchmark, which I presume is writing a lot of data.

There are no problems with the same test on Ceph RBD or NFS.


-- 
Lindsay
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



