Re: Reproducible Data Corruption with cephfs kernel driver

On Wed, Dec 17, 2014 at 8:52 PM, Lindsay Mathieson
<lindsay.mathieson@xxxxxxxxx> wrote:
> I've been experimenting with CephFS for running KVM images (Proxmox).
>
> cephfs fuse version - 0.87
>
> cephfs kernel module - kernel version 3.10
>
>
> Part of my testing involves spinning up a Windows 7 VM and running
> CrystalDiskMark to check the I/O in the VM. It's surprisingly good with
> both the fuse and the kernel driver; seq reads & writes are actually
> faster than the underlying disk, so I presume the FS is aggressively
> caching.
>
> With the fuse driver I have no problems.
>
> With the kernel driver, the benchmark runs fine, but when I reboot the
> VM the drive is corrupted and unreadable, every time. Rolling back to
> a snapshot fixes the disk. This does not happen unless I run the
> benchmark, which I presume is writing a lot of data.
>
> No problems with the same test for Ceph rbd, or NFS.
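
Since the fuse client is fine, it might also be worth taking Windows
and QEMU out of the picture and hammering the kernel mount directly.
A rough sketch of what I mean; the two mount points here are
hypothetical (the same directory mounted once with the kernel client
and once with ceph-fuse):

# write ~4 GiB of sequential data through the kernel client
dd if=/dev/urandom of=/mnt/cephfs-kernel/corruption-test.bin bs=4M count=1024 conv=fsync
# read the same file back through both clients and compare
sha256sum /mnt/cephfs-kernel/corruption-test.bin /mnt/cephfs-fuse/corruption-test.bin

If the two checksums disagree, that's a much smaller reproducer than
a Windows boot failure.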

Do you have any information about *how* the drive is corrupted; what
part Win7 is unhappy with? I don't know how Proxmox configures it, but
I assume you're storing the disk images as single files on the FS?
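
If you're not sure, the cache mode Proxmox passes to QEMU shows up in
the VM config, and qemu-img can sanity-check a qcow2 image after a
failed boot. Something like this, where the VMID (100) and storage
name (cephfs) are made up:

# VM configs live under /etc/pve on the Proxmox host
grep -E '^(virtio|ide|scsi|sata)' /etc/pve/qemu-server/100.conf
# expect a disk line something like:
#   virtio0: cephfs:100/vm-100-disk-1.qcow2,cache=writeback
# then check the image itself (path assumes the default /mnt/pve mount):
qemu-img check /mnt/pve/cephfs/images/100/vm-100-disk-1.qcow2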

I'm really not sure what the kernel client could even do here, since
if you're not rebooting the host as well as the VM then it can't be
losing any of the data it's given. :/
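
One experiment that might help localize it (purely a guess on my
part): after the benchmark finishes and before you reboot the VM,
flush everything out and drop the host's page cache, then see whether
the disk still comes up corrupted:

# on the Proxmox host, between the benchmark and the VM reboot
sync                               # push dirty pages out to the cluster
echo 3 > /proc/sys/vm/drop_caches  # drop page cache, dentries and inodes

If the corruption goes away after that, it smells like a cache
coherency problem in the kernel client rather than lost writes.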
-Greg