Re: SSD-Cache Tier + RBD-Cache = Filesystem corruption?

>>Could it be that rbd caching + qemu writeback cache + ceph cache tier
>>writeback are not playing well together?


rbd cache=true is the same as qemu writeback.

Setting cache=writeback in qemu configures librbd with rbd cache=true.
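
To illustrate what I mean (a minimal sketch, the values are just an example and not your exact config): with

[client]
rbd cache = true
rbd cache writethrough until flush = true

in ceph.conf on the client, the qemu cache mode decides what librbd actually does; as far as I know, cache=writeback turns the rbd cache on and cache=none turns it off.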



If you have fs corruption, it seems that flushes from the guest are not reaching the final storage correctly.
I have never had a problem with rbd_cache=true.

Maybe it's a bug with the SSD cache tier...
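
If you want to double check what the librbd client inside qemu actually resolved at runtime, one way (just a sketch, and it assumes you add an admin socket for the client in ceph.conf) is:

[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

# then, against the socket of the running qemu process:
ceph --admin-daemon /var/run/ceph/<client-socket>.asok config show | grep rbd_cache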



----- Original Message -----
From: "Udo Waechter" <root@xxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Saturday, 6 February 2016 11:31:51
Subject: SSD-Cache Tier + RBD-Cache = Filesystem corruption?

Hello, 

I am experiencing totally weird filesystem corruptions with the 
following setup: 

* Ceph infernalis on Debian8 
* 10 OSDs (5 hosts) with spinning disks 
* 4 OSDs (1 host, with SSDs) 

The SSDs are new in my setup and I am trying to set up a cache tier. 

Now, with the spinning disks, Ceph has been running for about a year without 
any major issues. Replacing disks and the like went fine. 

Ceph is used by rbd+libvirt+kvm with 

rbd_cache = true 
rbd_cache_writethrough_until_flush = true 
rbd_cache_size = 128M 
rbd_cache_max_dirty = 96M 

Also, in libvirt, I have 

cache=writeback enabled. 
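
i.e. the disk in the domain XML looks roughly like this (a sketch; monitor host and image name are placeholders):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='rbd/vm-disk'>
    <host name='mon1' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>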

So far so good. 

Now, I've added the SSD cache tier to the picture, with cache-mode 
"writeback". 

The SSD machine also has the "deadline" I/O scheduler enabled. 

Suddenly, VMs started to corrupt their filesystems (all ext4) with "Journal 
failed". 
Trying to reboot the machines ends in "No bootable drive". 
Using parted and testdisk on the image mapped via rbd reveals that the 
partition table is gone. 

testdisk finds the proper partitions, but e2fsck afterwards "repairs" the 
filesystem beyond use. 
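
(For reference, this is roughly what I ran against one of the images; pool/image names are placeholders:)

rbd map rbd/vm-disk      # map the image on a host with the rbd kernel module
parted /dev/rbd0 print   # the partition table is reported as gone
testdisk /dev/rbd0       # the deep scan still finds the original partitions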

This does not happen to all machines; it happens to those that actually 
do some or most of the I/O: 

elasticsearch, MariaDB+Galera, postgres, backup, GIT 

Or so I thought: yesterday one of my LDAP servers died, and that one is not 
doing much I/O. 

Could it be that rbd caching + qemu writeback cache + ceph cache tier 
writeback are not playing well together? 

I've read through some older mails on the list, where people had similar 
problems and suspected something like that. 

What are the proper/right settings for rbd/qemu/libvirt? 

libvirt: cache=none (or writeback?) 
rbd: rbd_cache = false 
SSD tier: cache-mode writeback 

? 

Thanks for any help, 
udo. 


_______________________________________________ 
ceph-users mailing list 
ceph-users@xxxxxxxxxxxxxx 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 