Re: RBD I/O errors with QEMU [luminous upgrade/osd change]

Sarunas,

may I ask when this happened?

And did you move OSDs or mons after that export/import procedure?

I really wonder what the reason for this behaviour is, and whether we
are likely to experience it again.

Best,

Nico

Sarunas Burdulis <sarunas@xxxxxxxxxxxxxxxxxx> writes:

> On 2017-09-10 08:23, Nico Schottelius wrote:
>>
>> Good morning,
>>
>> yesterday we had an unpleasant surprise that I would like to discuss:
>>
>> Many (not all!) of our VMs suddenly died (the qemu process exited),
>> and when we tried to restart them, we saw I/O errors on the disks
>> inside the qemu guest, and the OS was unable to boot (i.e. it stopped
>> in the initramfs).
>
> We experienced the same after upgrading from kraken to luminous, i.e.
> all VMs with their system images in a Ceph pool failed to boot due to
> filesystem errors, ending up in the initramfs. fsck wasn't able to fix
> them.
>
>> When we exported the image from rbd and loop-mounted it, however,
>> there were no I/O errors and the filesystem could be mounted cleanly [-1].
>
> Same here.
>
> We ended up rbd-exporting images from the Ceph rbd pool to a local
> filesystem and importing them back. That "fixed" them without the need
> for fsck.
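For the archives, if I understand that procedure right, the round-trip
is roughly the following (pool and image names here are placeholders,
adjust to your setup):

  # copy the image out of the cluster and verify it is readable
  rbd export vms/vm-disk1 /var/tmp/vm-disk1.raw
  losetup -fP --show /var/tmp/vm-disk1.raw      # prints e.g. /dev/loop0
  mount -o ro /dev/loop0p1 /mnt && umount /mnt  # check it mounts cleanly
  losetup -d /dev/loop0

  # move the broken image aside and re-import the exported copy
  rbd mv vms/vm-disk1 vms/vm-disk1.broken
  rbd import /var/tmp/vm-disk1.raw vms/vm-disk1

  # once the VM boots again, clean up
  rbd rm vms/vm-disk1.broken
  rm /var/tmp/vm-disk1.raw

Note that the round-trip flattens the image: snapshots and clone
relationships on the original are not carried over by export/import.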


--
Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


