Re: RBD I/O errors with QEMU [luminous upgrade/osd change]

I'd definitely love to see debug-level logs (debug rbd = 20 and
debug objecter = 20) from any VM that experiences this issue. The only
thing I can think of is something to do with sparse object handling,
since (1) krbd doesn't perform sparse reads and (2) re-importing the
file would eliminate intra-object sparseness if a pre-Luminous rbd CLI
was used.
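
In case it helps, here is a rough sketch of one way to capture that
logging for a librbd client on the hypervisor; the client name, socket
path and log path below are only examples, so adjust them for your setup:

    # /etc/ceph/ceph.conf on the host running the qemu process
    [client]
        debug rbd = 20
        debug objecter = 20
        log file = /var/log/ceph/$cluster-$name.$pid.log
        admin socket = /var/run/ceph/$cluster-$name.$pid.asok

    # or, for an already-running VM, raise the levels at runtime via the
    # client's admin socket (if one is configured):
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok \
        config set debug_rbd 20
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok \
        config set debug_objecter 20

Then reproduce the I/O error in the guest and send along the resulting
client log.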

On Mon, Sep 11, 2017 at 9:31 AM, Nico Schottelius
<nico.schottelius@xxxxxxxxxxx> wrote:
>
> Sarunas,
>
> may I ask when this happened?
>
> And did you move OSDs or mons after that export/import procedure?
>
> I really wonder what the reason for this behaviour is, and whether we are
> likely to experience it again.
>
> Best,
>
> Nico
>
> Sarunas Burdulis <sarunas@xxxxxxxxxxxxxxxxxx> writes:
>
>> On 2017-09-10 08:23, Nico Schottelius wrote:
>>>
>>> Good morning,
>>>
>>> yesterday we had an unpleasant surprise that I would like to discuss:
>>>
>>> Many (not all!) of our VMs suddenly died (the qemu process exited), and
>>> when we tried to restart them, the guest inside qemu saw I/O errors on
>>> its disks and the OS was not able to boot (i.e. it stopped in the
>>> initramfs).
>>
>> We experienced the same after the upgrade from Kraken to Luminous, i.e.
>> all VMs with their system images in a Ceph pool failed to boot due to
>> filesystem errors, ending up in the initramfs. fsck wasn't able to fix
>> them.
>>
>>> When we exported the image from rbd and loop-mounted it, however, there
>>> were no I/O errors and the filesystem could be mounted cleanly [-1].
>>
>> Same here.
>>
>> We ended up rbd-exporting the images from the Ceph rbd pool to a local
>> filesystem and importing them back. That "fixed" them without the need
>> for fsck.
>
>
> --
> Modern, affordable, Swiss Virtual Machines. Visit www.datacenterlight.ch
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
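
For reference, a rough sketch of the export / loop-mount check and the
export / re-import round trip described above; pool, image, and path
names are placeholders, and the VM should be shut down first:

    # export the image and inspect the copy read-only
    rbd export mypool/myimage /var/tmp/myimage.raw
    losetup --find --show --partscan /var/tmp/myimage.raw   # e.g. /dev/loop0
    fsck -n /dev/loop0p1                                    # or: mount -o ro

    # if the exported copy is clean, replace the in-pool image with it
    rbd rename mypool/myimage mypool/myimage.broken
    rbd import /var/tmp/myimage.raw mypool/myimage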



-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


