Re: corrupted rbd filesystems since jewel

Example:
# rbd rm cephstor2/vm-136-disk-1
Removing image: 99% complete...

It gets stuck at 99% and never completes. This is an image that got
corrupted for an unknown reason.
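In case the stuck removal is related to the object map (just a guess on
our side), the image flags might be worth checking, and possibly a
rebuild of the map. A sketch, using the image from the example above:

# rbd info cephstor2/vm-136-disk-1
(a "flags: object map invalid" line in the output would point at the map)
# rbd object-map rebuild cephstor2/vm-136-disk-1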

Greets,
Stefan

On 04.05.2017 at 08:32, Stefan Priebe - Profihost AG wrote:
> I'm not sure whether this is related, but our backup system uses rbd
> snapshots and sometimes reports messages like this:
> 2017-05-04 02:42:47.661263 7f3316ffd700 -1
> librbd::object_map::InvalidateRequest: 0x7f3310002570 should_complete: r=0
> 
> Stefan
> 
> 
> On 04.05.2017 at 07:49, Stefan Priebe - Profihost AG wrote:
>> Hello,
>>
>> since we upgraded from hammer to jewel 10.2.7 and enabled
>> exclusive-lock, object-map, and fast-diff, we have had problems with
>> corrupted VM filesystems.
>>
>> Sometimes the VMs just crash with FS errors, and a restart solves the
>> problem. Sometimes the whole VM is not even bootable and we need to
>> restore from a backup.
>>
>> All of them share the same symptom: you can't revert to an older
>> snapshot. The rbd command just hangs at 99% forever.
>>
>> Is this a known issue? Is there anything we can check?
>>
>> Greets,
>> Stefan
>>
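Would disabling the new features again and retrying the rollback be a
sane workaround? A sketch (dependency order matters: fast-diff needs
object-map, which needs exclusive-lock; the snapshot name below is just
a placeholder):

# rbd feature disable cephstor2/vm-136-disk-1 fast-diff object-map exclusive-lock
# rbd snap rollback cephstor2/vm-136-disk-1@<snapname>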
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


