librbd::operation::FlattenRequest


 



Hello,

I cannot flatten an image; it always restarts with the following errors:

root@xxxxxxxxxxxxxxxxxxxxxxx:~# rbd flatten vm-hdd/vm-104-disk-1
Image flatten: 28% complete...2021-04-29 10:50:27.373 7ff7caffd700 -1 librbd::operation::FlattenRequest: 0x7ff7c4009db0 should_complete: encountered error: (85) Interrupted system call should be restarted
Image flatten: 26% complete...2021-04-29 10:50:33.053 7ff7caffd700 -1 librbd::operation::FlattenRequest: 0x7ff7c4008fc0 should_complete: encountered error: (85) Interrupted system call should be restarted
Image flatten: 0% complete...2021-04-29 10:50:34.829 7ff7caffd700 -1 librbd::operation::FlattenRequest: 0x7ff7c445b470 should_complete: encountered error: (85) Interrupted system call should be restarted
Image flatten: 39% complete...2021-04-29 10:50:42.081 7ff7caffd700 -1 librbd::operation::FlattenRequest: 0x7ff7c40324e0 should_complete: encountered error: (85) Interrupted system call should be restarted
Image flatten: 0% complete...2021-04-29 10:50:43.897 7ff7caffd700 -1 librbd::operation::FlattenRequest: 0x7ff7c4018890 should_complete: encountered error: (85) Interrupted system call should be restarted
Image flatten: 42% complete...2021-04-29 10:51:07.813 7ff7caffd700 -1 librbd::operation::FlattenRequest: 0x7ff7c402fe80 should_complete: encountered error: (85) Interrupted system call should be restarted
Image flatten: 42% complete...2021-04-29 10:51:29.372 7ff7caffd700 -1 librbd::operation::FlattenRequest: 0x7ff7c40017c0 should_complete: encountered error: (85) Interrupted system call should be restarted


root@xxxxxxxxxxxxxxxxxxxxxxx:~# uname -a
Linux sm-node1.in.illusion.hu 5.4.106-1-pve #1 SMP PVE 5.4.106-1 (Fri, 19 Mar 2021 11:08:47 +0100) x86_64 GNU/Linux

root@xxxxxxxxxxxxxxxxxxxxxxx:~# dpkg -l |grep ceph
ii  ceph                  14.2.20-pve1  amd64  distributed storage and file system
ii  ceph-base             14.2.20-pve1  amd64  common ceph daemon libraries and management tools
ii  ceph-common           14.2.20-pve1  amd64  common utilities to mount and interact with a ceph storage cluster
ii  ceph-fuse             14.2.20-pve1  amd64  FUSE-based client for the Ceph distributed file system
ii  ceph-mds              14.2.20-pve1  amd64  metadata server for the ceph distributed file system
ii  ceph-mgr              14.2.20-pve1  amd64  manager for the ceph distributed storage system
ii  ceph-mgr-dashboard    14.2.20-pve1  all    dashboard plugin for ceph-mgr
ii  ceph-mon              14.2.20-pve1  amd64  monitor server for the ceph storage system
ii  ceph-osd              14.2.20-pve1  amd64  OSD server for the ceph storage system
ii  libcephfs2            14.2.20-pve1  amd64  Ceph distributed file system client library
ii  python-ceph-argparse  14.2.20-pve1  all    Python 2 utility libraries for Ceph CLI
ii  python-cephfs         14.2.20-pve1  amd64  Python 2 libraries for the Ceph libcephfs library


root@xxxxxxxxxxxxxxxxxxxxxxx:~# dpkg -l |grep rbd
ii  librbd1     14.2.20-pve1  amd64  RADOS block device client library
ii  python-rbd  14.2.20-pve1  amd64  Python 2 libraries for the Ceph librbd library

root@xxxxxxxxxxxxxxxxxxxxxxx:~# modinfo rbd
filename:       /lib/modules/5.4.106-1-pve/kernel/drivers/block/rbd.ko
license:        GPL
description:    RADOS Block Device (RBD) driver
author:         Jeff Garzik <jeff@xxxxxxxxxx>
author:         Yehuda Sadeh <yehuda@xxxxxxxxxxxxxxx>
author:         Sage Weil <sage@xxxxxxxxxxxx>
author:         Alex Elder <elder@xxxxxxxxxxx>
srcversion:     7BA6FEE20249E416B2D09AB
depends:        libceph
retpoline:      Y
intree:         Y
name:           rbd
vermagic:       5.4.106-1-pve SMP mod_unload modversions
parm:           single_major:Use a single major number for all rbd devices (default: true) (bool)

Could you please advise me on how to gather more information to track this down?
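In case it helps: errno 85 on Linux is ERESTART ("Interrupted system call should be restarted"). One way I could try to capture more detail is to re-run the flatten with verbose client-side logging. This is only a sketch using standard Ceph client debug settings (debug_rbd, debug_ms, log_file); the log path and debug levels below are my own guesses, not anything I have verified against this problem:

```shell
# Re-run the flatten with verbose librbd logging written to a file.
# --debug-rbd / --debug-ms raise the librbd and messenger log levels;
# levels and log path here are illustrative, not required values.
rbd flatten vm-hdd/vm-104-disk-1 \
    --debug-rbd 20 \
    --debug-ms 1 \
    --log-file /var/log/ceph/rbd-flatten-debug.log \
    --log-to-stderr false
```

The resulting log should show which object operation FlattenRequest was waiting on when the ERESTART came back, which might narrow down whether the error originates on an OSD or in the client.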

Thank you,

i.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



