Re: rbd: error processing image xxx (2) No such file or directory


 



Hi,

If you run "rbd snap ls --all", you should see a snapshot in
the "trash" namespace.

I just tried the command "rbd snap ls --all" on a lab cluster (Nautilus) and got this error:

ceph-2:~ # rbd snap ls --all
rbd: image name was not specified

Are there any requirements I haven't noticed? This lab cluster was upgraded from Mimic a couple of weeks ago.

ceph-2:~ # ceph version
ceph version 14.1.0-559-gf1a72cff25 (f1a72cff2522833d16ff057ed43eeaddfc17ea8a) nautilus (dev)

Regards,
Eugen


Quoting Jason Dillaman <jdillama@xxxxxxxxxx>:

On Tue, Apr 2, 2019 at 4:19 AM Nikola Ciprich
<nikola.ciprich@xxxxxxxxxxx> wrote:

Hi,

on one of my clusters, I'm getting an error message that makes me a bit
nervous: while listing the contents of a pool, I get an error for one
of the images:

[root@node1 ~]# rbd ls -l nvme > /dev/null
rbd: error processing image  xxx: (2) No such file or directory

[root@node1 ~]# rbd info nvme/xxx
rbd image 'xxx':
    size 60 GiB in 15360 objects
    order 22 (4 MiB objects)
    id: 132773d6deb56
    block_name_prefix: rbd_data.132773d6deb56
    format: 2
    features: layering, operations
    op_features: snap-trash
    flags:
    create_timestamp: Wed Aug 29 12:25:13 2018

The volume contains production data and seems to be working correctly (it's
used by a VM).

Is this something to worry about? What is the snap-trash feature? I wasn't
able to find much about it.

This implies that you are (or were) using transparent image clones and
that you deleted a snapshot that had one or more child images attached
to it. If you run "rbd snap ls --all", you should see a snapshot in
the "trash" namespace. You can also list its child images by running
"rbd children --snap-id <id from snap ls> <image-spec>".
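Putting those two commands together, the inspection might look like this ("nvme/xxx" stands in for the affected image from earlier in the thread, and the snapshot ID is whatever the first command reports; this is a sketch, not output from a real cluster):

```shell
# List all snapshots of the image, including those moved to the
# "trash" namespace after a deferred deletion:
rbd snap ls --all nvme/xxx

# For a snapshot shown in the trash namespace, list the child images
# still attached to it, using the ID column from the listing above:
rbd children --snap-id <id from snap ls> nvme/xxx
```

Once the remaining children are flattened or removed, the trashed snapshot can be cleaned up and the op_feature should clear.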

There is definitely an issue with the "rbd ls --long" command: when it
attempts to list all snapshots of an image, it incorrectly uses the
snapshot's name instead of its ID. I've opened a tracker ticket to get
the bug fixed [1]. It was fixed in Nautilus but wasn't flagged for
backport to Mimic.

I'm running ceph 13.2.4 on centos 7.

I'd be grateful for any help.

BR

nik


--
-------------------------------------
Ing. Nikola CIPRICH
LinuxBox.cz, s.r.o.
28.rijna 168, 709 00 Ostrava

tel.:   +420 591 166 214
fax:    +420 596 621 273
mobil:  +420 777 093 799
www.linuxbox.cz

mobil servis: +420 737 238 656
email servis: servis@xxxxxxxxxxx
-------------------------------------
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[1] http://tracker.ceph.com/issues/39081

--
Jason





