Re: rbd.ReadOnlyImage: [errno 30]

Thank you all for your quick answers.
I think that will solve our problem.

This is what we came up with:
rbd -c /etc/ceph/Oceph.conf --keyring /etc/ceph/Oceph.client.admin.keyring export rbd/disk_test - | rbd -c /etc/ceph/Nceph.conf --keyring /etc/ceph/Nceph.client.admin.keyring import - rbd/disk_test

This rbd image is a test image with only 5 GB of data in it.

Unfortunately, the command seems to be stuck and nothing happens; we see no activity on ports 7800, 6789, or 22.

We can't find any logs on any of the monitors.
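
To narrow this down, a few sanity checks we can try against each cluster separately (just a sketch, reusing the same conf/keyring paths as in the command above):

ceph -c /etc/ceph/Oceph.conf --keyring /etc/ceph/Oceph.client.admin.keyring -s
ceph -c /etc/ceph/Nceph.conf --keyring /etc/ceph/Nceph.client.admin.keyring -s
rbd -c /etc/ceph/Oceph.conf --keyring /etc/ceph/Oceph.client.admin.keyring info rbd/disk_test
rbd -c /etc/ceph/Nceph.conf --keyring /etc/ceph/Nceph.client.admin.keyring ls rbd

If pv is installed, piping the export through it (rbd export ... - | pv | rbd import ...) should also show whether any data is actually flowing.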

Thanks!

-----Original Message-----
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On Behalf Of Jason Dillaman
Sent: 04 June 2019 14:14
To: 解决 <zhanrongzhen89@xxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: rbd.ReadOnlyImage: [errno 30]

On Tue, Jun 4, 2019 at 4:55 AM 解决 <zhanrongzhen89@xxxxxxx> wrote:
>
> Hi all,
> We use Ceph (Luminous) + OpenStack (Queens) in my test
> environment. The virtual machine does not start properly after the
> disaster test, and we cannot create a snapshot of the virtual
> machine's image. The procedure is as follows:
> #!/usr/bin/env python
>
> import rados
> import rbd
> with rados.Rados(conffile='/etc/ceph/ceph.conf',rados_id='nova') as cluster:
>     with cluster.open_ioctx('vms') as ioctx:
>         rbd_inst = rbd.RBD()
>         print "start open rbd image"
>         with rbd.Image(ioctx, '10df4634-4401-45ca-9c57-f349b78da475_disk') as image:
>             print "start create snapshot"
>             image.create_snap('myimage_snap1')
>
> When I run it, it raises ReadOnlyImage, as follows:
>
> start open rbd image
> start create snapshot
> Traceback (most recent call last):
>   File "testpool.py", line 17, in <module>
>     image.create_snap('myimage_snap1')
>   File "rbd.pyx", line 1790, in rbd.Image.create_snap (/builddir/build/BUILD/ceph-12.2.5/build/src/pybind/rbd/pyrex/rbd.c:15682)
> rbd.ReadOnlyImage: [errno 30] error creating snapshot myimage_snap1 from 10df4634-4401-45ca-9c57-f349b78da475_disk
>
> But when I run it as admin instead of nova, it works.
>
> The output of "ceph auth list" is as follows:
>
> installed auth entries:
>
> osd.1
> key: AQBL7uRcfuyxEBAAoK8JrQWMU6EEf/g83zKJjg==
> caps: [mon] allow profile osd
> caps: [osd] allow *
> osd.10
> key: AQCV7uRcdsB9IBAAHbHHCaylVUZIPKFX20polQ==
> caps: [mon] allow profile osd
> caps: [osd] allow *
> osd.11
> key: AQCW7uRcRIMRIhAAbXfLbQwijEO5ZQFWFZaO5w==
> caps: [mon] allow profile osd
> caps: [osd] allow *
> osd.2
> key: AQBL7uRcfFMWDBAAo7kjQobGBbIHYfZkx45pOw==
> caps: [mon] allow profile osd
> caps: [osd] allow *
> osd.4
> key: AQBk7uRc97CPOBAAK9IBJICvchZPc5p80bISsg==
> caps: [mon] allow profile osd
> caps: [osd] allow *
> osd.5
> key: AQBk7uRcOdqaORAAkQeEtYsE6rLWLPhYuCTdHA==
> caps: [mon] allow profile osd
> caps: [osd] allow *
> osd.7
> key: AQB97uRc+1eRJxAA34DImQIMFjzHSXZ25djp0Q==
> caps: [mon] allow profile osd
> caps: [osd] allow *
> osd.8
> key: AQB97uRcFilBJhAAXzSzNJsgwpobC8654Xo7Sw==
> caps: [mon] allow profile osd
> caps: [osd] allow *
> client.admin
> key: AQAU7uRcNia+BBAA09mOYdX+yJWbLCjcuMih0A==
> auid: 0
> caps: [mds] allow
> caps: [mgr] allow *
> caps: [mon] allow *
> caps: [osd] allow *
> client.cinder
> key: AQBp7+RcOzPHGxAA7azgyayVu2RRNWJ7JxSJEg==
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=volumes-cache, allow rwx pool=vms, allow rwx pool=vms-cache, allow rx pool=images, allow rx pool=images-cache
> client.cinder-backup
> key: AQBq7+RcVOwGNRAAiwJ59ZvAUc0H4QkVeN82vA==
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=backups, allow rwx pool=backups-cache
> client.glance
> key: AQDf7uRc32hDBBAAkGucQEVTWqnIpNvihXf/Ng==
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images, allow rwx pool=images-cache
> client.nova
> key: AQDN7+RcqDABIxAAXnFcVjBp/S5GkgOy0wqB1Q==
> caps: [mon] allow r
> caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=volumes-cache, allow rwx pool=vms, allow rwx pool=vms-cache, allow rwx pool=images, allow rwx pool=images-cache
> client.radosgw.gateway
> key: AQAU7uRccP06CBAA6zLFtDQoTstl8CNclYRugQ==
> auid: 0
> caps: [mon] allow rwx
> caps: [osd] allow rwx
> mgr.172.30.126.26
> key: AQAr7uRclc52MhAA+GWCQEVnAHB01tMFpgJtTQ==
> caps: [mds] allow *
> caps: [mon] allow profile mgr
> caps: [osd] allow *
> mgr.172.30.126.27
> key: AQAs7uRclkD2OBAAW/cUhcZEebZnQulqVodiXQ==
> caps: [mds] allow *
> caps: [mon] allow profile mgr
> caps: [osd] allow *
> mgr.172.30.126.28
> key: AQAu7uRcT9OLBBAAZbEjb/N1NnZpIgfaAcThyQ==
> caps: [mds] allow *
> caps: [mon] allow profile mgr
> caps: [osd] allow *
>
>
> Can someone explain it to me?

Your clients don't have the correct caps. See [1] or [2].
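
Most likely the nova client lacks permission to blacklist a dead lock owner, so the exclusive lock can't be broken and the image is opened read-only. Something along these lines should address it (a sketch that mirrors the pools in your current nova caps; adjust to your environment):

ceph auth caps client.nova mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=volumes-cache, profile rbd pool=vms, profile rbd pool=vms-cache, profile rbd pool=images, profile rbd pool=images-cache'

Alternatively, keep the existing OSD caps and just add 'allow command "osd blacklist"' to the mon caps, as described in [1].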


> thanks!!
>
>

[1] http://docs.ceph.com/docs/mimic/releases/luminous/#upgrade-from-jewel-or-kraken
[2] http://docs.ceph.com/docs/luminous/rbd/rados-rbd-cmds/#create-a-block-device-user

--
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



