Have you updated to ceph-iscsi-config-2.4-1 and ceph-iscsi-cli-2.6-1? Any
error messages in /var/log/rbd-target-api.log?

On Wed, Feb 14, 2018 at 8:49 AM, Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
> Thank you for the prompt response
>
> I was unable to install rtslib even AFTER I installed the latest version of
> python-pyudev (0.21):
>
> git clone git://github.com/pyudev/pyudev.git
>
> pyudev]# pip install --upgrade .
> Processing /root/pyudev
> Collecting six (from pyudev==0.21.0dev-20180214)
>   Downloading six-1.11.0-py2.py3-none-any.whl
> Installing collected packages: six, pyudev
>   Found existing installation: six 1.9.0
>     Uninstalling six-1.9.0:
>       Successfully uninstalled six-1.9.0
>   Found existing installation: pyudev 0.15
>     Uninstalling pyudev-0.15:
>       Successfully uninstalled pyudev-0.15
>   Running setup.py install for pyudev ... done
> Successfully installed pyudev-0.21.0.dev20180214 six-1.11.0
>
> rpm -Uvh python-rtslib-2.1.fb67-1.noarch.rpm
> error: Failed dependencies:
>         python-pyudev >= 0.16.1 is needed by python-rtslib-2.1.fb67-1.noarch
>         python2-pyudev is needed by python-rtslib-2.1.fb67-1.noarch
>
> Furthermore, it appears that although error 500 still occurs when creating
> disks, the disks are there after I restart the rbd services.
>
> Any ideas?
>
> /disks> create pool=rbd image=image04 size=50G
> Failed : 500 INTERNAL SERVER ERROR
> /disks> ls
> o- disks
> ..........................................................................................................
> [200G, Disks: 4]
>   o- rbd.image01
> ...................................................................................................
> [image01 (50G)]
>   o- rbd.image02
> ...................................................................................................
> [image02 (50G)]
>   o- rbd.image03
> ...................................................................................................
> [image03 (50G)]
>   o- ssdpool.ssdtest
> ...............................................................................................
> [ssdtest (50G)]
> /disks> exit
> [root@osd01 latest]# systemctl restart rbd-target-api.service
> [root@osd01 latest]# systemctl restart rbd-target-gw.service
> /disks> ls
> o- disks
> ..........................................................................................................
> [250G, Disks: 5]
>   o- rbd.image01
> ...................................................................................................
> [image01 (50G)]
>   o- rbd.image02
> ...................................................................................................
> [image02 (50G)]
>   o- rbd.image03
> ...................................................................................................
> [image03 (50G)]
>   o- rbd.image04
> ...................................................................................................
> [image04 (50G)]
>   o- ssdpool.ssdtest
> ...............................................................................................
> [ssdtest (50G)]
>
>
> On 13 February 2018 at 14:56, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
>>
>> It looks like that package was configured to auto-delete on shaman.
>> I've submitted a fix so it shouldn't happen again in the future, but
>> in the meantime I pushed and built python-rtslib-2.1.fb67-1 [1].
>>
>> [1] https://shaman.ceph.com/repos/python-rtslib/
>>
>> On Tue, Feb 13, 2018 at 2:09 PM, Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
>> > Hi,
>> >
>> > I noticed a new ceph kernel (4.15.0-ceph-g1c778f43da52) was made
>> > available, so I upgraded my test environment.
>> >
>> > Now the iSCSI gateway has stopped working:
>> > ERROR [rbd-target-api:1430:call_api()] - _disk change on osd02 failed
>> > with 500
>> >
>> > So I was thinking that I have to update all the packages.
>> > I downloaded the packages from the same repository but cannot install
>> > (some of) them.
>> >
>> > Here is the list of packages:
>> >
>> > [root@osd01 latest]# ls -al
>> > total 300772
>> > drwxr-xr-x   2 root root      4096 Feb 13 13:25 .
>> > dr-xr-x---. 17 root root      4096 Feb 13 13:09 ..
>> > -rw-r--r--   1 root root     98312 Jan 22 09:58 ceph-iscsi-cli-2.5-83.g777c38a.el7.noarch.rpm
>> > -rw-r--r--   1 root root     83560 Jan 22 09:58 ceph-iscsi-config-2.3-39.g44546a1.el7.noarch.rpm
>> > -rw-r--r--   1 root root 307086748 Jan 22 08:57 kernel-4.13.0_ceph_g293073e5ae00-2.x86_64.rpm
>> > -rw-r--r--   1 root root      1890 Jan 22 08:57 libtcmu-devel-v1.3.0-v1.3.0.x86_64.rpm
>> > -rw-r--r--   1 root root    146637 Jan 22 08:57 libtcmu-v1.3.0-v1.3.0.x86_64.rpm
>> > -rw-r--r--   1 root root     83302 Feb 13 13:23 python2-pyudev-0.21.0-4.fc27.noarch.rpm
>> > -rw-r--r--   1 root root     37316 Feb 13 13:24 python2-six-1.11.0-2.fc28.noarch.rpm
>> > -rw-r--r--   1 root root    166520 Jan 22 09:57 python-rtslib-v2.1.fb62-35.g3183121.noarch.rpm
>> > -rw-r--r--   1 root root    255974 Jan 22 08:57 tcmu-runner-v1.3.0-v1.3.0.x86_64.rpm
>> >
>> >
>> > Here is the error message:
>> > rpm -Uvh ceph-iscsi-config-2.3-39.g44546a1.el7.noarch.rpm
>> > error: Failed dependencies:
>> >         python-rtslib >= 2.1 is needed by ceph-iscsi-config-2.3-39.g44546a1.el7.noarch
>> > [root@osd01 latest]# rpm -Uvh python-rtslib-v2.1.fb62-35.g3183121.noarch.rpm
>> > ceph-iscsi-config-2.3-39.g44546a1.el7.noarch.rpm
>> > ceph-iscsi-cli-2.5-83.g777c38a.el7.noarch.rpm
>> > error: Failed dependencies:
>> >         python-pyudev >= 0.16.1 is needed by python-rtslib-v2.1.fb62-35.g3183121.noarch
>> >         python2-pyudev is needed by python-rtslib-v2.1.fb62-35.g3183121.noarch
>> >         python-rtslib >= 2.1 is needed by ceph-iscsi-config-2.3-39.g44546a1.el7.noarch
>> >         python-rtslib >= 2.1 is needed by ceph-iscsi-cli-2.5-83.g777c38a.el7.noarch
>> >
>> >
>> > It would be appreciated if someone could provide instructions/steps for
>> > upgrading the kernel without breaking any other functionality.
>> >
>> > Thanks
>> > Steven
>> >
>> > _______________________________________________
>> > ceph-users mailing list
>> > ceph-users@xxxxxxxxxxxxxx
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>>
>>
>>
>> --
>> Jason
>
>

--
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
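
A note on the recurring pyudev dependency failure in this thread: pip installs
pyudev into Python's site-packages but registers nothing in the RPM database,
so rpm still reports "python2-pyudev is needed" even after the pip upgrade to
0.21. One way around it is to install the packaged pyudev and six instead of
the pip build, then the rest of the stack. This is only a sketch: the pyudev,
six, and rtslib file names are taken from the listings above, while the
ceph-iscsi-config-2.4-1/ceph-iscsi-cli-2.6-1 file names are assumptions based
on the versions Jason mentions and may differ in the actual repository.

```shell
# Satisfy the RPM-level dependency with the packaged pyudev/six
# (a pip install of the same module does not count for rpm):
rpm -Uvh python2-pyudev-0.21.0-4.fc27.noarch.rpm \
         python2-six-1.11.0-2.fc28.noarch.rpm

# Alternatively, pass all interdependent packages in one transaction
# and let rpm order the installs itself (assumed ceph-iscsi file names):
rpm -Uvh python2-pyudev-0.21.0-4.fc27.noarch.rpm \
         python2-six-1.11.0-2.fc28.noarch.rpm \
         python-rtslib-2.1.fb67-1.noarch.rpm \
         ceph-iscsi-config-2.4-1.el7.noarch.rpm \
         ceph-iscsi-cli-2.6-1.el7.noarch.rpm
```

Either way, restart rbd-target-api/rbd-target-gw afterwards, as done above.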