Re: Jewel upgrade - rbd errors after upgrade

What OS are you using?  It actually sounds like the plugins were
updated, the Infernalis OSD was reset, and then the Jewel OSD was
installed.
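
If that is what happened, restarting the OSDs once the packages are fully in place should make them dlopen the upgraded class libraries. A minimal sketch, assuming systemd-managed OSDs on the EL7 hosts shown in this thread (osd.20 is the one from the log further down):

    # confirm which version each OSD is actually running
    ceph tell osd.* version

    # restart the daemon that logged the dlopen failure so it picks up
    # the Jewel /usr/lib64/rados-classes/libcls_rbd.so
    systemctl restart ceph-osd@20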

On Sun, Jun 5, 2016 at 10:42 PM, Adrian Saul
<Adrian.Saul@xxxxxxxxxxxxxxxxx> wrote:
>
> Thanks Jason.
>
> I don’t have anything specified explicitly for osd class dir. I suspect it might be related to the OSDs being restarted during the package upgrade process before all the libraries were upgraded.
>
>
>> -----Original Message-----
>> From: Jason Dillaman [mailto:jdillama@xxxxxxxxxx]
>> Sent: Monday, 6 June 2016 12:37 PM
>> To: Adrian Saul
>> Cc: ceph-users@xxxxxxxxxxxxxx
>> Subject: Re:  Jewel upgrade - rbd errors after upgrade
>>
>> Odd -- sounds like you might have Jewel and Infernalis class objects and
>> OSDs intermixed. I would double-check your installation and see if your
>> configuration has any override for "osd class dir".
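
A quick way to see what a daemon is actually using (a sketch, assuming the admin socket is in its default location and that the command is run on the host carrying the affected OSD, here osd.20):

    # effective value inside the running daemon
    ceph daemon osd.20 config get osd_class_dir

    # what the config file / built-in default resolves to
    ceph-conf --show-config-value osd_class_dir

The default on EL7 is /usr/lib64/rados-classes, so with no override the OSD simply loads whatever package last dropped libcls_rbd.so there.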
>>
>> On Sun, Jun 5, 2016 at 10:28 PM, Adrian Saul
>> <Adrian.Saul@xxxxxxxxxxxxxxxxx> wrote:
>> >
>> > I have traced it back to an OSD giving this error:
>> >
>> > 2016-06-06 12:18:14.315573 7fd714679700 -1 osd.20 23623 class rbd open
>> > got (5) Input/output error
>> > 2016-06-06 12:19:49.835227 7fd714679700  0 _load_class could not open
>> > class /usr/lib64/rados-classes/libcls_rbd.so (dlopen failed):
>> > /usr/lib64/rados-classes/libcls_rbd.so: undefined symbol:
>> > _ZN4ceph6buffer4list8iteratorC1EPS1_j
>> >
>> > Trying to figure out why that is the case.
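
One way to confirm that kind of mismatch (a sketch, assuming an RPM-based install like the one in the log; the symbol is copied from the error above):

    # which package owns the class library, and which ceph packages are installed
    rpm -qf /usr/lib64/rados-classes/libcls_rbd.so
    rpm -qa | grep ceph

    # the undefined symbol demangles to a ceph::buffer::list::iterator constructor,
    # presumably one the still-running Infernalis osd binary does not export
    echo _ZN4ceph6buffer4list8iteratorC1EPS1_j | c++filt

If the on-disk plugin turns out to be newer than the running OSD binary, restarting the OSD after the upgrade completes should clear the dlopen failure.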
>> >
>> >
>> >> -----Original Message-----
>> >> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf
>> >> Of Adrian Saul
>> >> Sent: Monday, 6 June 2016 11:11 AM
>> >> To: dillaman@xxxxxxxxxx
>> >> Cc: ceph-users@xxxxxxxxxxxxxx
>> >> Subject: Re:  Jewel upgrade - rbd errors after upgrade
>> >>
>> >>
>> >> No - it throws a usage error - if I add a file argument after it, it works:
>> >>
>> >> [root@ceph-glb-fec-02 ceph]# rados -p glebe-sata get rbd_id.hypervtst-lun04 /tmp/crap
>> >> [root@ceph-glb-fec-02 ceph]# cat /tmp/crap
>> >> 109eb01f5f89de
>> >>
>> >> stat works:
>> >>
>> >> [root@ceph-glb-fec-02 ceph]# rados -p glebe-sata stat rbd_id.hypervtst-lun04
>> >> glebe-sata/rbd_id.hypervtst-lun04 mtime 2016-06-06 10:55:08.000000, size 18
>> >>
>> >>
>> >> I can do a rados ls:
>> >>
>> >> [root@ceph-glb-fec-02 ceph]# rados ls -p glebe-sata|grep rbd_id
>> >> rbd_id.cloud2sql-lun01
>> >> rbd_id.glbcluster3-vm17
>> >> rbd_id.holder   <<<  a create that said it failed while I was debugging this
>> >> rbd_id.pvtcloud-nfs01
>> >> rbd_id.hypervtst-lun05
>> >> rbd_id.test02
>> >> rbd_id.cloud2sql-lun02
>> >> rbd_id.fiotest2
>> >> rbd_id.radmast02-lun04
>> >> rbd_id.hypervtst-lun04
>> >> rbd_id.cloud2fs-lun00
>> >> rbd_id.radmast02-lun03
>> >> rbd_id.hypervtst-lun00
>> >> rbd_id.cloud2sql-lun00
>> >> rbd_id.radmast02-lun02
>> >>
>> >>
>> >> > -----Original Message-----
>> >> > From: Jason Dillaman [mailto:jdillama@xxxxxxxxxx]
>> >> > Sent: Monday, 6 June 2016 11:00 AM
>> >> > To: Adrian Saul
>> >> > Cc: ceph-users@xxxxxxxxxxxxxx
>> >> > Subject: Re:  Jewel upgrade - rbd errors after upgrade
>> >> >
>> >> > Are you able to run the following command successfully?
>> >> >
>> >> > rados -p glebe-sata get rbd_id.hypervtst-lun04
>> >> >
>> >> >
>> >> >
>> >> > On Sun, Jun 5, 2016 at 8:49 PM, Adrian Saul
>> >> > <Adrian.Saul@xxxxxxxxxxxxxxxxx> wrote:
>> >> > >
>> >> > > I upgraded my Infernalis semi-production cluster to Jewel on Friday. The
>> >> > > upgrade went through smoothly (aside from a time-wasting restorecon of
>> >> > > /var/lib/ceph in the selinux package upgrade) and the services continued
>> >> > > running without interruption. However, this morning when I went to create
>> >> > > some new RBD images I found I am unable to do much at all with RBD.
>> >> > >
>> >> > > Just about any rbd command fails with an I/O error. I can run showmapped
>> >> > > but that is about it - anything like an ls, info or status fails. This
>> >> > > applies to all my pools.
>> >> > >
>> >> > > I can see no errors in any log files that appear to suggest an issue. I
>> >> > > have also tried the commands on other cluster members that have not done
>> >> > > anything with RBD before (I was wondering if perhaps the kernel rbd was
>> >> > > pinning the old library version open or something) but the same error
>> >> > > occurs.
>> >> > >
>> >> > > Where can I start trying to resolve this?
>> >> > >
>> >> > > Cheers,
>> >> > >  Adrian
>> >> > >
>> >> > >
>> >> > > [root@ceph-glb-fec-01 ceph]# rbd ls glebe-sata
>> >> > > rbd: list: (5) Input/output error
>> >> > > 2016-06-06 10:41:31.792720 7f53c06a2d80 -1 librbd: error listing image in directory: (5) Input/output error
>> >> > > 2016-06-06 10:41:31.792749 7f53c06a2d80 -1 librbd: error listing v2 images: (5) Input/output error
>> >> > >
>> >> > > [root@ceph-glb-fec-01 ceph]# rbd ls glebe-ssd
>> >> > > rbd: list: (5) Input/output error
>> >> > > 2016-06-06 10:41:33.956648 7f90de663d80 -1 librbd: error listing image in directory: (5) Input/output error
>> >> > > 2016-06-06 10:41:33.956672 7f90de663d80 -1 librbd: error listing v2 images: (5) Input/output error
>> >> > >
>> >> > > [root@ceph-glb-fec-02 ~]# rbd showmapped
>> >> > > id pool       image                 snap device
>> >> > > 0  glebe-sata test02                -    /dev/rbd0
>> >> > > 1  glebe-ssd  zfstest               -    /dev/rbd1
>> >> > > 10 glebe-sata hypervtst-lun00       -    /dev/rbd10
>> >> > > 11 glebe-sata hypervtst-lun02       -    /dev/rbd11
>> >> > > 12 glebe-sata hypervtst-lun03       -    /dev/rbd12
>> >> > > 13 glebe-ssd  nspprd01_lun00        -    /dev/rbd13
>> >> > > 14 glebe-sata cirrux-nfs01          -    /dev/rbd14
>> >> > > 15 glebe-sata hypervtst-lun04       -    /dev/rbd15
>> >> > > 16 glebe-sata hypervtst-lun05       -    /dev/rbd16
>> >> > > 17 glebe-sata pvtcloud-nfs01        -    /dev/rbd17
>> >> > > 18 glebe-sata cloud2sql-lun00       -    /dev/rbd18
>> >> > > 19 glebe-sata cloud2sql-lun01       -    /dev/rbd19
>> >> > > 2  glebe-sata radmast02-lun00       -    /dev/rbd2
>> >> > > 20 glebe-sata cloud2sql-lun02       -    /dev/rbd20
>> >> > > 21 glebe-sata cloud2fs-lun00        -    /dev/rbd21
>> >> > > 22 glebe-sata cloud2fs-lun01        -    /dev/rbd22
>> >> > > 3  glebe-sata radmast02-lun01       -    /dev/rbd3
>> >> > > 4  glebe-sata radmast02-lun02       -    /dev/rbd4
>> >> > > 5  glebe-sata radmast02-lun03       -    /dev/rbd5
>> >> > > 6  glebe-sata radmast02-lun04       -    /dev/rbd6
>> >> > > 7  glebe-ssd  sybase_iquser02_lun00 -    /dev/rbd7
>> >> > > 8  glebe-ssd  sybase_iquser03_lun00 -    /dev/rbd8
>> >> > > 9  glebe-ssd  sybase_iquser04_lun00 -    /dev/rbd9
>> >> > >
>> >> > > [root@ceph-glb-fec-02 ~]# rbd status glebe-sata/hypervtst-lun04
>> >> > > 2016-06-06 10:47:30.221453 7fc0030dc700 -1 librbd::image::OpenRequest: failed to retrieve image id: (5) Input/output error
>> >> > > 2016-06-06 10:47:30.221556 7fc0028db700 -1 librbd::ImageState: failed to open image: (5) Input/output error
>> >> > > rbd: error opening image hypervtst-lun04: (5) Input/output error
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > Jason
>>
>>
>>
>> --
>> Jason



-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



