Re: 12.2.6 upgrade

Thanks, we are fully bluestore and have therefore just set osd skip data digest = true.
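
(For reference, a rough sketch of how this can be applied; the injectargs form assumes the running 12.2.7 OSDs accept the option without a restart:)

   # ceph.conf, [osd] section
   osd skip data digest = true

   # or push it into the running OSDs without a restart
   ceph tell osd.* injectargs '--osd_skip_data_digest=true'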

Kind regards,
Glen Baars

-----Original Message-----
From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
Sent: Friday, 20 July 2018 4:08 PM
To: Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re:  12.2.6 upgrade

That's right. But please read the notes carefully to understand if you need to set
   osd skip data digest = true
or
   osd distrust data digest = true
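
(If unsure which backend an OSD is running, a quick sketch using the osd metadata dump; osd.124 is just one of the OSDs from the log below:)

   ceph osd metadata | grep '"osd_objectstore"' | sort | uniq -c

   # or for a single OSD
   ceph osd metadata 124 | grep osd_objectstore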

.. dan

On Fri, Jul 20, 2018 at 10:02 AM Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>
> I saw that on the release notes.
>
> Does that mean that the active+clean+inconsistent PGs will be OK?
>
> Is the data still getting replicated even if inconsistent?
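>
> (For what it's worth, a sketch of checking that an inconsistent PG is still active and fully replicated, using one of the affected PGs as an example:)
>
>    ceph pg 1.275 query | grep -E '"state"|"up"|"acting"'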
>
> Kind regards,
> Glen Baars
>
> -----Original Message-----
> From: Dan van der Ster <dan@xxxxxxxxxxxxxx>
> Sent: Friday, 20 July 2018 3:57 PM
> To: Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re:  12.2.6 upgrade
>
> CRC errors are expected in 12.2.7 if you ran 12.2.6 with bluestore.
> See https://ceph.com/releases/12-2-7-luminous-released/#upgrading-from-v12-2-6
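>
> (A quick way to confirm which versions the daemons across the cluster are actually running; a sketch, assumes a luminous mon:)
>
>    ceph versions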
>
> On Fri, Jul 20, 2018 at 8:30 AM Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > Hello Ceph Users,
> >
> >
> >
> > We have now upgraded all nodes to 12.2.7. We have 90 PGs (~2000 scrub errors) to fix from the time when we ran 12.2.6. It doesn't seem to be affecting production at this time.
> >
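> > (A rough way to track both numbers as the scrubs work through the backlog; assumes the luminous health output format:)
> >
> >    ceph health detail | grep -c 'active+clean+inconsistent'   # inconsistent PG count
> >    ceph health detail | grep 'scrub errors'                   # total scrub error count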
> >
> >
> > Below is the log of a PG repair. What is the best way to correct these errors? Is there any further information required?
> >
> >
> >
> > rados list-inconsistent-obj 1.275 --format=json-pretty
> > {
> >     "epoch": 38481,
> >     "inconsistents": []
> > }
> >
> >
> >
> > Is it odd that it doesn’t list any inconsistents?
> >
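> > (Possibly the scrub results for this PG were already cleared; list-inconsistent-obj only reports the most recent scrub's findings, so a sketch of re-checking after a fresh deep scrub:)
> >
> >    ceph pg deep-scrub 1.275
> >    # once the deep scrub has completed:
> >    rados list-inconsistent-obj 1.275 --format=json-pretty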
> >
> >
> > Ceph.log entries for this PG.
> >
> > 2018-07-20 12:13:28.381903 osd.124 osd.124 10.4.35.36:6810/1865422
> > 81 : cluster [ERR] 1.275 shard 100: soid
> > 1:ae423e16:::rbd_data.37c2374b0dc51.000000000004917b:head
> > data_digest 0x1a131dab != data_digest 0x92f2c4c8 from auth oi
> > 1:ae423e16:::rbd_data.37c2374b0dc51.000000000004917b:head(37917'3148
> > 36 client.1079025.0:24453722 dirty|data_digest|omap_digest s 4194304
> > uv 314836 dd 92f2c4c8 od ffffffff alloc_hint [4194304 4194304 0])
> >
> > 2018-07-20 12:13:28.381907 osd.124 osd.124 10.4.35.36:6810/1865422
> > 82 : cluster [ERR] 1.275 shard 124: soid
> > 1:ae423e16:::rbd_data.37c2374b0dc51.000000000004917b:head
> > data_digest 0x1a131dab != data_digest 0x92f2c4c8 from auth oi
> > 1:ae423e16:::rbd_data.37c2374b0dc51.000000000004917b:head(37917'3148
> > 36 client.1079025.0:24453722 dirty|data_digest|omap_digest s 4194304
> > uv 314836 dd 92f2c4c8 od ffffffff alloc_hint [4194304 4194304 0])
> >
> > 2018-07-20 12:13:28.381909 osd.124 osd.124 10.4.35.36:6810/1865422
> > 83 : cluster [ERR] 1.275 soid
> > 1:ae423e16:::rbd_data.37c2374b0dc51.000000000004917b:head: failed to
> > pick suitable auth object
> >
> > 2018-07-20 12:15:15.310579 osd.124 osd.124 10.4.35.36:6810/1865422
> > 84 : cluster [ERR] 1.275 shard 100: soid
> > 1:ae455519:::rbd_data.3844874b0dc51.00000000000293f2:head
> > data_digest 0xdf907335 != data_digest 0x38400b00 from auth oi
> > 1:ae455519:::rbd_data.3844874b0dc51.00000000000293f2:head(38269'3306
> > 51 client.232404.0:23912666 dirty|data_digest|omap_digest s 4194304
> > uv 307138 dd 38400b00 od ffffffff alloc_hint [4194304 4194304 0])
> >
> > 2018-07-20 12:15:15.310582 osd.124 osd.124 10.4.35.36:6810/1865422
> > 85 : cluster [ERR] 1.275 shard 124: soid
> > 1:ae455519:::rbd_data.3844874b0dc51.00000000000293f2:head
> > data_digest 0xdf907335 != data_digest 0x38400b00 from auth oi
> > 1:ae455519:::rbd_data.3844874b0dc51.00000000000293f2:head(38269'3306
> > 51 client.232404.0:23912666 dirty|data_digest|omap_digest s 4194304
> > uv 307138 dd 38400b00 od ffffffff alloc_hint [4194304 4194304 0])
> >
> > 2018-07-20 12:15:15.310584 osd.124 osd.124 10.4.35.36:6810/1865422
> > 86 : cluster [ERR] 1.275 soid
> > 1:ae455519:::rbd_data.3844874b0dc51.00000000000293f2:head: failed to
> > pick suitable auth object
> >
> > 2018-07-20 12:16:07.518970 osd.124 osd.124 10.4.35.36:6810/1865422
> > 87 : cluster [ERR] 1.275 shard 100: soid
> > 1:ae470eb2:::rbd_data.37c2374b0dc51.0000000000049a4b:head
> > data_digest 0x6555a7c9 != data_digest 0xbad822f from auth oi
> > 1:ae470eb2:::rbd_data.37c2374b0dc51.0000000000049a4b:head(37917'3148
> > 79 client.1079025.0:24564045 dirty|data_digest|omap_digest s 4194304
> > uv 314879 dd bad822f od ffffffff alloc_hint [4194304 4194304 0])
> >
> > 2018-07-20 12:16:07.518975 osd.124 osd.124 10.4.35.36:6810/1865422
> > 88 : cluster [ERR] 1.275 shard 124: soid
> > 1:ae470eb2:::rbd_data.37c2374b0dc51.0000000000049a4b:head
> > data_digest 0x6555a7c9 != data_digest 0xbad822f from auth oi
> > 1:ae470eb2:::rbd_data.37c2374b0dc51.0000000000049a4b:head(37917'3148
> > 79 client.1079025.0:24564045 dirty|data_digest|omap_digest s 4194304
> > uv 314879 dd bad822f od ffffffff alloc_hint [4194304 4194304 0])
> >
> > 2018-07-20 12:16:07.518977 osd.124 osd.124 10.4.35.36:6810/1865422
> > 89 : cluster [ERR] 1.275 soid
> > 1:ae470eb2:::rbd_data.37c2374b0dc51.0000000000049a4b:head: failed to
> > pick suitable auth object
> >
> > 2018-07-20 12:16:29.476778 osd.124 osd.124 10.4.35.36:6810/1865422
> > 90 : cluster [ERR] 1.275 shard 100: soid
> > 1:ae47e410:::rbd_data.37c2374b0dc51.0000000000024b09:head
> > data_digest 0xa394e845 != data_digest 0xd8aa931c from auth oi
> > 1:ae47e410:::rbd_data.37c2374b0dc51.0000000000024b09:head(33683'3022
> > 24 client.1079025.0:22963765 dirty|data_digest|omap_digest s 4194304
> > uv 302224 dd d8aa931c od ffffffff alloc_hint [4194304 4194304 0])
> >
> > 2018-07-20 12:16:29.476783 osd.124 osd.124 10.4.35.36:6810/1865422
> > 91 : cluster [ERR] 1.275 shard 124: soid
> > 1:ae47e410:::rbd_data.37c2374b0dc51.0000000000024b09:head
> > data_digest 0xa394e845 != data_digest 0xd8aa931c from auth oi
> > 1:ae47e410:::rbd_data.37c2374b0dc51.0000000000024b09:head(33683'3022
> > 24 client.1079025.0:22963765 dirty|data_digest|omap_digest s 4194304
> > uv 302224 dd d8aa931c od ffffffff alloc_hint [4194304 4194304 0])
> >
> > 2018-07-20 12:16:29.476787 osd.124 osd.124 10.4.35.36:6810/1865422
> > 92 : cluster [ERR] 1.275 soid
> > 1:ae47e410:::rbd_data.37c2374b0dc51.0000000000024b09:head: failed to
> > pick suitable auth object
> >
> > 2018-07-20 12:19:59.498922 osd.124 osd.124 10.4.35.36:6810/1865422
> > 93 : cluster [ERR] 1.275 shard 100: soid
> > 1:ae4de127:::rbd_data.37c2374b0dc51.000000000002f6a6:head
> > data_digest 0x2008cb1b != data_digest 0x218b7cb4 from auth oi
> > 1:ae4de127:::rbd_data.37c2374b0dc51.000000000002f6a6:head(37426'3067
> > 44 client.1079025.0:23363742 dirty|data_digest|omap_digest s 4194304
> > uv 306744 dd 218b7cb4 od ffffffff alloc_hint [4194304 4194304 0])
> >
> > 2018-07-20 12:19:59.498925 osd.124 osd.124 10.4.35.36:6810/1865422
> > 94 : cluster [ERR] 1.275 shard 124: soid
> > 1:ae4de127:::rbd_data.37c2374b0dc51.000000000002f6a6:head
> > data_digest 0x2008cb1b != data_digest 0x218b7cb4 from auth oi
> > 1:ae4de127:::rbd_data.37c2374b0dc51.000000000002f6a6:head(37426'3067
> > 44 client.1079025.0:23363742 dirty|data_digest|omap_digest s 4194304
> > uv 306744 dd 218b7cb4 od ffffffff alloc_hint [4194304 4194304 0])
> >
> > 2018-07-20 12:19:59.498927 osd.124 osd.124 10.4.35.36:6810/1865422
> > 95 : cluster [ERR] 1.275 soid
> > 1:ae4de127:::rbd_data.37c2374b0dc51.000000000002f6a6:head: failed to
> > pick suitable auth object
> >
> > 2018-07-20 12:20:29.937564 osd.124 osd.124 10.4.35.36:6810/1865422
> > 96 : cluster [ERR] 1.275 shard 100: soid
> > 1:ae4f1dd8:::rbd_data.7695c59bb0bc2.00000000000005bb:head
> > data_digest 0x1b42858b != data_digest 0x69a5f3de from auth oi
> > 1:ae4f1dd8:::rbd_data.7695c59bb0bc2.00000000000005bb:head(38220'3284
> > 63 client.1084539.0:403248048 dirty|data_digest|omap_digest s
> > 4194304 uv 308146 dd 69a5f3de od ffffffff alloc_hint [4194304
> > 4194304 0])
> >
> > 2018-07-20 12:20:29.937568 osd.124 osd.124 10.4.35.36:6810/1865422
> > 97 : cluster [ERR] 1.275 shard 124: soid
> > 1:ae4f1dd8:::rbd_data.7695c59bb0bc2.00000000000005bb:head
> > data_digest 0x1b42858b != data_digest 0x69a5f3de from auth oi
> > 1:ae4f1dd8:::rbd_data.7695c59bb0bc2.00000000000005bb:head(38220'3284
> > 63 client.1084539.0:403248048 dirty|data_digest|omap_digest s
> > 4194304 uv 308146 dd 69a5f3de od ffffffff alloc_hint [4194304
> > 4194304 0])
> >
> > 2018-07-20 12:20:29.937570 osd.124 osd.124 10.4.35.36:6810/1865422
> > 98 : cluster [ERR] 1.275 soid
> > 1:ae4f1dd8:::rbd_data.7695c59bb0bc2.00000000000005bb:head: failed to
> > pick suitable auth object
> >
> > 2018-07-20 12:21:07.463206 osd.124 osd.124 10.4.35.36:6810/1865422
> > 99 : cluster [ERR] 1.275 repair 12 errors, 0 fixed
> >
> >
> >
> > Kind regards,
> >
> > Glen Baars
> >
> >
> >
> > From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On Behalf Of
> > Glen Baars
> > Sent: Wednesday, 18 July 2018 10:33 PM
> > To: ceph-users@xxxxxxxxxxxxxx
> > Subject: 12.2.6 upgrade
> >
> >
> >
> > Hello Ceph Users,
> >
> >
> >
> > We installed 12.2.6 on a single node in the cluster (new node added, 80TB moved).
> >
> > We disabled scrub/deep-scrub once the issues with 12.2.6 were discovered.
> >
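> > (i.e. roughly the cluster-wide scrub flags, as a sketch:)
> >
> >    ceph osd set noscrub
> >    ceph osd set nodeep-scrub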
> >
> >
> > Today we upgraded the one affected node to 12.2.7, set osd skip data digest = true, and re-enabled the scrubs. It's a 500TB all-bluestore cluster.
> >
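> > (Again only a sketch: re-enabling means clearing the same flags, and the digest option can be verified per OSD via the admin socket on that OSD's host:)
> >
> >    ceph osd unset noscrub
> >    ceph osd unset nodeep-scrub
> >    ceph daemon osd.124 config get osd_skip_data_digest   # run on the OSD's host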
> >
> >
> > We are now seeing inconsistent PGs and scrub errors now that scrubbing has resumed.
> >
> >
> >
> > What is the best way forward?
> >
> >
> >
> > Upgrade all nodes to 12.2.7?
> > Remove the 12.2.7 node and rebuild?
> >
> > Kind regards,
> >
> > Glen Baars
> >
> > BackOnline Manager
> >
This e-mail is intended solely for the benefit of the addressee(s) and any other named recipient. It is confidential and may contain legally privileged or confidential information. If you are not the recipient, any use, distribution, disclosure or copying of this e-mail is prohibited. The confidentiality and legal privilege attached to this communication is not waived or lost by reason of the mistaken transmission or delivery to you. If you have received this e-mail in error, please notify us immediately.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



