Re: v12.2.7 Luminous released


 



2018-07-18 3:04 GMT+02:00 Linh Vu <vul@xxxxxxxxxxxxxx>:

Thanks for all your hard work in putting out the fixes so quickly! :)

We have a cluster on 12.2.5 with Bluestore and an EC pool, but for CephFS, not RGW. The release notes say RGW is at risk, especially the garbage collection, and recommend either pausing IO or disabling RGW garbage collection.


In our case with CephFS rather than RGW, is it much less risky to perform the upgrade to 12.2.7 without pausing IO?


I have the same question, but for a 12.2.5 EC cluster doing only RBD. Am I still affected, or does this apply only to RGW workloads?

Furthermore, after upgrading the packages to 12.2.7, I presume the mons/mgrs still need to be restarted first?

Kind regards,
Caspar
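
[Editor's note: a minimal sketch of the usual Luminous point-release restart order asked about above (mons, then mgrs, then OSDs). It assumes the systemd unit names of a standard Ceph package install; adjust for your distribution and deployment tool.]

```shell
# Hedged sketch of the usual Luminous point-release order,
# run after the 12.2.7 packages are installed on each node.

# 1. Monitors first, one node at a time:
systemctl restart ceph-mon.target
ceph -s                     # wait for quorum / HEALTH_OK before the next mon

# 2. Then managers:
systemctl restart ceph-mgr.target

# 3. Then OSDs, one node (or failure domain) at a time:
ceph osd set noout          # avoid rebalancing during the restarts
systemctl restart ceph-osd.target
ceph -s                     # wait for all PGs active+clean before the next node
ceph osd unset noout

# 4. Finally, confirm every daemon reports 12.2.7:
ceph versions
```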

What does pausing IO do? Do current sessions just get queued up, with IO resuming normally after unpausing?


If we have to pause IO, is it better to do something like: pause IO, restart the OSDs on one node, unpause IO, repeated for all the nodes involved in the EC pool?


Regards,

Linh
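
[Editor's note: the per-node pause/restart cycle proposed above could be sketched as follows. `ceph osd pause` sets the pauserd/pausewr flags, which block all client reads and writes cluster-wide; in-flight client ops are held, not failed, and resume after unpause. This is an illustrative sketch, not a procedure endorsed in the release notes.]

```shell
# Repeat this block once per node hosting OSDs for the EC pool.

ceph osd pause                      # quiesce all client IO (pauserd + pausewr)
systemctl restart ceph-osd.target   # restart the OSDs on this node
ceph -s                             # wait until PGs are back to active+clean
ceph osd unpause                    # let client IO flow again
```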


From: ceph-users <ceph-users-bounces@lists.ceph.com> on behalf of Sage Weil <sage@xxxxxxxxxxxx>
Sent: Wednesday, 18 July 2018 4:42:41 AM
To: Stefan Kooman
Cc: ceph-announce@xxxxxxxx; ceph-devel@xxxxxxxxxxxxxxx; ceph-maintainers@xxxxxxxx; ceph-users@xxxxxxxx
Subject: Re: v12.2.7 Luminous released
 
On Tue, 17 Jul 2018, Stefan Kooman wrote:
> Quoting Abhishek Lekshmanan (abhishek@xxxxxxxx):
>
> > *NOTE* The v12.2.5 release has a potential data corruption issue with
> > erasure coded pools. If you ran v12.2.5 with erasure coding, please see
^^^^^^^^^^^^^^^^^^^
> > below.
>
> < snip >
>
> > Upgrading from v12.2.5 or v12.2.6
> > ---------------------------------
> >
> > If you used v12.2.5 or v12.2.6 in combination with erasure coded
^^^^^^^^^^^^^
> > pools, there is a small risk of corruption under certain workloads.
> > Specifically, when:
>
> < snip >
>
> One section mentions Luminous clusters _with_ EC pools specifically, the other
> section mentions Luminous clusters running 12.2.5.

I think they both do?

> I might be misreading this, but to make things clear for current Ceph
> Luminous 12.2.5 users. Is the following statement correct?
>
> If you do _NOT_ use EC in your 12.2.5 cluster (only replicated pools), there is
> no need to quiesce IO (ceph osd pause).

Correct.

> http://docs.ceph.com/docs/master/releases/luminous/#upgrading-from-other-versions
> If your cluster did not run v12.2.5 or v12.2.6 then none of the above
> issues apply to you and you should upgrade normally.
>
> ^^ Above section would indicate all 12.2.5 luminous clusters.

The intent here is to clarify that any cluster running 12.2.4 or
older can upgrade without reading carefully. If the cluster
does/did run 12.2.5 or .6, then read carefully because it may (or may not)
be affected.

Does that help? Any suggested revisions to the wording in the release
notes that make it clearer are welcome!

Thanks-
sage


>
> Please clarify,
>
> Thanks,
>
> Stefan
>
> --
> | BIT BV http://www.bit.nl/ Kamer van Koophandel 09090351
> | GPG: 0xD14839C6 +31 318 648 688 / info@xxxxxx
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

