Re: [ceph-users] v12.2.7 Luminous released

Many thanks from my side as well!

On 18.07.2018 at 03:04, Linh Vu wrote:
> Thanks for all your hard work in putting out the fixes so quickly! :)
> 
> We have a cluster on 12.2.5 with Bluestore and an EC pool, but for CephFS, not RGW. The release notes say RGW is at risk, especially the garbage collection, and the recommendation is to either pause IO or disable RGW garbage collection.
> 
> 
> In our case with CephFS, not RGW, is it a lot less risky to perform the upgrade to 12.2.7 without the need to pause IO? 
> 
> 
> What does pause IO do? Do current sessions just get queued up, and does IO resume normally with no problem after unpausing?

That's my understanding: pause blocks all reads and writes. If the processes accessing CephFS do not have any wallclock-related timeout handlers, they should be fine IMHO.
I'm unsure how NFS Ganesha would cope with a prolonged pause, though.
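
For what it's worth, a minimal sketch of what "pause IO" means on the CLI (these are the standard admin commands, but please double-check against your own cluster before relying on this):

    # sets the pauserd/pausewr flags: all client reads and writes block cluster-wide
    ceph osd pause

    # ... do the critical part of the upgrade ...

    # clears the flags again; blocked client I/O resumes
    ceph osd unpause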
But indeed I have the very same question - we also have a pure CephFS cluster, without RGW, EC-pool-backed, on 12.2.5. Should we pause IO during upgrade? 

I wonder how risky it is to upgrade without pausing I/O.
The release notes in the blog do not state whether a pure CephFS setup is affected.

Cheers,
	Oliver

> 
> 
> If we have to pause IO, is it better to do something like: pause IO, restart OSDs on one node, unpause IO - repeated for all the nodes involved in the EC pool? 
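
If it does come to that, a rough sketch of that per-node sequence could look like the following (osd-node1..osd-node3 are placeholder hostnames, and the health checks are deliberately left manual; adapt to your own topology before using):

    ceph osd set noout              # don't mark restarting OSDs out and trigger rebalancing
    for host in osd-node1 osd-node2 osd-node3; do
        ceph osd pause                                    # block client reads and writes
        ssh "$host" systemctl restart ceph-osd.target     # restart all OSDs on this node
        ceph osd stat                                     # check the OSDs are back up before unpausing
        ceph osd unpause                                  # let client I/O resume
        ceph -s                                           # check PGs return to active+clean before the next node
    done
    ceph osd unset noout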
> 
> 
> Regards,
> 
> Linh
> 
> ------------------------------------------------------------------------
> *From:* ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Sage Weil <sage@xxxxxxxxxxxx>
> *Sent:* Wednesday, 18 July 2018 4:42:41 AM
> *To:* Stefan Kooman
> *Cc:* ceph-announce@xxxxxxxx; ceph-devel@xxxxxxxxxxxxxxx; ceph-maintainers@xxxxxxxx; ceph-users@xxxxxxxx
> *Subject:* Re: [ceph-users] v12.2.7 Luminous released
>  
> On Tue, 17 Jul 2018, Stefan Kooman wrote:
>> Quoting Abhishek Lekshmanan (abhishek@xxxxxxxx):
>>
>> > *NOTE* The v12.2.5 release has a potential data corruption issue with
>> > erasure coded pools. If you ran v12.2.5 with erasure coding, please see
> ^^^^^^^^^^^^^^^^^^^
>> > below.
>>
>> < snip >
>>
>> > Upgrading from v12.2.5 or v12.2.6
>> > ---------------------------------
>> >
>> > If you used v12.2.5 or v12.2.6 in combination with erasure coded
> ^^^^^^^^^^^^^
>> > pools, there is a small risk of corruption under certain workloads.
>> > Specifically, when:
>>
>> < snip >
>>
>> One section mentions Luminous clusters _with_ EC pools specifically, the other
>> section mentions Luminous clusters running 12.2.5.
> 
> I think they both do?
> 
>> I might be misreading this, but to make things clear for current Ceph
>> Luminous 12.2.5 users. Is the following statement correct?
>>
>> If you do _NOT_ use EC in your 12.2.5 cluster (only replicated pools), there is
>> no need to quiesce IO (ceph osd pause).
> 
> Correct.
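
As a quick sanity check before deciding, listing the pools shows whether any of them are erasure coded at all (the output wording here is from memory, so verify locally):

    # EC pools are listed as "erasure", replicated ones as "replicated"
    ceph osd pool ls detail

    # erasure-code profiles that are defined; a "default" profile may exist even if no EC pool uses it
    ceph osd erasure-code-profile ls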
> 
>> http://docs.ceph.com/docs/master/releases/luminous/#upgrading-from-other-versions
>> If your cluster did not run v12.2.5 or v12.2.6 then none of the above
>> issues apply to you and you should upgrade normally.
>>
>> ^^ Above section would indicate all 12.2.5 luminous clusters.
> 
> The intent here is to clarify that any cluster running 12.2.4 or
> older can upgrade without reading carefully. If the cluster
> does/did run 12.2.5 or .6, then read carefully because it may (or may not)
> be affected.
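
A quick way to confirm what the cluster is running right now is "ceph versions"; note it only reports the versions currently running, not what ran in the past:

    # reports the release each running mon/mgr/osd/mds daemon is on
    ceph versions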
> 
> Does that help? Any suggested revisions to the wording in the release
> notes that make it clearer are welcome!
> 
> Thanks-
> sage
> 
> 
>>
>> Please clarify,
>>
>> Thanks,
>>
>> Stefan
>>
>> --
>> | BIT BV http://www.bit.nl/ Kamer van Koophandel 09090351
>> | GPG: 0xD14839C6 +31 318 648 688 / info@xxxxxx
>>
>>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 

