Re: Out-of-date RBD client libraries

CRUSH is what determines where data gets stored, so if you employ newer CRUSH tunables prematurely against older clients that don’t support them, you run the risk of your clients being unable to find or place objects correctly. I don’t know Ceph’s internals well enough to say exactly what would result at a lower level in such a scenario, but clients not knowing where data belongs seems bad enough on its own. I wouldn’t necessarily expect data loss, but potentially a lot of client errors.
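For what it’s worth, you can see which tunables profile a cluster is using, and the minimum client release it implies, before connecting older clients. A quick sketch, assuming admin access to a running cluster:

```shell
# Show the active CRUSH tunables. The output includes a "profile"
# field (e.g. firefly, hammer, jewel) and a "minimum_required_version"
# field naming the oldest client release that can decode them.
ceph osd crush show-tunables
```

If `minimum_required_version` is newer than your oldest client release, those clients may not be able to compute object placement correctly.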

 

From: jdavidlists@xxxxxxxxx [mailto:jdavidlists@xxxxxxxxx] On Behalf Of J David
Sent: Tuesday, October 25, 2016 1:27 PM
To: Steve Taylor <steve.taylor@xxxxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: [ceph-users] Out-of-date RBD client libraries

 

 

On Tue, Oct 25, 2016 at 3:10 PM, Steve Taylor <steve.taylor@xxxxxxxxxxxxxxxx> wrote:

Recently we tested an upgrade from 0.94.7 to 10.2.3 and found exactly the opposite. Upgrading the clients first worked for many operations, but we got "function not implemented" errors when we would try to clone RBD snapshots.

 

Yes, we have seen “function not implemented” in the past as well when connecting new clients to old clusters.

 

you must keep your CRUSH tunables at firefly or hammer until the clients are upgraded.
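If the tunables have already been raised, they can be pinned back to an older profile until all clients are upgraded. A hedged sketch — note that changing tunables causes CRUSH to remap data, so expect a rebalance:

```shell
# Pin the CRUSH tunables to the hammer profile so hammer-era
# librbd/librados clients can still map objects correctly.
ceph osd crush tunables hammer

# Verify the change took effect.
ceph osd crush show-tunables
```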

 

Not that I am proposing to try it, but… or else what?

 

Whatever the “or else!” is, the same would apply, I assume, to connecting old clients to a brand-new jewel cluster which would have been created with jewel tunables in the first place?

 

Thanks!

 


Steve Taylor | Senior Software Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2799 |




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
