Re: ceph and rsync

Given that you are all SSD, I would do exactly what Wido said -
gracefully remove the old OSD and gracefully bring up the OSD on the new
SSD.

Let Ceph do what it's designed to do. The rsync idea looks great on
paper - not sure what issues you will run into in practice.
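
For reference, the graceful swap would look roughly like this (just a
sketch - the OSD id and device are placeholders, and on trusty/hammer the
daemons are managed through upstart):

  ceph osd out 12                  # let Ceph backfill the PGs elsewhere
  # wait until all PGs are active+clean again, then:
  stop ceph-osd id=12
  ceph osd crush remove osd.12
  ceph auth del osd.12
  ceph osd rm 12
  # physically swap the SSD, then bring up a fresh OSD on it:
  ceph-disk prepare --zap-disk /dev/sdX
  ceph-disk activate /dev/sdX1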


On Fri, Dec 16, 2016 at 12:38 PM, Alessandro Brega
<alessandro.brega1@xxxxxxxxx> wrote:
> 2016-12-16 10:19 GMT+01:00 Wido den Hollander <wido@xxxxxxxx>:
>>
>>
>> > On 16 December 2016 at 9:49, Alessandro Brega
>> > <alessandro.brega1@xxxxxxxxx> wrote:
>> >
>> >
>> > 2016-12-16 9:33 GMT+01:00 Wido den Hollander <wido@xxxxxxxx>:
>> >
>> > >
>> > > > On 16 December 2016 at 9:26, Alessandro Brega
>> > > > <alessandro.brega1@xxxxxxxxx> wrote:
>> > > >
>> > > >
>> > > > Hi guys,
>> > > >
>> > > > I'm running a ceph cluster using the 0.94.9-1trusty release on XFS,
>> > > > for RBD only. I'd like to replace some SSDs because they are close to
>> > > > their TBW.
>> > > >
>> > > > I know I can simply shut down the OSD, replace the SSD, restart the
>> > > > OSD, and ceph will take care of the rest. However, I don't want to do
>> > > > it this way, because it leaves my cluster in a degraded state for the
>> > > > duration of the rebalance/backfilling.
>> > > >
>> > > > I'm thinking about this process:
>> > > > 1. keep old OSD running
>> > > > 2. copy all data from the current OSD folder to the new OSD folder
>> > > >    (using rsync)
>> > > > 3. shut down old OSD
>> > > > 4. redo step 2 to pick up the latest changes
>> > > > 5. restart OSD with the new folder
>> > > >
>> > > > Are there any issues with this approach? Do I need any special rsync
>> > > > flags (rsync -avPHAX --delete-during)?
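>> > > >
>> > > > In commands that would be roughly the following (just a sketch - the
>> > > > OSD id, mount point and upstart commands are placeholders for my
>> > > > setup):
>> > > >
>> > > >   # first pass while the old OSD is still running
>> > > >   rsync -avPHAX /var/lib/ceph/osd/ceph-12/ /mnt/new-osd/
>> > > >   # stop the OSD, then a second pass to catch the last changes
>> > > >   stop ceph-osd id=12
>> > > >   rsync -avPHAX --delete-during /var/lib/ceph/osd/ceph-12/ /mnt/new-osd/
>> > > >   # remount the new partition at /var/lib/ceph/osd/ceph-12 and restart
>> > > >   start ceph-osd id=12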
>> > > >
>> > >
>> > > Indeed, X for transferring xattrs, but also make sure that the
>> > > partitions are GPT with the proper GUIDs.
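>> > >
>> > > Something along these lines to check/set the type code on the new data
>> > > partition (a sketch - the device is an example, and the GUID is the
>> > > ceph-disk "OSD data" type code if I remember it correctly):
>> > >
>> > >   sgdisk --info=1 /dev/sdX
>> > >   sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdX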
>> > >
>> > > I would never go for this approach in a running setup. Since it's an
>> > > SSD cluster I wouldn't worry about the rebalance and would just have
>> > > Ceph do the work for you.
>> > >
>> > >
>> > Why not, if it's completely safe? It's much faster (local copy), doesn't
>> > put load on the network (local copy), much safer (2-3 minutes instead of
>> > 1-2 hours of degraded time for a 2TB SSD), and it's really simple (2
>> > rsync commands). Thank you.
>> >
>>
>> I wouldn't say it is completely safe, hence my remark. If you copy, indeed
>> make sure you copy all the xattrs, but also make sure the partition tables
>> match.
>>
>> That way it should work, but it's not a 100% guarantee.
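>>
>> For the xattrs a quick spot check on a few files after the copy can't
>> hurt, e.g. (paths are just examples):
>>
>>   getfattr -d -m - /var/lib/ceph/osd/ceph-12/current/<some object file>
>>   getfattr -d -m - /mnt/new-osd/current/<some object file>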
>>
>
> Ok, thanks! Can a ceph dev confirm? I do not want to lose any data ;)
>
> Alessandro
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


