Re: Rebuild OSDs

There are two options that can reduce the load on your cluster from rebuilding so much data. If you don't have many OSDs, it is better to mark the ZFS OSD out first and let the cluster return to health, then remove the OSD and rebuild it.
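A rough sketch of the commands for that first approach (osd.0 is just an example id; substitute your ZFS-backed OSD):

    # Mark the OSD out so its PGs are re-replicated onto the other OSDs
    ceph osd out 0

    # Watch recovery and wait for HEALTH_OK before touching the disk
    ceph -w          # or check periodically with: ceph health

    # Once healthy, stop the daemon, then remove and rebuild the OSD on XFS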

If you have a lot of OSDs, then the CRUSH changes can cause a lot of data movement, and you may be better off telling Ceph to chill out until the disk is replaced. To do this, set nobackfill and norecover, then remove the OSD and add it back in, then unset nobackfill and norecover. This prevents data from moving around while the OSD is removed and re-added, so the only data movement is the backfill onto the newly formatted OSD. Be aware that your cluster could be susceptible to another disk failure stopping IO if the failing disk happens to hold a PG from the OSD you are rebuilding. Having many OSDs reduces that risk, but you will want to finish quickly regardless. If you are really worried about a disk failure, you can set min_size to 1 during the operation, but that has its own risks as well.
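Roughly, the flag handling for that second approach would look like this (the pool name is a placeholder, and the min_size change is optional and risky as noted above):

    # Tell Ceph not to backfill or recover while the OSD is swapped out
    ceph osd set nobackfill
    ceph osd set norecover

    # ... remove osd.0, reformat the disk with XFS, re-add it here ...

    # Re-enable recovery; backfill onto the rebuilt OSD starts now
    ceph osd unset nobackfill
    ceph osd unset norecover

    # Optional and risky: allow IO with a single replica during the window
    ceph osd pool set <pool> min_size 1    # remember to set it back afterwards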

The first option (marking the OSD out) will take more time, but there is little chance of any problems. It will, however, cause more disk activity over a longer period, which can impact performance if you have not adjusted osd_max_backfills and the other related recovery options.
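If you do go that slower route, something like this can soften the impact on client IO (the values shown are just conservative examples):

    # Throttle backfill/recovery concurrency on all OSDs
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'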

Robert LeBlanc

On Sat, Nov 29, 2014 at 3:29 PM, Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx> wrote:
I have 2 OSDs on two nodes, on top of ZFS, that I'd like to rebuild in a more
standard (XFS) setup.

Would the following be a non-destructive, if somewhat tedious, way of doing so?

Following the instructions from here (a rough command sketch follows the steps below):

  http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual

1. Remove osd.0
2. Recreate osd.0
3. Add osd.0
4. Wait for health to be restored
    i.e. all data is copied from osd.1 to osd.0

5. Remove osd.1
6. Recreate osd.1
7. Add osd.1
8. Wait for health to be restored
    i.e. all data is copied from osd.0 to osd.1

9. Profit!
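Per the doc page above, the remove/recreate part of each step would look roughly like this (osd.0 shown; the node name, device name and deployment tool are placeholders, so adjust for your setup):

    ceph osd out 0
    # stop the daemon on its node, e.g. with sysvinit:
    #   sudo service ceph stop osd.0
    ceph osd crush remove osd.0
    ceph auth del osd.0
    ceph osd rm 0

    # Recreate it on XFS, e.g. with ceph-deploy (prepare + activate):
    ceph-deploy osd create node1:/dev/sdX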


There's 1TB of data in total. I can do this after hours while the system and
network are not being used.

I do have complete backups in case it all goes pear shaped.

thanks,
--
Lindsay

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
