Thanks for your reply!
I am using an erasure-coded profile with k=5, m=3:
$ ceph osd erasure-code-profile get profile5by3
crush-device-class=
crush-failure-domain=host
crush-root=default
jerasure-per-chunk-alignment=false
k=5
m=3
plugin=jerasure
technique=reed_sol_van
w=8
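
For reference, the profile and pool were created with roughly the following (the pool name and PG count here are just examples, not our actual values):

$ ceph osd erasure-code-profile set profile5by3 \
      k=5 m=3 \
      crush-failure-domain=host \
      plugin=jerasure technique=reed_sol_van
$ ceph osd pool create ecpool 64 64 erasure profile5by3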
The cluster has 8 nodes with 3 disks each. We are planning to add 2 more disks to each node.
If I understand correctly, I can add 3 disks at once, right? Since the profile has m=3, the pool can tolerate 3 chunks failing at a time.
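
In case it helps, here is roughly the procedure I have in mind for each step (device names below are placeholders, and I am assuming ceph-volume; substitute your own deployment tool):

$ ceph osd pool get ecpool min_size        # how many chunks must stay up for I/O
$ ceph osd set norebalance                 # hold off data movement while OSDs come up
$ ceph-volume lvm create --data /dev/sdd   # repeat for each new disk on this host
$ ceph-volume lvm create --data /dev/sde
$ ceph osd unset norebalance               # let backfill start
$ ceph -s                                  # wait for HEALTH_OK before the next host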
Karun Josy
On Tue, Dec 5, 2017 at 12:06 AM, David Turner <drakonstein@xxxxxxxxx> wrote:
Depending on how well you burn-in/test your new disks, I like to only add 1 failure domain of disks at a time in case you have bad disks that you're adding. If you are confident that your disks aren't likely to fail during the backfilling, then you can go with more. I just added 8 servers (16 OSDs each) to a cluster with 15 servers (16 OSDs each) all at the same time, but we spent 2 weeks testing the hardware before adding the new nodes to the cluster.

If you add 1 failure domain at a time, then any DoA disks in the new nodes will only be able to fail with 1 copy of your data instead of across multiple nodes.

On Mon, Dec 4, 2017 at 12:54 PM Karun Josy <karunjosy1@xxxxxxxxx> wrote:

Hi,

Is it recommended to add OSD disks one by one, or can I add a couple of disks at a time?

Current cluster size is about 4 TB.

Karun
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com