Re: usage of harddisks: each hdd a brick? raid?


We are also using 10TB disks; a heal takes 7-8 days.
You can play with the "cluster.shd-max-threads" setting. Its default is 1,
I think; I am using 4.
Below you can find more info:
https://access.redhat.com/solutions/882233
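
For reference, this is roughly how we bump it and watch the heal here
(volume name "shared" taken from Hubert's output below - adjust for your own):

  # show the current value (defaults to 1)
  gluster volume get shared cluster.shd-max-threads
  # raise the number of parallel self-heal threads per brick
  gluster volume set shared cluster.shd-max-threads 4
  # check how many entries are still waiting to be healed
  gluster volume heal shared statistics heal-count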

On Thu, Jan 10, 2019 at 9:53 AM Hu Bert <revirii@xxxxxxxxxxxxxx> wrote:
>
> Hi Mike,
>
> > We have a similar setup, and I have not tested restoring...
> > How many volumes do you have - one volume per (*3) 10 TB disk, i.e.
> > 4 volumes?
>
> Testing could be quite easy: reset-brick start, then delete and re-create
> the partition/fs/etc., reset-brick commit force - and then watch.
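>
> A rough sketch of that sequence, using Brick10 from the list below as the
> example (volume/host/path are just from our setup):
>
>   gluster volume reset-brick shared gluster11:/gluster/bricksdd1/shared start
>   # wipe the disk, re-create partition + filesystem, re-mount under the same path
>   gluster volume reset-brick shared gluster11:/gluster/bricksdd1/shared \
>       gluster11:/gluster/bricksdd1/shared commit force
>   # then watch the heal
>   gluster volume heal shared info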
>
> We only have 1 big volume over all bricks. Details:
>
> Volume Name: shared
> Type: Distributed-Replicate
> Number of Bricks: 4 x 3 = 12
> Brick1: gluster11:/gluster/bricksda1/shared
> Brick2: gluster12:/gluster/bricksda1/shared
> Brick3: gluster13:/gluster/bricksda1/shared
> Brick4: gluster11:/gluster/bricksdb1/shared
> Brick5: gluster12:/gluster/bricksdb1/shared
> Brick6: gluster13:/gluster/bricksdb1/shared
> Brick7: gluster11:/gluster/bricksdc1/shared
> Brick8: gluster12:/gluster/bricksdc1/shared
> Brick9: gluster13:/gluster/bricksdc1/shared
> Brick10: gluster11:/gluster/bricksdd1/shared
> Brick11: gluster12:/gluster/bricksdd1_new/shared
> Brick12: gluster13:/gluster/bricksdd1_new/shared
>
> We didn't think about creating more volumes (in order to split the data),
> e.g. 4 volumes with 3*10TB each, or 2 volumes with 6*10TB each.
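>
> (If we ever did split it, one of the smaller volumes would look roughly
> like this - the new brick paths here are made up, not our real ones:)
>
>   gluster volume create shared1 replica 3 \
>       gluster11:/gluster/bricksda1/shared1 \
>       gluster12:/gluster/bricksda1/shared1 \
>       gluster13:/gluster/bricksda1/shared1
>   gluster volume start shared1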
>
> Just curious: after splitting into 2 or more volumes - would that make
> the volumes with the healthy/non-restoring disks more accessible? And
> would only the volume with the once faulty, now restoring disk be in
> a "bad mood"?
>
> > > Any opinions on that? Maybe it would be better to use more servers and
> > > smaller disks, but this isn't possible at the moment.
> > Also interested. We can swap SSDs to HDDs for RAID10, but is it worthless?
>
> Yeah, I would be interested in how the glusterfs professionals deal
> with faulty disks, especially when they are as big as ours.
>
>
> Thx
> Hubert
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


