Re: usage of harddisks: each hdd a brick? raid?

On 10.01.2019 11:26, Serkan Çoban wrote:
We are also using 10TB disks; a heal takes 7-8 days.
You can play with the "cluster.shd-max-threads" setting. The default is
1, I think. I am using it with 4.
Below you can find more info:
https://access.redhat.com/solutions/882233

I'm using oVirt; the setup script sets these values by default:

cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
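
For reference, checking or changing those self-heal daemon options is a
one-liner per option - a sketch, assuming the volume name "shared" from
the volume info further down:

    # show the currently effective values
    gluster volume get shared cluster.shd-max-threads
    gluster volume get shared cluster.shd-wait-qlength
    # raise the number of parallel heals per brick, e.g. to 4
    gluster volume set shared cluster.shd-max-threads 4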

Testing could be quite easy: reset-brick start, then delete and re-create
the partition/filesystem/etc., reset-brick commit force - and then watch
the heal.
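
Roughly this sequence (untested sketch, using the volume name "shared"
and the brick path from the volume info below; adjust host and path to
the brick being replaced):

    gluster volume reset-brick shared gluster13:/gluster/bricksdd1_new/shared start
    # replace the disk, re-create the partition and filesystem, remount the brick
    gluster volume reset-brick shared gluster13:/gluster/bricksdd1_new/shared \
        gluster13:/gluster/bricksdd1_new/shared commit force
    # then watch the self-heal progress
    gluster volume heal shared info summary   # or plain "info" on older releases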

We only have 1 big volume over all bricks. Details:

Volume Name: shared
Type: Distributed-Replicate

Ah, you have a distributed-replicated volume, but I chose a plain replicated one (to keep things simple for a start :)

Brick12: gluster13:/gluster/bricksdd1_new/shared

I hadn't thought about creating more volumes in order to split the data,
e.g. 4 volumes with 3*10TB each, or 2 volumes with 6*10TB each.

Maybe a plain replicated volume heals faster?
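
If we ever tried that split, creating one of the smaller replica-3
volumes would look something like this (sketch only; gluster11 and
gluster12 and the brick paths are hypothetical, repeat per volume for
the other disks):

    gluster volume create shared1 replica 3 \
        gluster11:/gluster/bricksdd1/shared1 \
        gluster12:/gluster/bricksdd1/shared1 \
        gluster13:/gluster/bricksdd1/shared1
    gluster volume start shared1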

Yeah, I'd be interested in how the GlusterFS professionals deal with
faulty disks, especially when they are as big as ours.




_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users



