Re: State of Gluster project

On 2020-06-22 06:58, Hu Bert wrote:
On Sun, 21 June 2020 at 19:43, Gionatan Danti <g.danti@xxxxxxxxxx> wrote:

For the RAID6/10 setup, I found no issues: simply replace the broken
disk without involving Gluster at all. However, this also means facing
the "iops wall" I described earlier for a single-brick node. Going
full-Gluster with JBODs would be interesting from a performance
standpoint, but it complicates eventual recovery from bad disks.

Does anyone use Gluster in JBOD mode? If so, can you share your
experience?
Thanks.

Hi,
we once used gluster with disks in JBOD mode (3 servers, 4x10TB hdd
each, 4 x 3 = 12 bricks), and to make it short: in our special case it
wasn't much fun. Big HDDs, lots of small files, (highly) concurrent
access through our application. It was running quite fine until a disk
failed. The disk reset (reset-brick) took ~30 (!) days, as gluster was
copying/restoring the data while the normal application reads/writes
went on. A couple of days after the first reset had finished, another
disk died, and the fun started again :-) Maybe a bad use case.

Hi Hubert,
this is exactly the scenario that scares me about using JBOD. Maybe for virtual machine disks (i.e. big files) it would be faster, but still...
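
Just for the archives, on current gluster versions that recovery flow maps to the reset-brick sequence. A minimal sketch in Python, assuming a hypothetical replica volume "gv0" and an invented brick path; the long part is step 3, the self-heal that took ~30 days in your case:

#!/usr/bin/env python3
"""Sketch of a JBOD brick replacement via gluster's reset-brick.
Volume name and brick path are hypothetical."""
import subprocess

VOLUME = "gv0"                          # hypothetical volume name
BRICK = "server1:/bricks/disk3/brick"   # hypothetical failed brick

def run(*args):
    # echo the command, then execute the gluster CLI
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# 1. take the dead brick out of service
run("gluster", "volume", "reset-brick", VOLUME, BRICK, "start")

# 2. outside gluster: swap the disk, mkfs and mount it at the same path

# 3. re-add the brick; "commit force" kicks off the self-heal that
#    re-copies every file onto the new disk (the ~30 day step above)
run("gluster", "volume", "reset-brick", VOLUME, BRICK, BRICK,
    "commit", "force")

# 4. watch the heal progress
run("gluster", "volume", "heal", VOLUME, "info")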

Latest setup (for the high I/O part) looks like this: 3 servers, 10
disks with 10TB each per server -> 5 raid1 arrays each, forming a
distribute-replicate volume with 5 bricks per server, 5 x 3 = 15. No
disk has failed so far (fingers crossed), but if a disk fails now,
gluster keeps running with all bricks available, and after replacing
the failed disk there is one raid resync running, affecting only 1/5
of the volume. In theory that should be better ;-) The regularly
running raid checks are no problem so far: of the 15 raid1 arrays only
one check runs at a time, never in parallel.

Ok, so you multiplied the number of bricks by using multiple RAID1 arrays. Good idea, it should work well that way.
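
For completeness, here is a small sketch of how such a 5 x 3 layout is typically assembled; hostnames, brick paths and the volume name "gv0" are invented for illustration. The detail worth noting is that gluster groups consecutive bricks into replica sets, so the brick list must interleave the servers:

#!/usr/bin/env python3
"""Builds the brick list for the 5 x 3 distribute-replicate layout
described above. All names are made up for illustration."""

servers = ["server1", "server2", "server3"]   # hypothetical hostnames
arrays = 5                                    # one brick per raid1 array

# consecutive bricks form a replica set, so interleave the servers:
# replica set 1 = raid1 of all three servers, set 2 = raid2, ...
bricks = [f"{srv}:/bricks/raid{i}/brick"
          for i in range(1, arrays + 1)
          for srv in servers]

print(" ".join(["gluster", "volume", "create", "gv0",
                "replica", "3"] + bricks))
# a failed disk now only degrades one raid1 array, and the resync
# touches just one of the five replica sets (1/5 of the volume)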

disclaimer: JBOD may work better with SSDs/NVMes - untested ;-)

Yeah, I think so!

Regards.


--
Danti Gionatan
Technical Support
Assyoma S.r.l. - www.assyoma.it
email: g.danti@xxxxxxxxxx - info@xxxxxxxxxx
GPG public key ID: FF5F32A8
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


