Re: State of Gluster project


 



On Sun, 21 Jun 2020 at 19:43, Gionatan Danti <g.danti@xxxxxxxxxx> wrote:

> For the RAID6/10 setup, I found no issues: simply replace the broken
> disk without involving Gluster at all. However, this also means facing
> the "iops wall" I described earlier for a single-brick node. Going
> full-Gluster with JBODs would be interesting from a performance
> standpoint, but this complicates eventual recovery from bad disks.
>
> Does anyone use Gluster in JBOD mode? If so, can you share your
> experience?
> Thanks.

Hi,
we once used Gluster with disks in JBOD mode (3 servers, 4x10TB HDDs
each, 4 x 3 = 12 bricks), and to make it short: in our particular case
it wasn't much fun. Big HDDs, lots of small files, and (highly)
concurrent access through our application. It ran quite well until a
disk failed. The reset-disk (brick replacement and heal) then took ~30
(!) days, because Gluster has to copy/restore all the data onto the new
disk while still serving the normal application reads/writes. A couple
of days after the first heal had finished, another disk died, and the
fun started again :-) Maybe just a bad use case.
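
For reference, a rough sketch of what such a brick replacement looks
like (volume name "myvol" and brick paths are only examples, and the
new disk has to be formatted and mounted at the same path first):

    # take the dead brick out of the volume (hypothetical names)
    gluster volume reset-brick myvol server1:/bricks/disk3/brick start
    # ... swap the disk, recreate the filesystem, remount it ...
    gluster volume reset-brick myvol server1:/bricks/disk3/brick \
        server1:/bricks/disk3/brick commit force
    # the self-heal that refills the ~10TB brick is what takes so long
    gluster volume heal myvol info summary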

With this experience, the next setup was: splitting the data into 2
chunks (high I/O, low I/O), 3 servers with 2 RAID10 arrays each (same
type of disk), each array used as a brick, resulting in a replica 3
volume: 1 x 3 = 3. Replacing a failed disk now means a complete RAID
resync, but in terms of I/O impact this is far better than running
reset-disk on an HDD-only brick. Only the regularly scheduled RAID
check was a bit of a performance issue.
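
For illustration, creating such a replica 3 volume with one
RAID10-backed brick per server would look roughly like this (hostnames,
volume name and brick paths are only examples):

    # one brick per server, each backed by a RAID10 array
    gluster volume create lowio replica 3 \
        server1:/bricks/raid10/brick \
        server2:/bricks/raid10/brick \
        server3:/bricks/raid10/brick
    gluster volume start lowio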

The latest setup (for the high-I/O part) looks like this: 3 servers, 10
disks of 10TB each -> 5 RAID1 arrays per server, forming a
distributed-replicate volume with 15 bricks, 5 x 3 = 15. No disk has
failed so far (fingers crossed), but if a disk fails now, Gluster keeps
running with all bricks available, and after replacing the failed disk
there is only one RAID resync running, affecting just 1/5 of the
volume. In theory that should be better ;-) The regularly scheduled
RAID checks are no problem so far: with 15 RAID1 arrays, only one check
runs at a time, never in parallel.
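
Roughly sketched (again with example names only), that layout
corresponds to a create command like this, where every group of three
consecutive bricks (one per server, same RAID1 array) forms a replica
set:

    # 5 RAID1 arrays per server -> 5 replica sets of 3 = 15 bricks
    gluster volume create highio replica 3 \
        server1:/bricks/r1/brick server2:/bricks/r1/brick server3:/bricks/r1/brick \
        server1:/bricks/r2/brick server2:/bricks/r2/brick server3:/bricks/r2/brick \
        server1:/bricks/r3/brick server2:/bricks/r3/brick server3:/bricks/r3/brick \
        server1:/bricks/r4/brick server2:/bricks/r4/brick server3:/bricks/r4/brick \
        server1:/bricks/r5/brick server2:/bricks/r5/brick server3:/bricks/r5/brick
    gluster volume start highio
    # a failed disk then only triggers a resync of one of the 15 RAID1 arrays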

Disclaimer: JBOD may work better with SSDs/NVMes - untested ;-)


Best regards,
Hubert
________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


