Re: Performance: lots of small files, hdd, nvme etc.

Hi Diego,

> > Just an observation: is there a performance difference between a sw
> > raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick)
> Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks.

Maybe I was imprecise?

md3 : active raid10 sdh1[7] sde1[4] sda1[0] sdg1[6] sdc1[2] sdd1[3] sdf1[5] sdb1[1] sdi1[8] sdj1[9]
      48831518720 blocks super 1.2 512K chunks 2 near-copies [10/10] [UUUUUUUUUU]

mdadm --detail /dev/md3
/dev/md3:
          Version : 1.2
    Creation Time : Fri Jan 18 08:59:51 2019
       Raid Level : raid10
[...]
   Number   Major   Minor   RaidDevice State
      0       8        1        0      active sync set-A   /dev/sda1
      1       8       17        1      active sync set-B   /dev/sdb1
      2       8       33        2      active sync set-A   /dev/sdc1
      3       8       49        3      active sync set-B   /dev/sdd1
      4       8       65        4      active sync set-A   /dev/sde1
      5       8       81        5      active sync set-B   /dev/sdf1
      9       8      145        6      active sync set-A   /dev/sdj1
      8       8      129        7      active sync set-B   /dev/sdi1
      7       8      113        8      active sync set-A   /dev/sdh1
      6       8       97        9      active sync set-B   /dev/sdg1
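
For reference, a 10-disk near-2 RAID10 like this one would be created
with something along these lines - a sketch only, not the command
originally used (device names and options inferred from the output
above):

mdadm --create /dev/md3 --level=10 --layout=n2 --chunk=512 \
      --raid-devices=10 /dev/sd[a-j]1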


> > with
> > the same disks (10TB hdd)? The heal processes in the 5xraid1 scenario
> > seem faster. Just out of curiosity...
> It should be, since the bricks are smaller. But given you're using a
> replica 3 I don't understand why you're also using RAID1: for each 10TB
> of user-facing capacity you're keeping 60TB of data on disks.
> I'd ditch local RAIDs to double the space available. Unless you
> desperately need the extra read performance.
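
(The arithmetic behind the 60TB: replica 3 keeps three copies across
servers, and the 2-way RAID1 under each brick doubles each of those,
so every byte exists 3 x 2 = 6 times - 10TB of user-facing capacity
thus consumes 60TB of raw disk.)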

Well, a looooong time ago we used 10TB disks as bricks (JBOD) in a
replica 3 setup. Then one of the bricks failed: the volume was ok
(since 2 bricks were left), but after the HDD replacement the
reset-brick produced a very high load/iowait. So the RAID1 or RAID10
is an attempt to avoid the reset-brick in favor of a sw RAID rebuild -
IIRC this can run at a lower priority -> fewer problems on the running
system.
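
For reference, the md rebuild/resync rate can be capped either
system-wide or per array; values are KiB/s per device, and the
numbers below are only examples:

# system-wide limits
echo 1000  > /proc/sys/dev/raid/speed_limit_min
echo 20000 > /proc/sys/dev/raid/speed_limit_max

# or per array, e.g. for md3
echo 20000 > /sys/block/md3/md/sync_speed_max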


Best regards,
Hubert