Re: 90 Brick/Server suggestions?


 



There may be some helpful information in this article:

http://45drives.blogspot.ca/2016/11/an-introduction-to-clustering-how-to.html

Disclaimer: I don't work for 45drives, I'm just a satisfied customer.

Good luck, and please let us know how this works out for you.

regards,
tp


On Fri, 17 Feb 2017, Serkan Çoban wrote:

> We have 12 on order.  Actually the DSS7000 has two nodes in the chassis,
> and each accesses 45 bricks.  We will be using an erasure code scheme,
> probably 24:3 or 24:4; we have not sat down and really thought about the
> exact scheme we will use.

If we cannot get a 1 node/90 disk configuration, we can also get it as
2 nodes/45 disks each.

Be careful about EC. I am using 16+4 in production; the only drawback is
slow rebuild times. It takes 10 days to rebuild an 8TB disk. Although
parallel heal for EC improves this in 3.9, don't forget to test rebuild
times for different EC configurations.
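For a rough sense of scale, the figure quoted above (8TB in ~10 days) implies a heal rate of under 10 MB/s. A small back-of-the-envelope sketch (my own arithmetic from those numbers, not a benchmark) for extrapolating to other disk sizes:

```python
# Back-of-the-envelope rebuild-time estimate for an EC brick.
# The heal throughput is inferred from the figure in the thread
# (8 TB healed in ~10 days); treat it as an assumption, not a measurement.

SECONDS_PER_DAY = 86400

def heal_throughput_mb_s(disk_tb, days):
    """Effective heal rate (MB/s) implied by a rebuild that took `days` days."""
    return disk_tb * 1e6 / (days * SECONDS_PER_DAY)  # TB -> MB

def rebuild_days(disk_tb, throughput_mb_s):
    """Days needed to heal one disk at a given effective throughput."""
    return disk_tb * 1e6 / throughput_mb_s / SECONDS_PER_DAY

implied = heal_throughput_mb_s(8, days=10)
print(f"implied heal rate: {implied:.1f} MB/s")                      # ~9.3 MB/s
print(f"10 TB disk at same rate: {rebuild_days(10, implied):.1f} days")  # ~12.5 days
```

The point of testing different EC configurations is that wider stripes change how much data each surviving brick must read during a heal, so the effective rate above is specific to one scheme and workload.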

> 90 disks per server is a lot.  In particular, it might be out of balance
> with other characteristics of the machine - number of cores, amount of
> memory, network or even bus bandwidth.

The nodes will be pretty powerful: 2x18-core CPUs with 256GB RAM and
2x10Gb bonded ethernet. The cluster will be used for archive purposes,
so I don't need more than 1GB/s per node.
RAID is not an option; JBOD with EC will be used.
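Those numbers look consistent: a quick sanity check (my own arithmetic from the figures in the thread; the 24+4 scheme is just one of the options mentioned above) shows the bonded pair has headroom even after EC write amplification:

```python
# Sanity-check the network budget for an archive node.
# Figures from the thread: 2 x 10 GbE bonded, ~1 GB/s target per node.
# The 24+4 EC parameters are an assumption; substitute the final scheme.

def bonded_gbytes_per_s(links, gbits_per_link):
    """Raw bandwidth of a bonded set in GB/s (ignoring protocol overhead)."""
    return links * gbits_per_link / 8

def ec_write_amplification(k, m):
    """Bytes written to bricks per byte of client data under k+m EC."""
    return (k + m) / k

raw = bonded_gbytes_per_s(2, 10)       # 2.5 GB/s raw
amp = ec_write_amplification(24, 4)    # ~1.17x
print(f"raw link budget: {raw:.2f} GB/s")
print(f"1 GB/s of client writes -> {amp:.2f} GB/s on the wire")
```

Heal traffic competes for the same links, so the margin between ~1.2 GB/s of EC writes and the 2.5 GB/s raw budget is what funds rebuilds.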

>   gluster volume set all cluster.brick-multiplex on
I just read the 3.10 release notes and saw this. I think this is a good
solution. I plan to use 3.10.x and will probably test multiplexing and
get in touch for help.

Thanks for the suggestions,
Serkan


On Fri, Feb 17, 2017 at 1:39 AM, Jeff Darcy <jdarcy@xxxxxxxxxx> wrote:
> We are evaluating the Dell DSS7000 chassis with 90 disks.
> Has anyone used that many bricks per server?
> Any suggestions or advice?

90 disks per server is a lot.  In particular, it might be out of balance with other characteristics of the machine - number of cores, amount of memory, network or even bus bandwidth.  Most people who put that many disks in a server use some sort of RAID (HW or SW) to combine them into a smaller number of physical volumes on top of which filesystems and such can be built.  If you can't do that, or don't want to, you're in poorly explored territory.

My suggestion would be to try running as 90 bricks.  It might work fine, or you might run into various kinds of contention:

(1) Excessive context switching would indicate not enough CPU.

(2) Excessive page faults would indicate not enough memory.

(3) Maxed-out network ports . . . well, you can figure that one out.  ;)
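One way to watch for the first two signs on a Linux box is to sample the kernel's system-wide counters: context switches from /proc/stat ("ctxt") and page faults from /proc/vmstat ("pgfault"). A minimal sketch (a generic /proc reader, not a Gluster tool):

```python
# Sample system-wide context-switch and page-fault rates from /proc.
# Linux-only; vmstat(8) and sar(1) report the same counters with more polish.
import time

def read_counter(path, key):
    """Return the integer value of `key` from a space-separated /proc file."""
    with open(path) as f:
        for line in f:
            name, _, value = line.partition(" ")
            if name == key:
                return int(value)
    raise KeyError(key)

def sample_rates(interval=1.0):
    """Per-second context-switch and page-fault rates over `interval` seconds."""
    ctxt0 = read_counter("/proc/stat", "ctxt")
    flt0 = read_counter("/proc/vmstat", "pgfault")
    time.sleep(interval)
    ctxt1 = read_counter("/proc/stat", "ctxt")
    flt1 = read_counter("/proc/vmstat", "pgfault")
    return (ctxt1 - ctxt0) / interval, (flt1 - flt0) / interval

if __name__ == "__main__":
    cs, pf = sample_rates()
    print(f"context switches/s: {cs:.0f}, page faults/s: {pf:.0f}")
```

Baseline these rates on an idle node first; "excessive" only means anything relative to that baseline under load.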

If (2) applies, you might want to try brick multiplexing.  This is a new feature in 3.10, which can reduce memory consumption by more than 2x in many cases by putting multiple bricks into a single process (instead of one per brick).  This also drastically reduces the number of ports you'll need, since the single process only needs one port total instead of one per brick.  In terms of CPU usage or performance, gains are far more modest.  Work in that area is still ongoing, as is work on multiplexing in general.  If you want to help us get it all right, you can enable multiplexing like this:

  gluster volume set all cluster.brick-multiplex on

If multiplexing doesn't help for you, speak up and maybe we can make it better, or perhaps come up with other things to try.  Good luck!
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users




