> We are evaluating Dell DSS7000 chassis with 90 disks.
> Has anyone used that much brick per server?
> Any suggestions, advice?

90 disks per server is a lot. In particular, it might be out of balance with the other characteristics of the machine: number of cores, amount of memory, network bandwidth, or even bus bandwidth. Most people who put that many disks in a server use some sort of RAID (hardware or software) to combine them into a smaller number of physical volumes, on top of which filesystems and such can be built (a rough sketch is in the P.S. below). If you can't do that, or don't want to, you're in poorly explored territory.

My suggestion would be to try running with 90 bricks. It might work fine, or you might run into various kinds of contention:

(1) Excessive context switching would indicate not enough CPU.
(2) Excessive page faults would indicate not enough memory.
(3) Maxed-out network ports . . . well, you can figure that one out. ;)

If (2) applies, you might want to try brick multiplexing. This is a new feature in 3.10 which can reduce memory consumption by more than 2x in many cases, by putting multiple bricks into a single process instead of one process per brick. It also drastically reduces the number of ports you'll need, since the single process needs only one port in total instead of one per brick. In terms of CPU usage or performance, the gains are far more modest; work in that area is still ongoing, as is work on multiplexing in general. If you want to help us get it all right, you can enable multiplexing like this:

    gluster volume set all cluster.brick-multiplex on

If multiplexing doesn't help for you, speak up and maybe we can make it better, or perhaps come up with other things to try.

Good luck!

P.S. A few rough command sketches, in case they help:
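If you go the software-RAID route, mdadm can fold groups of disks into a handful of arrays, each carrying one brick. This is only a sketch: the device names (/dev/sdb and friends) and the ten-disks-per-array grouping are assumptions, not recommendations for your hardware.

    # Hypothetical layout: fold ten disks into one RAID-6 array
    mdadm --create /dev/md0 --level=6 --raid-devices=10 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
        /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk

    # One filesystem and mount point per array, used as one brick
    mkfs.xfs -i size=512 /dev/md0
    mkdir -p /bricks/brick0
    mount /dev/md0 /bricks/brick0

Repeat for the remaining disks and you end up with nine bricks per server instead of 90.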
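If you do try one brick per disk, creating the volume is just a matter of listing all 90 brick paths. A sketch for a plain distribute volume; the node name and mount paths are placeholders:

    # Hypothetical: one brick per disk, paths assume /bricks/diskN mounts
    gluster volume create testvol \
        $(for i in $(seq 1 90); do echo node1:/bricks/disk$i/brick; done)
    gluster volume start testvol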
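To spot the kinds of contention listed above, the stock Linux tools are enough (pidstat and sar come from the sysstat package):

    # (1) Context switches per second (the "cs" column) -> CPU pressure
    vmstat 1

    # (2) Page faults per process (minflt/s, majflt/s) -> memory pressure
    pidstat -r 1

    # (3) Per-interface throughput (rxkB/s, txkB/s) -> network saturation
    sar -n DEV 1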
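And once multiplexing is enabled, one way to sanity-check that it took effect; as far as I know, multiplexed bricks on a node share a single glusterfsd:

    # With multiplexing on, bricks on the same node should report the
    # same PID and port here, instead of one process/port per brick
    gluster volume status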