Re: Is this still accurate?

It has been observed in many deployments that, when multiple disks are
exported (say around 4), performance is much better with 4 glusterfsd
processes, each exporting one disk, than with all the export points in a
single server process and an io-threads count of 4.

Logically the two setups should perform similarly, but in practice they do
not. Hence that advice.
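
For concreteness, here is a rough sketch of the two layouts being compared.
The paths, volume names and exact option spellings below are only my own
illustration (written against the old 1.3-style spec-file syntax), so please
verify them against the release you actually run:

# (a) single glusterfsd: all four disks in one server process,
#     io-threads with thread-count 4 on each export

volume disk1
  type storage/posix
  option directory /export/disk1
end-volume

volume iot1
  type performance/io-threads
  option thread-count 4
  subvolumes disk1
end-volume

# ... disk2/iot2, disk3/iot3, disk4/iot4 defined the same way ...

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.ip.iot1.allow *
  # matching auth lines for iot2..iot4
  subvolumes iot1 iot2 iot3 iot4
end-volume

# (b) four glusterfsd processes: one spec file per disk, e.g. the file
#     for disk1 (disk2..disk4 analogous, each process on its own port)

volume disk1
  type storage/posix
  option directory /export/disk1
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option listen-port 6996
  option auth.ip.disk1.allow *
  subvolumes disk1
end-volume

Both layouts serve the same data; the difference the observation above is
about is that (b) gives each disk its own server process.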

On Wed, Apr 30, 2008 at 9:36 PM, Brandon Lamb <brandonlamb@xxxxxxxxx> wrote:

> From the best practices section of the wiki:
> One process for one export works best with the current codebase.
> (Sharing a namespace export in the same server is not a problem.)
>
> And from the mailing-list FAQ, first question:
> Q > So the question is, what makes this different from running two
> instances on different ports? What kind of check is being done in
> glusterfsd and what are the reasons that this should be prevented?
>
> A > You can run multiple glusterfsd instances on the same machine. The
> PID file is only a mechanism that helps you write init.d scripts
> neatly. Things you need to take care of:
> 1. If you are specifying a pidfile, do not specify the same path for
> multiple daemons.
> 2. Do not export the same backend directory with different glusterfsd
> daemons.
> 3. Take care that the daemons do not try to bind to the same TCP port,
> etc.
> Apart from that, the two daemons are oblivious to each other and they
> can have the same volume names defined as well.
> Make sure the client and server run the same version of GlusterFS.
>
> -------------------
>
> So what does this mean and is it current? If someone needs to export
> more than one directory, should they run multiple glusterfsd processes
> and configs, or should they configure multiple export bricks and use
> only a single glusterfsd process?
>
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxx
> http://lists.nongnu.org/mailman/listinfo/gluster-devel
>
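
To make the precautions in the quoted FAQ answer concrete, starting two
independent daemons could look roughly like this; the file names, paths and
exact flag spellings are my own illustration, so check glusterfsd --help on
your build:

# each daemon gets its own spec file and its own pidfile; the spec files
# must point at different backend directories and different listen-ports
glusterfsd -f /etc/glusterfs/export-disk1.vol --pidfile /var/run/glusterfsd-disk1.pid
glusterfsd -f /etc/glusterfs/export-disk2.vol --pidfile /var/run/glusterfsd-disk2.pid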



-- 
Amar Tumballi
Gluster/GlusterFS Hacker
[bulde on #gluster/irc.gnu.org]
http://www.zresearch.com - Commoditizing Super Storage!

